Misidentifying talent (danluu.com)
230 points by akaij on Feb 22, 2022 | hide | past | favorite | 161 comments



Funniest thing is that the companies with the most extreme leetcode interviews, Facebook, Roblox, etc., also fell the hardest in the most recent sell-off. There’s even a quote by Keith Rabois, one of the top VCs, that FB hired and built their culture around optimization, which has failed. They can’t innovate because innovative type people weren’t let into the company.


It makes sense.

How many creative and truly innovative thinking people do you know that will spend hours studying for and partaking in such a ridiculous interview process? It’s easily one of the top turn-offs for me when it comes to FAANG companies. Not because it’s difficult, but because it’s utterly boring and useless.

It seems these companies are far more interested in hiring very intelligent and analytical type people, but ones with no sense of outside-the-box thinking. Which in theory makes sense at scale, but in tech your head start will eventually fade, and those same people you’ve hired likely can’t bring you back.


Here’s what’s funny: Amazon and Microsoft are less about leetcode. Google has very hard questions, but they are somewhat novel. Facebook has very tough questions, but they are basically straight from the book; people getting hired there just need to memorize.

Smaller growth stock companies like Coinbase or DoorDash tend towards the Facebook style of interviews.

If you look at stock prices, there’s a near-direct correlation between difficulty of leetcode and stock price.

But that may be because smaller growth-oriented companies just need fast memorized results on the job, or it’s just discriminatory.


My takeaway has been that the fastest growing (especially consumer-oriented) tech companies put up the highest bars to hiring, simply because they'll still get enough candidates to end up with enough people. IBM can't be as picky as Roblox.


Does that mean one would find themselves in an environment surrounded by bright innovative minds should they go for IBM?


I am not endorsing the innovation idea here. I'm just saying that a well known company that's expected to see a big jump in value in the short term will have too many applicants to reasonably sort through. They can put up arbitrary bars, and arbitrarily high ones, and still manage to hire. Coinbase could probably disqualify everyone born on Mondays and Tuesdays without seriously impacting their ability to hire.


IBM might actually find some. I doubt they could keep them.

I have met a bunch of great ex-IBM people. But they measured their stays in months.


On the other hand, I know very smart people who have been at IBM for decades. It's a big company.


No, because Facebook and IBM aren't the same in terms of culture. My impression of IBM is that you need to file reports in triplicate every time you use a paperclip.

I have worked with and met people from several smaller, freewheeling startups who were either ex-FB, or turned down an FB offer. In all cases the smaller company's initial offer was high, but still lower than FB's; the smaller company also had a more chilled-out culture where promos didn't mean the dog-and-pony show they reportedly are at FB (I wonder if FB is, ironically, more bureaucratic than IBM in this regard?).


Maybe look for the type of company you'd invest a lot of money in, where your skillset is considered to be "core business".


That's a textbook "correlation is not causation" observation.

How plausible is it that if you take any startup and populate it with leetcode winners, the outcome will be a high stock price?


>How many creative and truly innovative thinking people do you know that will spend hours studying for and partaking in such a ridiculous interview process‽

I may be posting this defensively, since I practice leetcode to prepare for tech interviews. But I disagree with the assertion that creative and innovative people won't put up with process and procedure. Pretty much everyone has to do grunt work sometimes, and it's especially important to "prove your worth" when first meeting a new person or company. The leetcode problems are far from useless, speaking as someone who has given more than my fair share of different types of interview problems. And furthermore, the fact that you're willing to put in hours on a skill you don't particularly care about carries a valuable signal to employers.


Intelligence is likely when you're knowledgeable, but it is not a given.


Why are we assuming that short term changes in share price have anything to do with underlying fundamentals?

Meta prints money. They brought in $40B in net income off of $115B in revenue last year. Income attributable to Facebook + WhatsApp + Instagram doubled from 2019 to 2021. They spend more on research and development than they do on expenses attributable to their core business.

Yes, they're no longer growing. That was going to happen eventually. Yes, they're going to be targeted by extensive regulatory action. Yes, they're losing space - and users - to competitors.

It's going to take a lot to kill them off. The social media scene changes fast and they're going into decline, but it doesn't change the fact that Meta is an extremely large advertising company with a lot of money and talent. This isn't MySpace 2.0.

A $380B market valuation was likely unreasonable. A $200B market valuation for a company that earns $40B a year and is used by more than a third of the global population sounds very reasonable.


> Why are we assuming that short term changes in share price

I don't think the parent is making a strong claim in this direction; it's more that Facebook/Meta has very few new innovations and prefers to acquire them.

Other companies are also quite profitable or have strong market caps but are also not innovating and prefer to acquire new innovations (Oracle being a prime example).


> it's more that Facebook/Meta has very few new innovations and prefers to acquire them.

This isn't unique to Facebook/Meta though: plenty of companies switch more toward the acquisition of innovation than the creation of it. Microsoft was like this for a long time. Apple, Google, and Amazon have also made their fair share of acquisitions over the years.

Introducing new products/value propositions is hard, even in adjacent markets, and most will fail. It's often easier to acquire something instead once you reach the point where you can afford to do so.


I read the comment as being critical of Facebook. "Not being able to innovate" seems like a critical statement, indicating a failure on their part. However, they are an incredibly successful company. If Facebook is successful not innovating, why would they want to hire innovators? Their hiring process should be specifically geared towards hiring the people that will execute the way Facebook wants to execute. Given that, they seem to have a successful hiring process.


> If Facebook is successful not innovating, why would they want to hire innovators?

There's a larger question that I'm still trying to work out.

Meta has something like 71K employees at this point and spends over $21B a year in research and development.

They don't give head counts for how many personnel are performing R&D work, but if you assume that most of their R&D costs go to employee salaries and that the mean salary at Meta is around $500K, it looks like at least half of their workforce is engaged in R&D.
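Spelling that back-of-envelope estimate out (all inputs here are the commenter's rough figures and assumptions, not Meta's reported numbers):

```python
# The commenter's back-of-envelope arithmetic, made explicit.
# All figures are the rough assumptions from the comment above.
rd_spend = 21e9        # annual R&D spend, USD (rounded)
cost_per_head = 500e3  # assumed mean cost per employee, USD
headcount = 71_000     # approximate total employees

rd_heads = rd_spend / cost_per_head
print(rd_heads)              # implied R&D headcount: 42000.0
print(rd_heads / headcount)  # ~0.59, i.e. over half the workforce
```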

What are they getting for this investment? Instagram, WhatsApp, and Oculus were all acquisitions. Facebook Messenger came out in 2008.

I get that some of them are working to optimize their advertising revenue stream. Does this require 20-30K+ highly qualified engineers and support FTEs?

Something else is going on. My guess is that it's a form of Parkinson's Law, only with IT engineers instead of British civil servants.


> I've been dabbling in AI recently, and I keep seeing FB's name appear all over the place. They were associated with some work that wrote human readable instructions on how to cook something, given only a list of ingredients. I have no idea why they are interested in cooking, but their logo was watermarked on every page of the deck. I've also read how they are building a really big and fast computer cluster to do ML. I don't know what they are doing on it, but they're probably doing some R and some D. So, yeah, I have FB on my list of innovators.

Don't get me wrong. There's a lot of innovation going on at Facebook. That's guaranteed to happen when you have as much talent as they do.

Why isn't it translating over to their core business? Why do they have to continue to acquire companies (e.g., Oculus) to have potential areas for growth?

It doesn't make any sense. They would have significantly more success if they simply took half of their engineers, organized them into teams of 3-4 people, and gave each team $1-2M in startup capital, VC style. Let's say they can create 2.5K teams. This would cost them $2.5B-$5B, or about one tenth of what they spent last year on stock repurchases.


I've been dabbling in AI recently, and I keep seeing FB's name appear all over the place. They were associated with some work that wrote human readable instructions on how to cook something, given only a list of ingredients. I have no idea why they are interested in cooking, but their logo was watermarked on every page of the deck. I've also read how they are building a really big and fast computer cluster to do ML. I don't know what they are doing on it, but they're probably doing some R and some D.

So, yeah, I have FB on my list of innovators.


This isn't relevant to the question you raise - but as an accounting matter - plenty of "keep the lights on" type engineering work is almost certainly identified as R&D in those figures. I suspect a much smaller amount is going to what most people would consider more transformative innovation or basic/applied research.


This seems like the perennial HN "I could build this app with 5 engineers" comment. Every visible FB feature (NewsFeed ranking, ads, Marketplace, ...) has a team, as well as less visible ones like Infra and Trust & Safety. Multiply that by all the different apps you've mentioned, and add people working on the new Metaverse, and 20-30K doesn't seem unreasonable.


>>If Facebook is successful not innovating, why would they want to hire innovators?

These companies have one cash cow business, and they want to hire the best people they can to keep the lights on. These people tend to go around selling their skills to other places who want the same.

That's pretty much all there is to it. Innovation isn't even the goal of middle management in any large people operation. If anything it's the opposite: nobody wants disruption in the established order of things.


> They can’t innovate because innovative type people weren’t let into the company.

I am starting to think this is a broader trend in human societies and is probably the primary failure mode of human enterprise more broadly.

People who aren't inclined to creative rational problem solving must still produce labors for the masters if they do not want to starve in the street like the examples we leave there to suffer publicly as a reminder of the cost of disobedience.

So, they obfuscate and manipulate the perception of competence and disenfranchise the creative rationalists to survive in the forced competition for artificially scarce basic necessities.

This evolves incrementally until there is no remaining economic activity except theft and fraud, the creative rationalists evacuate to the frontier, everyone else follows as things start sucking less over there, and the cycle begins anew.


In two words, rent seeking.


There was an academic, Mancur Olson, who argued that a lot of human ills, be it capitalism, race riots, instability in the Byzantine Empire (and feudalism in general), et al., were mostly due to rent-seeking behavior.


They also face denial of service attacks in the form of all sorts of nitpicks of presentation and ofc all funding sources have strings attached.


Well said. This is American society in a nutshell.


Are developers really counted on for innovation anyway? We seem like implementation units.

Someone else makes decisions and creates tickets for devs to work on.


Sounds like you're working at a more traditional company then: https://blog.pragmaticengineer.com/what-silicon-valley-gets-...

> In "traditional" companies, developers get work items assigned to them - most often JIRA tickets. These tickets are vetted by the product - or project - manager, and they have most key details to do the work. And they're expected to do just that. There's little need for questions unless it's about clarifying a detail in the ticket.

> Join a "SV-like" company, and you'll see little of this. There are projects, and there are program managers and engineering managers. But for the most part, engineers are expected (and encouraged!) to figure out the "how" of the work, including making larger decisions. In some places, each project would have an engineer leading it, who facilitates breaking up the work. At other places, engineering managers or senior engineers could do this work. Regardless of how it's done, all engineers are incentivized to look at the big picture, to unblock themselves, and to solve any problem they see.

> Engineers taking initiative is something "SV-like" companies celebrate. It's common to see services and features built that engineers suggested or have teams spend dedicated time paying off tech debt that people on the team advocated for. And it is uncommon for managers to tell engineers what exactly to do, to break down their work into small chunks or to micromanage them. People self manage.

> The expectation from developers at traditional companies is to complete assigned work. At SV-like companies, it's to solve problems that the business has. This is a huge difference. It impacts the day-to-day life of any engineer.


> Sounds like you're working at a more traditional company then:

Yes and I apparently always have, despite working for tech startups in several cases. Never thought that they would be considered "traditional", but they evidently are.

Quite interesting.


I've worked at both types of companies, and it's funny: People who have only worked in one style are surprised to learn the other style exists. I actually made a career switch out of engineering because I wanted to have more input into the product, rather than just sit at my desk getting showered with JIRA tickets. Of course, I switched out of engineering, and moved into a company with the other style--where the features were engineering-led. Just my luck.


> Sounds like you're working at a more traditional company

Isn't the traditional company approach having engineers in charge of engineering?


No.

Of course, there's always the definitional question of "what is traditional?" because you can obviously 'go back' to different time periods, and perhaps in one of them what you're suggesting was true. But in this case, we're talking about cutting-edge/high-prestige tech companies vs 'traditional' non-tech companies in a modern context.


Depends on your organization. I am but I choose to work in such environments.

Pounding at small broken out tickets under micromanagement structures is a combination of boring and stress ridden, to me.


Same here, I resigned recently from a company (nominally a startup) that behaves like a traditional company described here. After working in SV-like companies for 20 years, it was very difficult to be a leader in that environment, with heavy micromanagement and information filtering.


Yes, at a lot of companies (anecdotally 3/3 for me including a FANG) the developers are the main driver of the product development. Product managers help guide the process and directors set the cardinal direction, but the product innovation comes from the developers.


> Are developers really counted on for innovation anyway?

Not like in the 'old days'. A Google developer came up with Gmail, remember?

It seems like ideas come out of meetings at bigger companies. If a developer has an idea, he needs to create a startup.


This has not been my experience. Everywhere I've worked (a bunch of different kinds of places now) has been hungry for people who can help out with the broader more ambiguous direction-setting and decision-making needs of the company, but most developers I've worked with are not very energized by this kind of work.


Places I have worked have said they wanted these people. I was explicitly chosen for one job because they thought I would provide value there.

Joined the ticket clearing line all the same.


Everyone has different experiences I suppose!


Does Roblox have such a reputation? iirc the toughest thing they asked for was a parser/evaluator for arithmetic expressions.
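For context, a question of that shape is usually answered with a short recursive-descent parser. This is a generic sketch of the kind of problem described, not Roblox's actual interview question; the grammar (no unary minus, no error handling) is my simplifying assumption:

```python
import re

def evaluate(src: str) -> float:
    """Evaluate an infix expression with + - * / and parentheses.

    Grammar (assumed for this sketch):
      expr   := term   (('+' | '-') term)*
      term   := factor (('*' | '/') factor)*
      factor := NUMBER | '(' expr ')'
    """
    tokens = re.findall(r"\d+\.?\d*|[()+\-*/]", src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        if peek() == "(":
            take()            # consume '('
            val = expr()
            take()            # consume ')'
            return val
        return float(take())  # a number

    def term():
        val = factor()
        while peek() in ("*", "/"):
            if take() == "*":
                val *= factor()
            else:
                val /= factor()
        return val

    def expr():
        val = term()
        while peek() in ("+", "-"):
            if take() == "+":
                val += term()
            else:
                val -= term()
        return val

    return expr()

print(evaluate("2*(3+4)-5"))  # 9.0
```

Whether that counts as "tough" depends mostly on whether the candidate has seen operator precedence handled this way before.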


Talents, like perfect spouses, are incredibly hard to find but they are there in plain sight.

The founder has to be dogged; in fact, he has to dedicate his life and time to finding these individuals. But what would drive a founder like that if it is not in his nature?


> most extreme leetcode interviews

> They can’t innovate because innovative type people weren’t let into the company.

My personal experience is that leetcode prep decreases as you go 1 std. deviation beyond the 'median FANG employee' level of talent. [footnote 1] This doesn't matter for the absolute best [footnote 3], who comfortably hop over any generic interview filter and tend to be heavily scouted even before they step into an interview.

The rub comes with the almost-best (think 1st-4th percentile) candidates that I know. Many almost-best candidates are busy working on their primary projects and end up insufficiently prepared for leetcode-style interviews. A bunch of these brilliant engineers are falling through the cracks. (into slightly less prestigious jobs, but still)

This gets amplified once they join a job. They rise up the ranks fast, owing to their talent. But they're doing too much to find time for interview prep. 'Senior' interviews can be a difficult nut to crack, because they still use extreme leetcode style interviews when the interviewee has not used any of it in the last few years on the job. I find that such people only switch jobs when they are forced to; either by bad management or life circumstances.

Such people excel at projects, take-home assignments, system design interviews, SME domain interviews and any skill that is relevant to everyday job work. But these are usually the final or penultimate round. By that point, the decision is already made. Additionally, as long as leetcode interviews are the norm, there is no real option to prepare for practical interviews instead of leetcode interviews. So without consensus movement, the 'fix' would exacerbate the problem of time-constrained near-best performers vs stagnating [footnote 2] median performers.

[footnote 1] : I am beginning to see the utility in 'not so impressive 95th percentile' as a useful yardstick in discussions. Bugger, you got me.

[footnote 2] : That's not to say that those who excel at leetcode are stagnating median performers. Not a symmetric analogy.

[footnote 3] : think smartest kid in a top CS program, multiple standard deviations beyond


I want to agree with you but, uh, Google seemed to do fine, and they invented this BS.


Moneyball (book 2003, movie 2011) describes how the Oakland Athletics built their baseball team in 2002 based on statistical analysis of player performance.

Then around 2008-2009, the Houston Rockets and Golden State Warriors (Oakland) started to apply similar analysis to basketball, and started to find the value of having quick-moving players who shoot precisely enough to score 3-point shots, even if these players are not as tall. Now it's called the 3-point revolution.

https://www.thestar.com/sports/basketball/raptors/2008/03/13...

https://en.wikipedia.org/wiki/Daryl_Morey#Houston_Rockets

https://ageofrevolutions.com/2019/02/25/data-science-and-the...

In hindsight, it feels like a long delay from the success of Moneyball until others applied similar thinking to other sports.


The analytics revolution wasn't about short players specifically; the 3-point-revolution aspect was mainly about selecting shots based on the expected value of the shot. The Warriors in this respect were very far behind the curve compared to Houston. The main takeaway for them was taking shots from farther behind the three-point line, but they didn't have an emphasis on corner 3s (which are closer to the basket) or a de-emphasis on mid-range shots (between the 3-point line and outside of 8-to-10-ish feet from the basket). They (along with the rest of the league) were trending towards Moreyball during their championship years (particularly in 2015-16 if I remember correctly), but there were years where they mainly had extraordinary shooters.
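The expected-value argument behind shot selection is simple arithmetic. The percentages below are rough, illustrative numbers I'm assuming for the sketch, not actual data from any season:

```python
# Expected points per shot attempt = point value * make probability.
# Percentages are illustrative assumptions, not real league data.
shot_profiles = {
    "at the rim":    (2, 0.63),
    "mid-range 2":   (2, 0.41),
    "corner 3":      (3, 0.39),  # shorter than an above-the-break 3
    "above-break 3": (3, 0.35),
}

expected_points = {name: pts * pct for name, (pts, pct) in shot_profiles.items()}
for name, ev in sorted(expected_points.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ev:.2f} points per attempt")
```

Even with these made-up numbers, a modest corner-3 percentage beats a decent mid-range percentage, which is the core of the shot-selection argument.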


Hockey has proven somewhat stubborn. Coaches will pull the goalie earlier now, and there is more of an emphasis on skilled players rather than nasty players, but I'm not convinced the latter wouldn't have happened in the absence of analytics. There was a wave of excitement when Kyle Dubas and a few other young GMs started hyping up analytics and new stats like expected goals and possession metrics and stuff, but as far as I can tell, it hasn't had a huge impact on the game.


I think hockey fans should count their blessings!

As a lifelong baseball fan and someone who works with data for a living the analytics revolution is killing the sport. People discovered that sacrifice bunts weren't worthwhile, but they were really fun to watch in a way that walks and strikeouts aren't. There wasn't enough respect for baseball as a cultural game.

I'm surprised that the league hasn't pushed harder for optimizing fan enjoyment in the same way that they optimized for winning.

I know less about basketball but I hear the same thing there, that 3 point shots have replaced a lot of interesting dynamics at mid range.

Interestingly, in football the analytics playstyle is more dynamic and exciting. A lot riskier and more explosive offensive plays compared to the traditional style. I wonder whether the league guided that (in terms of what data was made available, or by learning from the mistakes of baseball) or if they just got lucky.


I find high skill hockey more entertaining than bruiser hockey, so I'm happy for now! Hockey has had terrible eras in the past, like the advent of the neutral zone trap. There was an infamous game in which a defenseman just held on to the puck and skated around his own end for minutes at a time rather than try to get through the trap.


Happening in American football as well, with more aggressive play-calling on 4th down and in the red zone due to analytics. There are also teams using tools to calculate "game speed" vs traditional testing to help make scouting more accurate.


What's crazy to me is that the more modern 4th-down play-calling statistics have been known for at least two decades now; I first heard about them from my stats teacher in high school. The issue is that making the right call and it not working out would reflect more poorly on a coach than making the more traditional call and playing worse. It reminds me of the saying "No one ever got fired for buying IBM".


Young Sheldon in 2017: "Statistically always punting on 4th down makes no sense."

https://www.youtube.com/watch?v=zZffa_Z_Cxs#t=17s


FiveThirtyEight has written a bit about this in the past. That's pretty much what they said. No one remembers the risky play that paid off or the safe/expected play; they remember the risky play that didn't pay off.


Good discussion around the topic with Daryl Morey, who, as you linked, started that at the Houston Rockets. I know absolutely nothing about basketball and found it very interesting nevertheless.

https://www.preposterousuniverse.com/podcast/2021/03/15/138-...


Has this revolution happened in European football yet? It seems like a harder sport to quantify (less episodic, more flowing, so positions are harder to quantify).


There's been much more data-driven decision making / data science applied to the sport, but no huge watershed moment like in baseball or basketball.

I think that has to do with the fact that compared to the mentioned sports, soccer doesn't really have any single pivotal role that can make or break a match, which you can then optimize for/around.

Fundamentally speaking, soccer is a pretty slow moving game as far as scoring goes. One goal can be enough, and teams either play defensive or offensive games - while the other team tries to find weaknesses around their plays.

From what I've seen, a lot of the data science around soccer goes into things like player condition, predicting which players to swap out between games, analyzing weak/strong patterns in the strategy/playing style, and similar. I've seen this done via cameras and machine learning, analyzing ball position, player position, player performance on and off the field, and similar stuff. But this is 5-7 years ago, so probably outdated information... the info came from a guy who worked for an ML research group, which in turn had people who consulted for certain European clubs.


Even at the semi-professional soccer level, coaches are making use of analytics. For example, they get information about where other teams - or particular players - are losing the ball. The game has also been pushed toward extreme (and boring) control of ball possession largely by analytics. When I played at a decent level, it was considered "criminal" to pass the ball in close proximity (up to 20 yards) to the net while being "pressured" by the other team. Nowadays, it is very rare to see a team that gives up possession easily.

AC Milan had been using "Milan Lab" since the early 2000s, and it was whispered that it used neural networks to predict the probability of injury. It was a largely unsuccessful experiment at the time: players were very often injured. (Also, Milan's head of the medical/performance team, Meersseman https://www.jpmchiropratica.it/en/studio/jeann-pierre-meerss..., was pulling players' teeth because he believed that some teeth could cause postural problems and thus increase the likelihood of injury; see Seedorf.)

The analytics/data science revolution has already arrived in soccer, but the reasonably large effects have been, due to the nature of the game, less visible than those in American football and baseball.


I don't think it is quite the same, and I don't have any hard data to point to, but it seems players are being judged by the number of game-speed minutes on their legs more than they were in the past. I.e., a 26-year-old who has been playing since he was 19 might be a star, but will not be worth as much as he might have been 10 years ago, when it was assumed he was just entering his prime.


I would say that the Pep Guardiola led Barcelona already had this watershed moment, with their possession and passing focused play, built around tiki-taka and dependence on phenomenal playmakers like Xavi, Iniesta and Messi. No other team has managed to replicate this success so far though, not even Barcelona itself after Guardiola left and Xavi/Iniesta retired.


Basketball now uses SportVU tracking data. A camera system records the game, and computer vision algorithms extract the locations of the players, and the ball, at high space and time resolution.

https://en.wikipedia.org/wiki/SportVU

https://arxiv.org/abs/1703.07030

SportVU was originally developed with European football in mind, but I don't know whether it has found use there or not.



This section in that piece about undervaluing training ("trainingball") is priceless.

I worked somewhere where someone constantly brought up Moneyball as a way of getting an edge in hiring. They later became head of the unit, and it became clear they had absolutely zero ability to manage other people, because they thought of them as consumable goods. They would level scathing reviews at people working under them for being "poorly mentored", as if this were an attribute of the employee in question, not at all recognizing the irony in what they were saying.

Moneyball was an interesting observation, but it's also telling because it is implicitly based on a number of assumptions that plague modern workplaces, such as treating people as fixed, unchangeable objects rather than as persons that can learn and adapt.

The comment by that author about "trainingball" is noteworthy because it points to the importance of training but also potential as a selection factor. And not just "potential" in the sense of "new and undeveloped" but in the broader sense of "this person has the attributes and background to be what we need if we just work with them." I think that attitude was much more common in the past maybe, but I wasn't alive then.

There's also a different and much understudied problem, which is institutions failing to acknowledge and/or recognize when there have been significant changes in the institution, and when the employees aren't communicated with about these things. Sometimes training isn't so much training as it is just communicating with people.


Happening in golf now too.


How is it applied to golf?



Not just this, but the prevalence of strokes gained being the key metric to focus on for players has opened up a new understanding of what it takes to score well and win tournaments.


It’s striking that so many stories about the triumph of metrics over intuition involve games. I’m convinced that you should put statisticians in charge if your goal is to win a game. I’m not convinced that related behaviors, like constructing a game adjacent to the problem you’re actually trying to solve and treating game performance as a proxy (like LeetCode interviews! Or KPIs!) are helpful.

It’s also interesting that he brings up the blind audition here in the appendix. Blinding can of course remove biases about the person from the process, but it is still subjective. To make the “can’t manage what you don’t measure” point we would need to show that orchestra performance got better after we developed an algorithmic way of deciding how good an audition performance is. After all millions of dollars are on the line, are you really going to listen to some credentialed old man about which one he liked better? So backwards!


>I’m not convinced that related behaviors, like constructing a game adjacent to the problem you’re actually trying to solve and treating game performance as a proxy (like LeetCode interviews! Or KPIs!) are helpful.

The devil is as always in the details. At least as far as KPIs go, they can be very useful if they are chosen appropriately and if the senior team members have all bought in to the concept. Example:

I used to work in military search and rescue, and the pilots get tasked by a complex bureaucracy indirectly descended from NORAD, and they have a requirement in most areas to be airborne within 30 minutes when called. Thus, our maintenance flight had as its first and foremost KPI chosen "percentage of the time where maintenance can't get an aircraft ready to be flying missions in less than 30 minutes".

That's a fantastic KPI because the maintainers that worked for us were all on board. Getting that KPI as close to 0 as possible (and we did hit zero every few months) was a goal that everyone had bought into, because of course if you can't put an aircraft in the sky then it potentially means people are dying unnecessarily.

Now, any AME or military aircraft tech can tell you the paperwork requirements for aircraft maintenance are very strict and at times quite onerous, but they're required by federal law. So the same maintenance organization also had, broadly, "paperwork error rates" as a tertiary KPI, because of course the maintenance paperwork gets audited by an AS9100 or similar quality assurance shop, usually in-house.

That one is a shitty KPI because the wrench turners who work on complex jobs are more likely to make errors in the paperwork, so there's a perverse incentive wherein your best maintainers make about the same number of mistakes as your journeymen right out of school, simply because it's easier to fill out the paperwork properly for topping up an oil reservoir compared to changing a propeller or a weight-and-balance.

All that to say KPIs themselves are not the problem. Shitty leaders who choose KPIs improperly are the problem.


I agree with everything you say except the bit where the failure rate is a fantastic KPI. It has room for improvement.

> they have a requirement in most areas to be airborne within 30 minutes when called. [...] "percentage of the time where maintenance can't get an aircraft ready to be flying missions in less than 30 minutes".

The problem with this KPI is that it's completely arbitrary. Why does it have to be 30 minutes, and not 25, or 35? Changing this threshold might change your failure percentage significantly. This KPI is not actually a ruler that measures your performance – it's using your performance as a ruler to measure the threshold, because the easiest way to change the KPI number is by changing the threshold, not the performance.

After all, wouldn't 25 be better than 30? The only reason it's 30 is that someone at some point said it has to be 30.

An arbitrary cutoff limit like 30 minutes (what Deming would call a numerical goal without a method) has one of three effects:

- Either it's really hard to accomplish, in which case the large failure rate is demotivating, or

- It's just about what you can do, in which case it's meaningless anyway, or

- It's easier than what you could technically do, which might inspire dilatoriness: "You can't complain; we're meeting our goals!"

Instead of comparing yourself to arbitrary thresholds, use the raw metric as a KPI instead. Use "minutes until airborne" as the KPI. That you can optimise indefinitely, and it is an honest representation of your current process, not how it compares to a number someone threw out at some time.

I know, I know, the 30 minute number is not entirely arbitrary, it's probably related to how easy it is to keep a person alive. But it's arbitrary in the sense of being a cutoff – for keeping people alive, shorter is always better. There's nothing magical happening at exactly the 30 minute mark.


> Instead of comparing yourself to arbitrary thresholds, use the raw metric as a KPI instead. Use "minutes until airborne" as the KPI. That you can optimise indefinitely, and it is an honest representation of your current process, not how it compares to a number someone threw out at some time.

This is an example of a more general phenomenon - when you have outcome buckets, it's generally better to do your modeling on raw outcomes ["it took us 32 minutes"] than to try to do it on the bucketed outcomes ["we didn't make it in 30 minutes"]. Bucketing throws most of your information away.

Andrew Gelman talks about this all the time in the context of election modeling, where it's better to predict how many votes something will get, which is a continuous variable (and then, based on the predicted vote, predict whether the thing will win or not), rather than trying to predict whether the thing will win, which is a discrete variable.
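A toy illustration of how much the bucketing throws away (numbers invented for the example, not real sortie data):

```python
import statistics

# Two hypothetical months of "minutes until airborne".
# Both bucket to the same 2-in-8 failure rate against the 30-minute
# cutoff, yet the raw data tells very different stories.
month_a = [22, 24, 25, 26, 27, 28, 31, 32]
month_b = [29, 29, 29, 29, 29, 29, 45, 60]

cutoff = 30

def fail_rate(times):
    """Fraction of call-outs that missed the cutoff."""
    return sum(t > cutoff for t in times) / len(times)

print(fail_rate(month_a), fail_rate(month_b))              # 0.25 and 0.25
print(statistics.mean(month_a), statistics.mean(month_b))  # 26.875 vs 34.875
```

The bucketed KPI rates both months identically; the raw minutes show one process comfortably fast and the other barely scraping by with two serious outliers.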


> Use "minutes until airborne" as the KPI.

Average minutes to airborne? Median? Mode? 90th percentile? 99th percentile?

Optimizing for each of those would result in a different process, different outcomes, and quite possibly worse outcomes for the stakeholders than "proportion under 30 minutes" which is just a sixth way to slice the same data.


> Average minutes to airborne? Median? Mode? 90th percentile? 99th percentile?

Facetious answer: yes.

More useful answer: you want to record the full timeseries of individual data points. From this you can derive mean, median, mode, 90th percentile, 99th percentile and any other statistic that is shaped like a hole in your cost–benefit calculations. Including trends and changing variance.

For reporting purposes, the upper process behaviour limit is probably a good one as "the" numeric value of the KPI, but the true value is in the timeseries.
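As a sketch, deriving all of those summaries from the raw timeseries takes a few lines of Python's standard library (times below are made up):

```python
import statistics

# Hypothetical per-callout log of "minutes until airborne".
times = [21, 24, 33, 26, 29, 22, 41, 25, 28, 27, 23, 30]

mean = statistics.mean(times)      # about 27.42
median = statistics.median(times)  # 26.5

# quantiles(n=100) returns the 99 cut points p1..p99 (exclusive method),
# so index 89 is the 90th percentile and index 98 the 99th.
pcts = statistics.quantiles(times, n=100)
p90, p99 = pcts[89], pcts[98]
```

Keep the raw log and every reporting statistic, including any threshold pass-rate someone later asks for, can be recomputed; throw it away in favour of one bucketed number and nothing can be recovered.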


Did you mean to respond to my parent comment?


30 minutes is what the 4-Star General who runs NORAD had decided s/he wants to see as a response time.

It's also just about the fastest you can get a Hercules airborne and still include time for flight planning. It takes almost exactly 22 minutes to start a Hercules by the book, and that leaves 8 minutes for taxi and takeoff.


30 minutes is not arbitrary in the context of search and rescue. When someone has fallen overboard on a Naval vessel in most parts of the world's oceans their survival time is as short as 30 minutes: https://ussartf.org/cold_water_survival.htm

In rare circumstances the water can be much colder than average and this drops below 30 minutes, but for the most part the military operates in waters with survival time no lower than 30 minutes.


That makes no sense. If survival time is 30 minutes, and it takes 28 minutes just to get into the air, nobody will ever be successfully rescued.


On the one hand, choosing an arbitrary but good enough target avoids wasting effort on overoptimization and ensures a focus on the worst case, not on the best case; on the other hand improvements are not elastic: maybe rearranging parts and supplies to make them more accessible is enough to make 28 minutes as likely as 30 before, but getting to 26 minutes could require expensive tools and 24 minutes would require different, easier to service aircraft.


Yup. Optimisation is always a trade-off problem that needs to be approached intelligently, not mindlessly. The raw metric of minutes to airborne leads naturally to the intelligent points you raise about what each minute is worth.

The arbitrary target implies a cost function that is zero below 30 minutes, and infinite above it – this, if anything, suppresses intelligent discussion and leads to wasted effort.
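A minimal sketch of the two implied loss functions (purely illustrative, with a linear loss standing in for "every minute matters"):

```python
def step_loss(minutes, cutoff=30):
    """Loss implied by a hard cutoff: pass/fail, nothing in between."""
    return 0 if minutes <= cutoff else 1

def linear_loss(minutes):
    """A continuous loss: every extra minute of delay costs the same."""
    return minutes

# Under the step loss, 5 minutes and 29 minutes look identical,
# as do 31 minutes and 60 minutes.
assert step_loss(5) == step_loss(29) == 0
assert step_loss(31) == step_loss(60) == 1

# The linear loss still distinguishes them.
assert linear_loss(29) < linear_loss(31) < linear_loss(60)
```

The step function gives the team no reason to shave a 29-minute response down to 25, and no credit for turning a 60-minute disaster into a 32-minute near-miss.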


> Instead of comparing yourself to arbitrary thresholds, use the raw metric as a KPI instead. Use "minutes until airborne" as the KPI. That you can optimise indefinitely, and it is an honest representation of your current process, not how it compares to a number someone threw out at some time.

You are completely right, but this is much harder to explain to a regular person who doesn't have a math/stats background.

"Get the aircraft in the air in 30 minutes" is much easier to understand, even if the underlying assumption is that the number can be anything – especially when, as you pointed out, there are some reasonable biology-based reasons for 30 minutes.

"Get the aircraft in the air as fast as possible, but then we'll try to optimize that number" is inviting confusion ("Wait, optimize? But we're already working as fast as we can!").

Of course, the tradeoff in the hard threshold is that people might stop improving after hitting the 30 minute threshold. In my opinion this is at least partially based on cultural factors – you should be working in search and rescue at least partly because you want to help people, and should be willing to optimize further to accomplish that goal.


>All that to say KPIs themselves are not the problem. Shitty leaders who choose KPIs improperly are the problem.

Deciding to govern by KPIs presupposes that we have KPIs that are chosen well enough to beat subject-matter-expert judgement. I think if we are talking about anything other than a game with very clear rules and observable outcomes, that's an extraordinary claim.


>Getting that KPI as close to 0 as possible (and we did hit zero every few months) was a goal that everyone had bought into

So what corners used to get cut to meet that target? What extra costs were incurred?


You don't need an algorithmic way to decide how good an orchestra is, because subjective reception from critics and audiences is exactly the point.

And there's a fair amount of innovation in that. Audiences want novel-but-not-too-novel interpretations.

So not only is there no way to define a definitive objective metric for musical interpretation, it's even harder to conjure up an objective metric space for "possible but unusually good" interpretations.

As go orchestras, so go individual players. You absolutely are going to listen to the opinion of an experienced conductor, because being able to tell good from bad and having some insight into compatibility with existing players in the rest of the team is exactly what the job involves.

To make it worse, it's culturally dependent, and it shifts. Good today is not the same as good fifty years ago.

This actually makes it easier than software, because it very much is just about informed opinion, with external subjective feedback from audiences and critics.


>subjective reception from critics and audiences is exactly the point

Clearly software is aimed at subjective reception by its customers. We also have the adage "programs are meant to be read by humans and only incidentally for computers to execute" -- to the extent you believe this, subjective reception of the code by maintainers also matters. For these reasons I think subjective evaluations and "taste" for software / software engineers is undersold.


> It’s striking that so many stories about the triumph of metrics over intuition involve games. I’m convinced that you should put statisticians in charge if your goal is to win a game.

I enjoy playing around with baseball data. About a year before the pandemic, I went out and presented some research at a Sabermetrics conference hosted by the Boston Red Sox. 90% of the pro teams had their data science teams present, and I spoke with many of them (FYI - working in baseball is like working in video games: long hours and low pay).

Anyway... The Red Sox (at that time, at least) divide their analytics department into 3 subject matters, and I spoke to the lead of one of those 3. I asked how different their analysis is compared to all the old baseball knowledge developed over the last 150 years.

He said that it's not different! All the intuition about the game built up by the old-timers spending decades in the industry still holds! The analytics is all about tweaking things and making slight improvement in the odds. If they can improve the odds of a beneficial outcome by 2-3%, they consider it a success, and if they can do that consistently over the course of a season they will win 5-10 extra games. That's it! (SPOILER ALERT) Remember that in the movie Moneyball, the Oakland A's didn't win the World Series. They didn't even make it to the World Series. They were beaten in the first round of the playoffs by another small market team (Minnesota) with a similarly small payroll that wasn't doing the moneyball thing ;)


It’s interesting that your anecdote lined up so well with the example from the OP about people claiming that the metrics didn’t change much. Though those people were then giving ‘supporting’ examples that support the opposite of their statement. It may be that the person you talked to worked on eg play strategy which could have been affected totally differently from scouting.


I was intentionally vague about who I spoke to, to keep some amount of anonymity for him ;) The question was about baseball in general and not a specific aspect of it.

As far as play strategy goes, it's actually really interesting that baseball does not allow for real-time analytics in the game. You can print out some cards and give them to players to keep in their pocket (baseball uniforms have pockets!) and they can refer to them during the game. You can give a binder full of information to the coach and he can keep it in the dugout. But once the game begins, you cannot communicate strategy to the players and coaches. That's quite different than most sports.

What is interesting about scouting/player development vs play strategy is that the "revolution" in scouting started 100 years ago! Branch Rickey, most famous for signing and playing Jackie Robinson with the Brooklyn Dodgers, began creating the modern farm system for St. Louis in 1919 - with a lot of derision from other teams. But funny thing - from that point on (until he left for Brooklyn) nobody but the Yankees won more games than the Cardinals. The Chicago Cubs and Philadelphia Phillies at various times hired people to collect data about young players to use for skill development and practice recommendations, as well as personnel decisions. The shape of a guy's butt and the attractiveness of his girlfriend are actually data-driven heuristics!!!


> I’m convinced that you should put statisticians in charge if your goal is to win a game.

Not just you - this is the concept of a casino, bookmakers, etc. You can make consistent profit by having good stat/prob models of games that people play for money but are irrational about.

More generally having accurate models gives you an advantage in almost any important situation.
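For instance, the house's edge in roulette falls straight out of a two-line expected-value calculation (standard American-roulette numbers, not from the thread):

```python
# Single-number bet in American roulette: 38 pockets, 35:1 payout.
# Expected value per unit staked, from the bettor's perspective.
p_win = 1 / 38
ev = p_win * 35 - (1 - p_win) * 1

print(f"{ev:.4f}")  # -0.0526, i.e. a ~5.26% edge for the house
```

The player's intuition about a "fair-looking" payout is exactly the irrationality the house's model exploits, consistently, on every spin.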


Statisticians are not going to win a game. They may win more than the average but then, when everyone is a good statistician, the mean changes and money leads again.


I strongly agree with this. Every programmer who became "upper management" also happened to be 6'1"+. Shocking.

Funny enough, the shorter programmers were generally much more competent from having to actually earn their place. If you have to pick a doctor or lawyer on looks alone, I recommend picking the shorter guy (or the woman!)


As someone taller than 6'1", I will say this advantage is, well, shrinking with the advent of WFH culture.

Multiple people have told me I surprise them with how tall I am when we finally meet in person, mostly because I keep my camera at the top of my computer, which gives a slightly down-leaning angle, as if they were looking down at me.

This could be used to a real advantage by putting the camera under the monitor. It's also not quite a flattering look so I avoid it, but I wonder if this is to my detriment...



Are you sure they're getting the idea that they can look down at you from the fact that your camera looks down on you from above your screen, as opposed to the fact that they themselves literally look down on you as you appear on their screen?


Good point, if they use laptops. Most people have their desktop screens at eye height, right?


I suspect (without any data) that laptop use dominates external monitors among the post-2020 WFH crowd.


Someone will write an article on how it's a power move to scale your Zoom background to make you look taller.


Let’s imagine hypothetically there is some circuit genetically encoded in every human’s brain that gives +10% probability of agreeing with someone if they are a tall male.

Let’s also imagine hypothetically that in many situations, the speed and strength of consensus itself has value beyond the quality of the decision reached, so eg. quickly reaching a strong consensus on the 2nd-best solution would be better than fighting a drawn-out battle to reach a strained consensus on the best solution.

In this (very hypothetical) world, it’s optimal to favour, by some amount, tall males over higher-performing non-tall-males.


I was pondering this recently and I wondered if one's relationship with their father as a small child impacts this effect as well. If as a kid one looked up to their father, literally, and had positive interactions, well that's some pretty deeply ingrained feedback loops there. Yes this is pop psych nonsense, but one does wonder.


The cynical side of me says one's hindbrain may be subconsciously whispering "This person is bigger than I am and could beat me to death, it's better not to antagonize them, or 'compete' with them in any way. They look like they'll always be on the winning side, so that's where I'll be".


Hypothetically, genetic engineering seems to outperform in such a world.


The conclusion would also hold if the "circuit" were cultural.


> If you have to pick a doctor or lawyer on looks alone, I recommend picking the shorter guy

Well, there is a consideration for management positions that says an appearance that evokes deference is a genuine qualification for the job.

And while it would be difficult to make that argument for doctors, it's not at all difficult for lawyers.


> A complaint I've heard from some folks who are junior is that they can't get promoted because their work doesn't fulfill promo criteria. When they ask to be allowed to do work that could get them promoted, they're told they're too junior to do that kind of work. They're generally stuck at their level until they find a manager who believes in their potential enough to give them work that could possibly result in a promo if they did a good job.

One bit of advice I've given many people I've mentored in the past is to not wait to get assigned the "promo project." It's hard for people to object with "you're too junior to do that kind of work" when you've shown them a plan that's on par with what a senior engineer would provide. Sometimes, they might do that anyway, but then the problem lies not with getting the right promo project, it lies with not having a manager who will be a partner in developing your career.


A good senior engineer produces a simple solution to a simple problem. People manufacturing complexity for the sake of promo packets is a huge problem, and managers are often complicit in it because of those career-development goals.


If you’re in an environment where doing complicated things rather than valuable things is incentivised, I think it’s mostly reasonable to do the incentivised thing or to get into a different environment


I wonder if some small part of the problem lies in the word "talent" . The implication is that developers are born with some sort of innate ability to write software well rather than coding being something you can learn with practise.

Calling people "talent" suggests recruitment is a hunt for some person with a rare underlying ability rather than just looking for someone with a skill they've learned. This could also go some way to explain why many small software companies refuse to recruit juniors and have top heavy structures. If you can't recognise the magical rare ability in people then you have to trust other companies experiences with "talent". Taking on people and nurturing them to be seniors wouldn't even be an option.


I can speak from the chess world, as a coach. I used to believe there was really no such thing as talent largely based on my own experience as I started relatively late, don't consider myself talented, and reached a reasonably high level at the game. I kind of still do think there's no such thing as talent, but that comes with a bunch of asterisks.

If you take two people and expose them to normal training regimens, different people will have different results - and frequently extremely different results. One could argue that the person who performed worse might have performed better if given exposure to a different regimen but in practice I find those that tend to overperform in one regimen tend to overperform in other regimens as well.

The catch is that the outcome is largely predicted by how the student responds to the training itself. The person who finds it interesting and enjoyable is going to dramatically outperform compared to the person who may be highly motivated by an outcome (wanting to become a good player) but doesn't especially enjoy the process - tactical training, game analysis, etc.

And so talent may not just be some innate ability at something but rather an innate preference for enjoying certain things, which is itself going to be the product of a million other things. An amusing anecdote in the chess world came from retired World Champion Vladimir Kramnik. He recently casually remarked that now that he's retired he's spending less than 50% of his time with chess, as if he was just kicking back and neglecting the game by only dedicating 40% of his life to it.

Obviously there are some genetic factors like the ability to visualize/concentrate/etc that people usually think they are talking about when referring to talent. But the role those play only starts to matter when people reach some level at least vaguely approaching their potential, and they'll never get anywhere near that level without the thousands of hours of work and training that they'll likely never complete without a love of that work itself.

This is probably just a really long-winded way of repeating Thomas Edison, "Genius is 1% inspiration, 99% perspiration." The only catch is whether you have the capacity to put out that 99% perspiration.


I think there's a massive difference between coaching people to play competitive chess and teaching people to be good developers. For a start, in chess you only really start making good money from it when you're exceptionally good. Most chess players only ever play for their own satisfaction.

The same is absolutely not true for developers. You might well need talent to be a chess player who makes a living from chess, but there is no evidence that developers can't make a good living from their skill without having any innate talent in an area of software development. I've met and worked with plenty of good developers who essentially write code by brute forcing solutions to problems combined with lots of Googling. We all do it.


That depends on your definition of good money.

I've heard Ben Finegold say that 30 years ago only about top-30 could make anything resembling a decent living in a developed nation, while nowadays, even adjusted for inflation, the prize pools are much bigger and one of the biggest chess channels on YouTube is a player who's not even a GM.


What's fascinating to me is that the article makes no mention of female performance in chess whatsoever, which has historically been underwhelming compared to men, seeing how the strongest female player ever never made it into even top-4 of the Candidates tournament, let alone played for the Championship.

Since height and athletic ability plays almost no role in chess, can you perhaps explain why women have underperformed?

Also, I have no idea why WGM (Woman Grandmaster) title is even a thing, since I cannot possibly think of a more discriminatory "consolation prize" type of title in a field where men don't have an inherent physical advantage. Curious to hear what is your take on this as well.


I wouldn't say they have "underperformed", the best women chess players would absolutely wipe the floor with me and other casual players. However, historically women don't reach quite the same top level as men in some mathematical tasks, even though their median performance is on par or even better sometimes. You can visualise it as two normal distributions with the same mode, but one is slightly wider.

For now the WGM title exists, because if it didn't, there would be no women Grandmasters. While you see it as a consolation prize, I see it as an encouraging step to build a tradition of unisex chess participation so that one day we can have truly equal tournaments.


>there would be no women Grandmasters.

Why bother commenting at all without bothering to put in effort for even a cursory search?

There are 39. Including 2 last year. Full GMs, no prefixes.


There's a bigger difference between men and women than between short and tall men?


The article omits any discussion of this point, that is why I asked the person familiar with the field whether they have insights to share.


I liked this essay regarding the nature of talent. http://rittersp.com/wp-content/uploads/2014/03/Chambliss-Mun... Like much social science, one should be careful to not assume that any results would generalise to more cases, but I still think some of the ideas are interesting:

- When people talk about talent, it is some relatively immutable quality people have inside themselves that creates excellence. When people talk about observing (eg spotting or wasting) talent, they are really talking about observing excellence or high competency. Talent of the first kind is mostly not a thing (but eg some people may be predisposed to certain kinds of athleticism).

- Being excellent at one thing needn’t come with excellence or deviance at anything else. I think many people on HN will quite easily understand that typical celebrity recommendations (even if unsponsored) have no particular reason to be credible, but another aspect of this is the idea of the weird or eccentric genius which we sometimes see applied to programming (and I think less applied to eg being a lawyer or doctor).


"Talent of the first kind is mostly not a thing (but eg some people may be predisposed to certain kinds of athleticism)."

- Statements like this work reasonably well on HN and stupendously on the very receptive New York Times audience, but fail miserably when the first real-life punch lands (figuratively) on the speaker's jaw. "Talent" or "genetic quality" or "genetic predisposition" when considering a skill or activity is a fact of life and has been demonstrated in countless studies of twins. I studied fish and birds for a long time and even there the main confounder when studying the effects of environment on life histories or physiological functions was "individual quality", the fact that just about everywhere certain individuals do much better than others for reasons other than "lucky breaks".

I've also seen people run very fast (much faster than me), draw very well (I draw houses and people the same way), tune their guitar by saying la-la (whereas I can't tell the difference between a piano and a church bell).


Yeah, I meant to mention that that opinion is likely something that was easy for the author to believe. I think it is also personally useful to believe it, and therefore that one is somewhat responsible for one's successes and failures.


After a couple of decades I have decided for myself that talent is real and not something that can be replaced with practice. I've had juniors surpass me in 2 years and I've worked with seniors who learned very little despite having worked more decades than me. In my definition of talent I take into consideration not only intelligence but also drive, enthusiasm, work ethic, etc.

Talent becomes more and more important as you require actual design and architectural thinking. For the average code monkey not so much.

As for why a lot of companies refuse to hire juniors: they think practice is more valuable than it actually is and they don't want to spend resources on juniors getting that practice on the time they pay for. And the people involved in the early stages of recruitment are also not technical or passionate about tech so they can't recognize when somebody is or isn't.


I think it is pretty clear that we aren't complete tabulae rasae either. The exact "nature vs. nurture" split isn't known, but it very certainly isn't 0:100.

But the field of software talent is pretty wide. Someone can have a knack for identifying the core of the issue fast (intuition), someone can come up with unorthodox solutions (creativity), someone may be able to see possible security implications sooner than others, yet another is fast learner in general but not very creative etc. These may not be the same people.


I wouldn't read too much into the word choice. Not so long ago, I learned that my wife uses the word differently than I do. For her, a talent is simply a learned skill.


Pure stats work in circumstances that are repeatable, easily quantifiable and where you have a huge dataset to learn from past experience. That's why baseball is such a perfect example. It's just not so easy to measure how good someone is at solving novel problems, compared to how well they can hit or throw a ball in perfectly repeatable circumstances.


I think that the lesson from baseball is that people spent a long time basing decisions on measurable but irrelevant things like the shape of a player’s bottom, and they thought they were being clever and reasonable and making good decisions. Lots of software development isn’t solving particularly novel problems but it can be quite varied, and certainly performance can be hard to measure and output can be dependent on things outside of one’s control (eg if an experimental project doesn’t pan out). But I don’t think that should mean that one should give up and judge people based on things that seem reasonable but don’t work (eg apparently algorithms style questions) or actually irrelevant things like (probably) height or (presumably) the shape of someone’s bottom.


Also, work is not a contest, or a competition.


The sad point is that in the current state, you get two leetcode-style questions per interview round, and that's it. Good luck with that.

There is zero interest from companies in improving the quality of the interview process. Zero. They'd rather widen the funnel than improve their rate of detecting good talent.

There is enough new fodder to go through every year that improving the quality of detecting good engineers is not that important.


> you get two leetcode-style questions per interview round, and that's it. Good luck with that.

that is really good luck. Where else can you get a relaxed "i feel like taking the afternoon off" (my personal take back then when i was windsurfing - the wind has picked up very good, and thus i'm gone) $500K+/year just for memorizing a bit of stuff? We have it best in the history of humanity. Yet still the whining. Unless one has a disability preventing mastering leetcode, the failure to do so clearly indicates something like attitude issues and/or laziness and/or lack of focus/discipline. Hard to argue about the usefulness of such a filter.

>There is enough new fodder to go through every year,

some say that AMZN in some markets has practically exhausted the candidates pool, so lets see whether it would cause any changes


> Unless one has a disability preventing mastering leetcode, the failure to do so clearly indicates something like attitude issues and/or laziness and/or lack of focus/discipline.

Or too busy with their current work, to memorize the strutting brogrammer answer key.


> $500K+/year just for memorizing a bit of stuff?

This seems to be understating the effort required to actually get hired.

I doubt anyone can just do Leetcode during their free time in the weekend and expect to pass most of the tests.

It also doesn't take into account non-technical obstacles like being good at communication, being the "right" culture fit, etc.


I'm on a pretty small team, we don't have enough people or time to do a robust interview process, but we started contracting devs for a week before hiring them, and it has been a much better experience than any interview I could have done in 1-2 hours. Plus, they get paid and we get a little bit of work done even if things don't work out.


Wouldn't this self-select for the unemployed? Unless they took a week off and then gave notice after the "contract" period.


Perhaps this is a good thing?

To be cynical perhaps an unemployed person has less negotiating power, so you can get higher quality diamonds in the rough cheaper.

The classic fear is the person is unemployed for a flaw in that person, but perhaps this trial alleviates that somewhat?


I think it would also filter people who are just looking around or brushing up their interview skills.

Whether that is good or not, I leave it up to you.


This eliminates everyone currently employed but looking for another job from your hiring process. Seems like you're severely limiting yourselves with this tactic.


Being employed also makes candidates more attractive. If you don’t have many resources, why waste them trying to compete for the same subset of the pool of good candidates as everyone else when you can get better efficiency going for the candidates which the competition undervalues?


That seems like a very self-defeating attitude; sure, maybe your pay isn't at FAANG levels, but maybe something else about your company (less demanding hours, better benefits, generous vacation, tuition assistance, better business practices, better development practices, etc.) will attract a candidate. Maybe they're someone transitioning out of a different field but they have the skills you need (I'm self-taught and worked as a mechanical engineer before I was hired as a developer, for example).

You don't have to compete on pay to attract good talent, but refusing to compete on the least amount of time wasted will only attract people with a certain set of life circumstances that have no bearing on their skillset.


I think I did a poor job of explaining myself above, so let me try again.

If one accepts the premise of the OP that typical interviewing processes are bad at identifying good recruits, and if one believes the trends implied by eg hiring behaviours of big companies and some hn comments, then that implies:

- there is some subset of candidates who are attractive to big companies and those companies mostly compete with each other for those candidates.

- there are candidates who would make good hires who are (incorrectly) identified as uninteresting by these large companies

And so:

- if you are better at identifying people who would make good hires than the big companies, you will be able to hire people big companies would overlook.

- on average your offers are more likely to be accepted by people with fewer offers by big companies (even if you can pay as well or better this is still true because if a candidate has more options they may choose one that isn’t yours for reasons you can’t control)

- so your recruiting will be more cost-effective (in terms of spending fewer resources finding candidates and, optionally, paying new hires less) if your process mostly finds people who are (a) likely good hires and (b) more likely to accept your offer (because they don’t have so many options).

If one believes that the top-level poster in this thread finds good candidates by having week-long trial periods, then one should also expect this process to be more cost-efficient, as it naturally selects for candidates who are less likely to be competed for by other companies (because those companies mistakenly think that unemployed people are unemployable).

You don’t need to pay those people less; the cost being optimised for here could be the internal resources spent on the hiring process which can be high even if you have large teams.

I think one thing I struggled with for a while was that I implicitly felt that companies ought to be designing recruiting processes that could fairly identify any candidate who would be a good employee. But really it is sufficient for a company to come up with a process that gets sufficiently many hires at a reasonable cost. Doing things like reducing the applicant pool with arbitrary restrictions like job history or pedigree may reduce the cost of applying the later stages (because they would be applied to fewer people), even if it increases the salary candidates are hired at (because eg they are using the same arbitrary restrictions as many other companies).


The one theory I read about on the correlation between height and managerial position is that people's confidence correlates highly with their height as teenagers (I am assuming that people's teenage height also highly correlates with their final, adult height), and in a corporate environment, being marginally more confident goes a long way -- way more than being marginally better at programming or building spreadsheets.

And as someone who is in a bit of a managerial position myself, and having watched many others try to manage with various degrees of success, I'd say this translation from confidence to managerial responsibility is a feature, not a bug. You really do want to put a confident person in charge, if only because others are more inclined to follow her. An adequate plan resolutely executed is much better than a somewhat better plan timidly presented.

check out this old HN thread on "rich kids already won the career game" https://news.ycombinator.com/item?id=2296550


> marginally more confident goes a long way

...not sure. My own piece of anecdata here is that healthy self confidence is not really correlated with better career opportunities at all in our industry. If anything, maybe even the opposite is true. In my mind, companies promote people who have internalized the pre-existing power structure and perpetuate it [1]. These tend to be of the "insecure overachiever" [2] type, rather than independent thinkers with healthy self-confidence.

[1] https://www.amazon.com/Disciplined-Minds-Critical-Profession...

[2] https://www.amazon.com/Leading-Professionals-Power-Politics-...


Hard disagree. Being a large, attractive person who confidently states positions that are stupid and incorrect will garner the support of ignorant and stupid people - just look at politics. It helps you gain the support of people who can't tell either way whether you're right or wrong, so they just follow whoever seems to have higher social status or is nicer to look at. But it usually leads to disastrous decisions which the short nerdy person in the corner of the room was trying to tell everyone about, but nobody would listen, because they were small and unattractive and didn't have a big booming voice.


How are baseball scouts compensated?

Are there compensation schemes that could reward good analysis?


I don’t think so because there aren’t good objective measures outside of baseball.


They are typically compensated with money.


Well hiring for software engineers should involve fields outside their own while de-emphasizing leetcode. For example, there are psychometric studies comparing personality and job success.

Some of JP's lectures talk on this point quite a bit.

Workplace Performance, Politics & Faulty Myers-Briggs: https://youtu.be/GXHj7eZ23gk

IQ and The Job Market: https://www.youtube.com/watch?v=fjs2gPa5sD0


Regarding tall people in the workplace:

https://dilbert.com/strip/1992-08-27


There is actual peer-reviewed research about what makes a good software engineer which you can find easily with the right tools. This 2015 paper using a sample set of 59 Microsoft engineers across different divisions is one such study: https://faculty.washington.edu/ajko/papers/Li2015GreatEngine...

My experience in the industry is that no one in charge of either hiring or promotion is aware of the research, and the systems that support them have been structured without data and are prone to biases. The most bizarre aspect of our industry is that there is a high emphasis on work output, whether measured in LOC, number of PRs, number of releases, number of "story points," or some other metric, and yet those same metrics carry little weight in promotion processes.

I don't actually think those are great metrics to base career advancement on, but the contradiction is egregious.


Practical science in software engineering is generally disappointing, and this article fits that mold.

In their own words,

> The lack of specificity in our understanding ... Our understanding also lacks breadth: ... We took a first step in addressing these gaps

This doesn't represent established science, it represents a probative study. Moreover:

> Of the 152 engineers we contacted, we interviewed 59

That's pretty bad from the perspective of statistical strength: the size of the response bias is larger than the resulting sample itself, and the sample is also quite small. In these days when the replication crisis is well known, one should take the concrete results with a grain of salt. It's a good study, but much more is necessary before this becomes actionable advice.


I don't know why it's taken for granted that the face of a potential pro ball player doesn't matter at all. Obviously no one thinks a strong looking nose helps one throw the ball farther. Could be that franchises are looking for popular heroes that can sell jerseys.


There are other considerations. Didn't Barry Bonds take so many steroids that he dramatically changed his own facial structure?


> Basically everyone wants to hire talented folks. But even in baseball, where returns to hiring talent are obvious and high and which is the most easily quantified major U.S. sport, people made fairly obvious blunders for a century due to relying on incorrectly honed gut feelings that relied heavily on unconscious as well as conscious biases.

Brilliant food for thought! Thank you


I wonder if remote work will start to correct for the height bias at least.


>Deciding who to "hire" for a baseball team was a high stakes decision with many millions of dollars (in 2022 dollars) on the line

This is similar to football (soccer), e.g. a Man Utd £80 million signing.


Every time this comes up, I can only think about how Irken society in Invader Zim was supposed to be absurd, but human nature looked at absurdity and decided to make it a parody.


> Irken society in Invader Zim was supposed to be absurd

Noo, our snacks!

But srsly, took me a minute to realize you were referring to the Almighty Tallest.


I'm white and above the usual height limit set by ladies on dating apps. Despite this, my history is all grunt dev roles and no managerial roles. TIL I'm the baddie.


There are so few executives that not even the most arbitrary and idiosyncratic rules about who gets to be an executive would prove to be a windfall for anyone you know. There are about 6,000 exchange-listed corporations on NYSE + NASDAQ. If the government passed a law that every single listed company had to have a Cherokee Indian as CEO, the vast majority of the 400,000 or so Cherokees alive today would not win anything.


You're not the baddie.

The baddie is preconceived notions based on genetic markers.

And anyway, it's possible to have a +3 respect marker and still be under-respected by being black, female, wearing ill-fitting/unfashionable clothes, or displaying poor hygiene.


Same here... maybe there's some confirmation bias in these folk's observations? I see an awful lot of non-white, non-male, non-tall people in management positions whenever I look.


I'm sorry if this is off-topic, but why do people make websites with 250+ characters per line? Sure, I can copy the link into Safari and press the "Reader View" button to make it actually readable, but what I'm really wondering is: if someone puts so much effort into writing content, why don't they take the ~1 hour for some very basic CSS to make the default experience enjoyable?


Literally the reason why he won't fix it is because typographers were condescending to him: https://twitter.com/danluu/status/1228042394727149568


Ok, well I'll talk to him personally if he'll listen. I have eye issues, and even if I bump the size up 300% it's still really hard to read. I don't know how else to say this, but he's inadvertently destroying the experience of his website for lots of people, many of whom wouldn't even consider themselves disabled. I think if he could sit next to me and see how I suffer, he wouldn't need scientific studies any more than he'd need a scientific study about how cows feel when they get hit by a car. They moo their disappointment about the fact that their ribcage is broken.


That seems like an incorrect reading of the thread. It seems more like ‘I like long lines and found little evidence that they are bad.’ I believe that he’s read roughly all the studies and that they are unconvincing. That doesn’t mean there aren’t people for whom the formatting is problematic. I think it’s also a bit of countersignalling about the nature of the blog.

There are also easy workarounds, like making your window narrower, using your web browser’s built-in ‘reader mode’, or opening up the inspector and adding a single style to the body element to set a max-width, so I’ve never really understood this problem.
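For what it's worth, the "single style" workaround could look something like this pasted into the inspector's style pane (the 40em value is just an arbitrary comfortable measure I picked, not anything the site itself uses):

```css
/* Paste into the dev tools style pane on any wide-measure page.
   40em is an arbitrary comfortable line length, not a site value. */
body {
  max-width: 40em;
  margin: 0 auto; /* center the column */
}
```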


> I’ve never really understood this problem.

Oh no way. Yet here you are, despite having the option to say nothing at all about the problem you don't understand.

You absolutely do not need to go to bat for someone who requires convincing from studies about obvious things. What a weird and sad way to live this, your one and only life. Hope he sees this.



