Good at programming competitions does not equal good on the job [video] (catonmat.net)
372 points by jpn on July 7, 2017 | 166 comments



I'm not able to watch the video on my current computer,

but it's actually really typical that when something becomes an advantage for being selected into a certain pool,

the success of those in the pool after the selection will negatively correlate with that thing.

---------

the really obvious example of this is the hockey birthday thing from Malcolm Gladwell's Outliers

people with the earlier birthdays were more likely to make it past each selection stage in becoming an NHL player

but those with later birthdays who were selected in spite of them were typically more successful after the selection

---------

The authors contend that the strategy might actually work against a team's success because they found that players born later in the year and drafted later actually had more productive hockey careers.

Deaner said the study showed that men drafted in the second half of the year were about twice as likely to have successful careers in the NHL -- reaching benchmarks like 400 games played or 200 points scored -- than those born earlier in the year.

"If the team wasn't making this mistake, they probably would have been more successful," he said. "The guys born in the first part of the year are much more likely to be busts."

https://www.nhl.com/news/study-suggests-nhl-has-bias-in-favo...


I favour the simpler explanation: Campbell's law. [0] If programming contests are favoured, then people will optimize for them. Simplify/generalize even further and you get Goodhart's law. [1]

[0] https://en.wikipedia.org/wiki/Campbell%27s_law

[1] https://en.wikipedia.org/wiki/Goodhart%27s_law


> the success of those in the pool after the selection will negatively correlate with that thing

Could be relevant: https://en.wikipedia.org/wiki/Berkson%27s_paradox
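
You can see the effect with a throwaway simulation (purely made-up numbers, nothing from the talk): draw two independent skills per candidate, select on a noisy combination of them, and the survivors show a negative correlation even though none exists in the full population. Rough Python sketch:

    import random

    random.seed(0)
    # Two independent skills per candidate: "contest skill" and "job skill".
    candidates = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100000)]

    # Selection: hire anyone whose combined signal clears a bar.
    hired = [(c, j) for c, j in candidates if c + j > 2.0]

    def corr(pairs):
        n = len(pairs)
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        cov = sum((x - mx) * (y - my) for x, y in pairs) / n
        sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
        sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
        return cov / (sx * sy)

    print(corr(candidates))  # ~0: independent in the full population
    print(corr(hired))       # clearly negative among those selected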


Yeah, exactly


That's an excellent point! I can think of another example: in the movie "Hidden Figures", the black female engineers at NASA are much better than their white male counterparts simply because it was harder for them to get in. Perhaps the opposite is true today of engineering students at colleges with affirmative action?


Alumni preference is at least as big a factor in admissions as affirmative action, if not bigger. Many of the schools were segregated not so long ago, so you can guess how that affects the alumni pool.


It's pretty common for tech conferences to give preference for talks to tech minorities.

Whenever I hear a conference has done this, I subconsciously deprioritize attending a talk given by a minority, since they got in with a lower bar...

Terrible I know, but I'm not sure it is illogical.


This suggests that you think there is no perception bias that affirmative action is trying to counteract. You might be interested to know that when people try to correct for bias, they typically undercorrect.

Also, for people coming from more challenging circumstances, getting the same results is literally more impressive.


I am familiar with Outliers. I read the book. But I fail to see how this is relevant in any way.


He probably meant that people who do programming competitions are more likely to pass the Google interview (they'll be better at doing algorithmic questions quickly) but are not necessarily better suited to the actual job.

Just like hockey players born earlier in the year were more likely to be drafted.


To expand on this:

* Prior to officially selecting candidates based on performance in problems derived from programming competitions, candidates who excelled at programming competitions were likely to do well on the job.

* That correlation was observed on a wide scale by employers, so many companies -- Google chief among them -- started incorporating such questions into the official interviews.

* Candidates now observed the change in employer interviewing methods on a wide scale and adapted their preparation methods. This fundamentally changed the pool of people good at programming competition problems in such a way as to reduce the correlation between the original signal (good at algorithmic problems) and the goal (good at the job).

* Overall, widespread acknowledgment -- and all consequent changes in behavior -- of the original correlation between the signal and the goal significantly reduced the quality of the correlation.


I think it's simpler than this. I don't think there is much outright gaming of the signal.

Programming interviews and programming competitions are very similar, much more similar than programming interviews and real world software engineering. When you are selecting top programming competition competitors, you are implicitly selecting people who will absolutely smash your (non-design) programming interview questions. This has little to do with their effectiveness as software engineers.


In other words, what gets measured gets managed.

So if you measure hiring candidates by their performance in programming competitions, everyone will manage their own skills towards doing better in competitions.


Yes, but not just that. I think the bigger trend is that training oneself to excel at those types of problems meant (in addition to other things) one thing about you back then (that you were really into programming). Now that same behavior likely means that you want to get a nice job at one of the big tech companies, as a result of them publicly selecting for that. These are fuzzy indicators to begin with, but they're definitely different fuzz.


> I think the bigger trend is that training oneself to excel at those types of problems meant (in addition to other things) one thing about you back then (that you were really into programming). Now that same behavior likely means that you want to get a nice job at one of the big tech companies, as a result of them publicly selecting for that.

Thanks for the explanation! The idea being discussed didn't quite click for me until I read this.


Google's data shows that their interview process produces a low number of false positives, at the risk of producing a high number of false negatives.

That is, despite passing on otherwise talented people, those people who successfully pass Google interviews go on to be successful at Google.

edit to add:

Couldn't find that original article; this article goes on to speak about success predictors: https://www.wired.com/2015/04/hire-like-google/


Was it a proper double-blind study, substituting the interview result with rnd() to make a hiring decision? And did it establish the correlation a few years down the line?

Because what you've stated: 'data shows' ... "successfully pass go on to be successful" - sounds like cargo cult science or pseudoscience to me.


Do you really need to do the double-blind study, though? The median person cannot program at all, so using rnd() couldn't possibly have better than a 50% success rate (certainly lower; 50% is just a round number that I'm sure is larger than the fraction of people who can program). Is Google doing worse than that? I kinda doubt it.


Yes. You can do it at each stage as well. And we are not talking about hiring a general employee but a computer scientist or programmer. So employ rnd() after a resume screen. Google has already found that GPA and brain teasers had no positive effect. This is similar and it would be an interesting experiment.


Google generally isn't known for pseudoscience.


You've stated 'data shows'. So my question was - was it a proper double blind study?

Because it is definitely possible to do it properly. Substitute the results (or partial results) of the interview with rnd(), and use that for the hiring decision for a subset of candidates. Keep this information confidential. Establish whether parts of your interview process perform no better than randomness a few years down the line.

It's possible to do. Only I don't think this was done. And if it was not done, and the method was some 'data shows' hand-waving, it would fall under the definition of pseudoscience.


You're calling most scientific journals pseudoscience -- even the hard sciences, like astrophysics. Nobody's doing controlled studies of supernovas; they're drawing inferences from observational data. I don't think your definition of science is sustainable.


Even Google has a PR department dumbing things down.


Definitely. I arrived at the final stage of a Google interview, which I failed, and I can confirm that out of the 5 interviews, three were based on puzzles that I later discovered are found in books for coding competitions. On two I did a good job of working out a solution myself, but I completely got the third wrong. Whoever prepares for these kinds of competitions has a huge advantage in these kinds of interviews.


That's how I felt trying to find a job in the bay area. If I wanted to compete with the top talent then I needed to prepare for the interviews and practice those problems.


Your interpretation is too weak. It is not just "not necessarily better", it is the stronger "on average are probably worse".

The reason is that programming competitions give more of a boost to your odds of being selected than to how well you'll do on the job. So people who otherwise wouldn't have gotten in now will, and will not perform as well as the people that they displaced.

Which is what happened with hockey players. Being born at the right time of year put you in a bucket with people who were slightly younger than you, which improved your performance on the tests but didn't matter once you all grew up. So slightly worse people born at the right time of year displaced slightly better people born at the wrong time of year, and the average came out such that the people who got through despite being born in the latter half of the year were actually better.


The hypothesis is that the older kids are bigger and stronger but the younger kids are more skilled. And apparently this holds up all the way to the NHL, where the skill edge overcomes age.


It's sort of like Simpson's paradox. It's relevant because the statistical "paradox" (contradiction of intuition) described in Outliers is similar to the one in the video.


I'm also a former ICPC world finalist and I'm willing to believe this is the case in a BigCo because: 1) Top ICPC competitors tend to be extremely socially awkward, and may not work well in groups. 2) Vast majority of engineering work is very algorithmically simple, thus these folks don't get a chance to "shine".

However, I've noticed that they are popular in prop trading firms, where work tends to be done in very small teams or individually. I don't know how their performance correlates to fund performance.

If I were hiring, I'd still prefer to hire at least some top ICPC performers. The hard algorithms are rare---but can make or break your product.

I also think the knowledge learned from programming contests is invaluable. I'd like to be able to discuss bipartite matching or min-cut with my colleagues without eliciting a blank look.
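
To give a feel for the sort of tool I mean, here's a rough sketch of maximum bipartite matching via Kuhn's augmenting-path algorithm (toy code; the function name and input format are just for illustration):

    def max_bipartite_matching(adj, n_left, n_right):
        # adj[u] lists the right-side vertices that left vertex u can be matched to.
        match_right = [-1] * n_right  # which left vertex each right vertex is matched to

        def try_augment(u, visited):
            for v in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    # v is free, or its current partner can be re-matched elsewhere
                    if match_right[v] == -1 or try_augment(match_right[v], visited):
                        match_right[v] = u
                        return True
            return False

        return sum(try_augment(u, [False] * n_right) for u in range(n_left))

    # e.g. 3 candidates, 3 tasks: max_bipartite_matching([[0, 1], [0], [1, 2]], 3, 3) -> 3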


I agree for the most part, but I think it really depends on the product your team is building. I can think of at least a couple companies I've worked at where the hard algorithms weren't rare, they were actually non-existent. The product's success relied on user experience, a solid design, and exhaustively tested and maintained code.

The reality with many companies these days is that the algorithmically hard problems are solved in the frameworks and libraries they use, it's simply not necessary for most engineers to understand their inner workings.

I'm curious, would you prefer an engineer with solid software engineering and design knowledge, or one with minimal experience building real software, but very in-depth algorithm and CS theory knowledge?


> I agree for the most part, but I think it really depends on the product your team is building.

I completely agree. However, 1) Google---whose CTO is the originator of the claim in discussion---definitely works on hard algorithms and 2) you never know how your product could grow. A feature which could get dismissed as "impossible" might be implementable by the right talent. Presumably these teams don't need to be full of top algorithm folks.

> I'm curious, would you prefer an engineer with solid software engineering and design knowledge, or one with minimal experience building real software, but very in-depth algorithm and CS theory knowledge?

Depends on the size of my team and what I'm trying to deliver, I suppose. In general I'd aim for a balance in the team, maybe 3/4 engineering/design + 1/4 algorithms. I feel like it's easier to learn design patterns than how to use algorithms creatively.


> 1) Google [..] definitely works on hard algorithms

True, but Google also has over 20,000 engineers working on various things. I think only a tiny minority of those are actually working on hard algorithms.


It's like only hiring NASCAR drivers to drive taxis because "they might need to go fast".


Indeed, for taxi drivers it's better if they go slower and entertain the guests along the way.

I have had top algorithm people work for me and they can make magic happen. But they tend to have the quirky personalities of wizards. I once had an amazing coder ask me if he could do some side work so that he could pay off his credit cards; sure, no problem. The next week he came back and told me how he had bought a new 10k telescope....


Exactly. They don't just hire programming contest folk. Most of the people they hire are those with engineering experience, etc.


I used to spend some time on TopCoder and got to be fairly good but not at the top "red" level... There is some transfer to improved code quality but I think it's generally not something that made a huge difference to what I do as a developer.

There are plenty of people who can do really hard algorithms but may need more time or research. Real life doesn't always present you with the same canned problems that are used in competition, and you're not operating under the same constraints. A competition problem typically starts from an algorithm, somewhat like a comp-sci exam question. As a competitor you need to quickly recognize the algorithm and then quickly write an implementation. I'm sure a lot of the best competitive people would draw a blank when presented with a problem that isn't a well-known algorithm and certainly wouldn't be able to solve it within the time constraints.

Basically it's a game.

What I'd say is that outstanding performance in programming competitions probably correlates to some degree with intelligence. Intelligence correlates to some degree with being a good software developer.


It depends on what position you are trying to fill: if your business has various hard and general algorithmic problems that you need someone to solve, then hiring an ICPC regional champion would probably be a good thing. But this is not always the case...

I wouldn't hire an Olympic medalist runner to do pizza delivery because: 1) the person will probably get really bored (and may quit or perform badly after some time), and 2) the delivered pizzas will probably be a mess inside the box.

I believe programming competitions would do themselves a favor by changing their currently ambiguous name to algorithmic competitions. Then engineers, who are also programmers, would be OK with it and we would stop having these kinds of threads every few weeks.

That being said, I personally enjoy competitive programming and I do agree that the knowledge you acquire is invaluable :) and I also think that most engineers should practice with online judges now and then to be better at their jobs.


> I wouldn't hire an Olympic medalist runner to do pizza delivery because: 1) the person will probably get really bored (and may quit or perform badly after some time)

I think it's a bit more complicated than that. What if Olympic medalists were conditioned to think that delivering pizza is the coolest job that exists? What if their scooters have massaging saddles and 5.1 hifis?

I definitely agree this market (segment) has, in a way, a very misled attitude/approach, but I also think there is an (evil) art to marketing such workplaces.


> Vast majority of engineering work is very algorithmically simple

anecdotal_evidence += 1: Am an engineer; the only algorithmically interesting task I had in the last three months was to quickly find all minimal solutions of a given instance of a certain class of constraint graph problems. That's about average, at my job. Four algorithmically interesting things a year. And I guess I might even be one of the luckier engineers.


I just match APIs up 60% of my time. 20% in 75% useless meetings. And another 20% yak shaving.


While these problems don't come up frequently, I've found that in a typical project anything involving a graph has a good chance of being implemented wrong.
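
As a toy illustration of the kind of detail that goes wrong (example mine, not from any particular project): in a plain BFS, nodes have to be marked as seen when they're enqueued; mark them only when they're dequeued and the queue fills with duplicates, and forget the check entirely and it loops forever on cyclic graphs.

    from collections import deque

    def bfs_distances(adj, start):
        # adj: dict mapping node -> iterable of neighbours
        dist = {start: 0}            # doubles as the "seen" set
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in dist:    # mark on enqueue: the easy detail to get wrong
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    # bfs_distances({"a": ["b", "c"], "b": ["c"], "c": ["a"]}, "a") -> {'a': 0, 'b': 1, 'c': 1}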


To add to point 2), most big companies have a strong bias against 'smart' solutions. If something is too complex to be fully understood by half the engineers:

a) it won't be trusted

b) it won't be maintainable by any random staff, which puts an additional risk on choosing that solution.

The maintainability argument will usually be enough to can any idea that only a few people are comfortable evolving.


The interview process would indicate the opposite.


To be honest I think a lot of company representatives (CTOs, hiring managers...) don't accurately understand or voice what they really need.

A lot of them say they want 'A players', but the organization is not ready to value people willing to rock the boat, deeply challenge assumptions, or put higher-ups under scrutiny over their decisions.

Same with hiring super smart people but not wanting to commit to risky bets or to fail often.

In these companies the most demanding people come and go, and more lenient people ('B players' ?) stay to work within the system or try to make things better more incrementally.


Aren't contests normally under huge time pressure? I would think that a lot of people could solve even quite hard problems given ordinary conditions like having lots of time, colleagues, and reference materials available.


At the same time, if you've never seen a solution that uses something like DP, you're not going to see a DP problem and think "oh, I can use DP for this". Part of it is ensuring that programmers have awareness of the tools at their disposal, so they can pull them out when they're under a deadline to get something done on time that's performant.
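
For anyone who hasn't run into it, a toy example of the kind of thing I mean -- fewest coins to make an amount, where memoizing the recursion is the whole trick (made-up example, not from any interview):

    from functools import lru_cache

    def min_coins(coins, amount):
        # Fewest coins from `coins` summing to `amount`, or None if impossible.
        @lru_cache(maxsize=None)
        def best(rest):
            if rest == 0:
                return 0
            options = [best(rest - c) for c in coins if c <= rest]
            options = [o for o in options if o is not None]
            return min(options) + 1 if options else None
        return best(amount)

    # min_coins((1, 3, 4), 6) -> 2 (3+3), where the greedy answer 4+1+1 uses 3 coins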


Companies don't have 2 hour time limits on solving their hard problems - if you run into hard algorithmic problems in your problem domain, you should just have someone with a graduate degree in CS theory. Let them work out a solution that's actually correct and they can explain why, and then the engineers can implement said algorithm.


I am also an ACM ICPC finalist (our team was 2nd in the world, 1st in North America in my final year of eligibility).

I think there are a few very specific skills you learn in programming contests that are useful in the real world:

1) Once you understand the problem to be solved, you can simply sit down and write the code to solve it. The ability to turn ideas into code in minutes instead of hours or days is invaluable. It lets you try lots of ideas, because the implementation step is easy. You can pick the ones that work and throw the rest away, and you don't mind throwing them away because they weren't a big investment. You don't realize how useful this is until you start meeting people who can't do it.

Our coach always told us that there is at least one easy problem, and someone's job the moment the contest starts is to identify it and bang it out. If we hadn't solved one problem in the first 20 minutes, we were already losing.

2) You learn to write code that is simple and correct the first time. Idioms that avoid special cases. Simple data structures ("No pointers!" was common advice, though of course reality was more subtle than that). Understanding when big-O complexity really matters, and when you can use a sub-optimal solution that's easier to code and reason about.

Sure, someone who is smart can write crazy complex code that only they can understand. But crazy complex code doesn't win contests. If you make a mistake you will burn all of your time trying to figure out what you did wrong, and the worse the code is, the worse of a hole you will dig yourself into. Write simple code that doesn't leave much opportunity for mistakes.

There's a scene in Jack Reacher where Tom Cruise's character says, "James Barr is a sniper. He's not the best. He's not the worst. But he trained non-stop for two years. What does training like that do? What does any training do? Skills become reflex, muscle memory, you do without thinking. It also makes people who aren't necessarily smart seem smart by beating some tactical awareness into them." That's what contest programming does.

3) You learn to identify your limits. There's usually (but not always) at least one problem you should just throw away as "too hard" for the time allotted. You won't solve it, so any time invested in it is a waste. In the real world, where you can't just decide not to do your job, that means you know the difference between when you can go fast and when you need to slow down and reflect. When you need to write a test to really know if the code you just wrote is correct. When it's better to step back and simplify the problem, rather than beat your head against it.

Yes, people like to focus on the "hard algorithms", but really this is not what makes good contest programmers. People aren't deriving Ford-Fulkerson on the fly during a contest. They memorize a set of canned algorithms and apply them when necessary. You learn to recognize and value simple, robust solutions to problems. You carry a toolbox of these wherever you go. This toolbox keeps growing long after you stop competing, and it contains a lot more than fancy algorithms.

Now, there is also plenty you don't learn doing contest programming that you still need to know in a real job. But these are the things I learned that have helped me.

I'm still waiting for the day I need to compute the coefficients of a rook polynomial, though.


I would disagree that they'd be better at solving hard algorithms as well. The hard algorithms are being solved at the top theory groups at the top 10 schools.

I did a bit of programming contests in school as well and most kids practiced hard and got good at the tricks and patterns of solving those problems. The algorithms were mostly undergrad or grad level at best. They were not breaking new ground.

As to whether top ICPC talent correlates to IQ and capability: most likely, although the same can be said about a really high GPA. People like Tomek Czajka will shine in all areas.

I've worked at a couple of the Big-4 companies before and the best programmers I met there were not competitive programmers.


But as much as you need a good algorithm, you need it to be testable, reliable, and predictable.

This involves some teamwork.


This is probably true. I agree. I would aim for balanced teams, some top ICPC performers balanced with people who are better at solid design, creating maintainable codebase and working with business people to define requirements.


Just as there is no such person as a 'Good CEO', only 'a good CEO for a particular company, at a particular point in time', there is no such person as a 'Good Programmer', only 'a good programmer for a particular job, at a particular point in time'.

Context matters a lot. One does not use an AK-47 to kill a mosquito. It is terrible for that job. But that does not make the AK-47 a terrible weapon.

"Good at programming competitions does not equal good at any programming job" would be a more appropriate sentence.


I think that there are personal traits that make people perform well in most situations, but perhaps not in all extremes. As continuity is normally good, you try to find those, and change only if you really have to.

A programming competition evaluates almost none of the traits that matter when doing a marathon run as a high-performing team.


Have you tried killing mosquitos on the back of a deer[1] standing behind a tree at 400yd? Even the AK with its heavy cartridge isn't enough. You want a full-power rifle, especially if the tree is a hardwood. /s


In the chapter of his interview in Coders at Work[0], Norvig found that the strongest correlate in the interview process with success at Google was, paradoxically, having been given the lowest possible score by one of the interviewers. He surmised that this was because, in order for such a person to have even gotten hired at the end of the process, someone trustworthy must have seen so much potential in the prospective hire that he strongly advocated for the person to be offered a position, which worked in spite of that other low rating.

Kind of ironic for a company whose product values are so tightly tied to quantitative data.

[0] http://www.apress.com/us/book/9781430219484


A qualitative assessment of that factor is hard because Google doesn't hire the people that it doesn't hire.

Another assessment could be that a divergence of opinion among interviewers is itself a positive sign -- programmers with strong controversial opinions who are willing to hold to them even in an interview setting might be better programmers for that.

A less sanguine assessment is that "success at Google" correlates with people who generate controversy around themselves, simply because that is something that creates visibility.


More recently, people who have data across many interviewees and interviewers at multiple companies, and are willing to analyze it (so they can see the ongoing performance of candidates a company passed on!), have pointed out:

http://blog.interviewing.io/you-cant-fix-diversity-in-tech-w...

After looking at thousands of interviews on the platform, we’ve discovered something alarming: interviewee performance from interview to interview varied quite a bit, even for people with a high average performance ... roughly 25% of interviewees are consistent in their performance, but the rest are all over the place. And over a third of people with a high mean (>=3) technical performance bombed at least one interview.


This isn't surprising - in fact, it would be surprising if it wasn't the case. This fact says nothing about the ability of Google's hiring bar to distinguish between good and bad hires, except that it is not a perfect signal.
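
A quick back-of-the-envelope simulation gives the same flavor of result (numbers entirely made up, assuming each interview score is a noisy draw on a 1-4 scale around the candidate's true level, which is what the >=3 threshold in the quote suggests):

    import random

    random.seed(1)

    def interview_scores(true_level, n=6, noise=0.8):
        # clamp noisy draws to a 1-4 scoring scale
        return [min(4.0, max(1.0, random.gauss(true_level, noise))) for _ in range(n)]

    strong = [interview_scores(3.3) for _ in range(10000)]
    high_mean = [s for s in strong if sum(s) / len(s) >= 3.0]
    bombed_one = [s for s in high_mean if min(s) <= 2.0]
    print(len(bombed_one) / len(high_mean))  # a sizeable fraction, even for strong candidates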


Totally agree. It just serves as a useful reminder though not to fall prey to the fallacy of deliberately chasing the correlational measure as a matter of policy.


I once had an online discussion about the difference between competitive programmers and professional programmers, and which were better. Someone argued that competitive programmers were better, because they had to perform in more extreme circumstances. As he put it: they were sent into the forest with a knife to kill a lion.

Everybody else ran with that metaphor. Someone asked what a lion was doing in a forest; don't they live on the savannah? I asked whether he was sure there was a lion; plenty of times I've been sent to kill a lion and ended up having to kill a goat or an elephant instead. Are we even sure it needs killing?

I think that's the difference between a competitive programmer and a professional programmer. The competitive programmer will be much faster with a solution to the given problem, but the professional programmer will solve a better problem.


Mountain lions like forests just fine! Unfortunately for them, we chopped most of the forests down.

Assuming you're in the US and somebody tells you there's a lion around, pretty good odds it's not one of the big orange ones.

EDIT: Bay Area story published yesterday: http://www.nbcbayarea.com/news/local/Several-Mountain-Lion-S...


A mountain lion is a very different animal from a lion. Different subfamily and genus, even. I don't live in the US, but when someone tells me "lion", I tend to assume they mean a lion, and not a puma. Though if there's any doubt, that's absolutely something to ask questions about.


That conclusion seems just as arbitrary as the one you're contesting.


I have had a bad experience with an algorithmic topcoder.

The guy has a brilliant mind. But he doesn't understand the big picture or why his code would fail when integrated with big codebases.

He thinks his work is "done" when he has written his tiny little code with some queues and trees. Then he leaves it for someone else to integrate and thus really solve the problem.

The best engineers I found were the ones who took ownership of their projects and had the ethic to dig deep. Not the ones who could solve toy algo problems.


Here's a list of companies that don't do this: https://github.com/poteto/hiring-without-whiteboards

I hope it's the beginning of an industry-wide trend away from these types of interviews.


Look at programming competitions from an assessment perspective: what are they measuring and is what's being measured good or bad for a job?

Ability to work under short time constraints (probably good), to hack out some solution that will work temporarily (good) but probably not solid (bad), to forego much time to consider the implications of design and implementation choices (bad), to develop without communication if solo competition (bad) or without communication outside of the small group of core developers (bad), to build a solution without getting feedback and refinements from stakeholders (bad), and so on.


People who enter programming competitions are looking for some sort of glory: to be stars. When they don't get it from the job, they get bored.

I suspect that the people good at programming competitions could easily perform well on the job, if the motivation were there. I don't think it necessarily has anything to do with short-term versus long-term problem-solving focus.

There are plenty of short-term problems that you have to solve on the job to be effective. You're not doing them in competition against anyone such that if you procrastinate, you will lose, so the motivation vaporizes.

Also, since job interviews are like programming competitions, people who are good at programming competitions figure they can easily get a job anywhere, and to do that repeatedly. They are not motivated into working hard by job security concerns.


I would tend to agree, and while I was never very good at programming competitions, when I saw some of the winning code on Topcoder, I somehow lost interest. If that's the code I would write if I learned to be better at competing, then I should rather spend time learning something else.

On the other hand, something like Project Euler or even Advent of Code is very nice, if you do it at your own pace, or for learning a new language.


Although, I do think that competitive programmers can become great programmers more often than not.

They practice writing correct for-loops, branches, recursive functions, efficient graph searches, input/output, data structure use, etc. at a very fast pace. To be able to do that requires you to chunk a significant amount of information.

I found that doing stuff very fast correctly makes you learn and internalize concepts quickly and more deeply.

This is all unfortunately anecdotal; I'm not sure how to google for such research (not just in competitive programming but in "learning to do stuff fast improves learning rate").

Yet with every instrument I've played and tried to learn, the moment I tried to play stuff fast but correctly (be it drums or piano), I had to improve my technique, internalize the rhythm patterns (more complex rhythms are insanely difficult to do fast), memory, etc. With it came a significant amount of progress.

The same thing happened to me with language. I had been speaking English for 20 years but still had trouble with fluent pronunciation (despite my writing feeling natural); I knew what I had to say but somehow my tongue got all tangled. Yet when I tried learning some rap songs that use insanely fast diction, my speaking improved to the point where it felt normal.


I haven't done a code competition before but my understanding is that code competitions reward cowboy coding over good engineering practice, i.e. implementing features while paying no heed to maintainability.

When you have that mindset and start dealing with a codebase as monstrous as Google's, it sounds like a recipe for some serious technical debt.


It continues to surprise and frustrate me that as an industry we continue to highly prize proxy signals for engineering skill, when engineering skill is so directly measurable.

Why even bother with measuring things that are N-degrees removed from actual engineering, when you can just get people to engineer things?

I know others on HN have been hammering this point home for years, but until something changes it deserves to be repeated ad nauseam: work samples work samples work samples work samples.

Stop with the trivia questions. Stop with the contrived algorithms questions. Ask people to design systems, ask people to defend their designs, ask people to write real runnable code that directly relates to the work they will be doing at your job, ask people to review a real piece of code written at your company, ask people to critique real design produced at your company. Anything but what we're doing right now.


It's not that simple.

Many good engineers, especially those who aren't young men, don't have existing work samples they are able to share. Perhaps, by choice or otherwise, they have a personal life that does not allow them to write code for free. And their employers won't allow them to share work code.

And many good engineers are not willing to spend an unpaid weekend writing code auditioning for a job they aren't yet sure they want.

And, as you know if you've ever worked with a disruptive colleague, or had customers, engineering in a team for a product is much more than just writing code.


That's not what a work sample is.

A work sample is a (usually, hopefully) standardized piece of work you request from all candidates that mirrors the actual work they'd be doing on the job.

There's no reason a work sample needs to be so onerous that it costs you a weekend. What people forget when this comes up for discussion is that interviews are work. In fact, they are themselves onerous work, since they require you to be on-site and intensely engaged in ways you don't have to be to do the actual work of a job.

A work sample, on the other hand, can be done from your home, with a beer next to your laptop if that's how you roll.

The notion of audition work has definitely been abused in our industry. People get work samples that aren't standardized. They get work samples that are later ignored. Work samples don't have objective scoring rubrics. Some annoying companies assign new features for their products as "work samples".

Done carefully, though, with objective and predictable grading and calibrated to offset in-person interviewing time, they're superior to any process we have.


As an applicant, I'd be all for going the work sample route, but only if it's paid for and if it were used in lieu of the typical face-to-face hazing session rather than in addition. I notice more and more companies are asking for work samples (or "homework assignments" or whatever they want to call them), but they just pile it onto their already onerous process.

The major pain point in interviewing (at least for me) is the massive time sink for a very small chance of success. I'm spending a few hours on the application, then a few more doing phone screens, then a weekend doing a work sample, then I blow a VACATION day (which has monetary value and of which I have only 10 per year), and at the end of it all who knows what my odds even are? Multiply that by, let's say, 10 companies per year, and I've invested 31 days of my life, have no vacation left, and I still may end up with nothing.


I think paying for interviews is problematic for a bunch of reasons.

On the other hand, I agree wholeheartedly with people who reject "homework assignments" layered on top of a standard interview process.

The onus is on people who want to take advantage of work samples to:

* Ensure that the time they're asking of candidates is offset by lowered time demands elsewhere

* Ensure that they're using the work samples objectively, so that people aren't asked to do coding work as part of a crap-shoot application process.

The process we used at Matasano and NCC was less demanding than typical job interview processes. The challenges were simple and self-contained, and when they were completed we could tell you with a decent degree of confidence whether you were likely to be offered a job at your (shortened) on-site interview.


> I still may end up with nothing

What do you end up with if you filter out companies that ask for work samples?


1 additional free weekend for every one I don't do.


I'd definitely say that if a company is handing out work product assignments that take an entire weekend, it's somewhere you don't want to work.


As I said, many good engineers are not willing to spend unpaid time auditioning for a job they aren't yet sure they want, doing work they can't reuse, by taking a formal standardized test.

It may in fact be a better hiring practice in terms of its ability to predict job performance. But you will _lose good candidates_.

And in particular, you will lose experienced candidates. You are embedding biases toward younger (and therefore less experienced) male coders in your hiring practices if you require what is, from their point of view, free work. (You can see the biases in your "with a beer next to your laptop" remark, for example.)


I'm an experienced engineer (10+ years) and I would favor this process way more than the current algorithm/DS interview. It's the algorithm/DS interview that really favors young graduates. They already have an advantage because college knowledge is still fresh in their minds. And they often have the free time to grind leetcode for months. Experienced engineers often have families or other priorities and free time is hard to find.


You updated your comment after I replied to it, and so I updated mine.

Source: I ran a work-sample recruiting process for the largest software security company in the US (after it acquired my startup, which was one of the top 5 in the US), and our process most definitely was not biased to younger workers. Work samples drastically improved diversity.

Edit

You've edited your own comment again, which makes you very hard to respond to. If you want to rebut anything I've written so far, can I ask you to copy it into a reply to this comment?


Sorry, new commenter, unfamiliar with the etiquette of the edit button. I'm done.


I agree with you somewhat, but couldn't you just as easily replace "unpaid weekend writing code" with:

- unpaid weekend updating resume

- unpaid weekend practicing algos

- unpaid weekend browsing job sites

just as easily? I don't think asking job seekers to be interested enough to give a work sample is an undue burden. If that filters out people playing the numbers by applying to every opening...is that such a bad thing?


If I spend a weekend updating my resume, practicing algos, or browsing job listings, then that time is useful for all jobs I'm applying to and interested in.

If I spend my weekend on a work sample assignment, it's only applicable for that one company which might very easily not give me a compelling offer or be somewhere I'd want to work.

At this point, I'll only do work assignments if I'm already very familiar with the company and have reason to suspect they'll give me a compelling offer.


It's a problem of scale. Spend half an afternoon skimming through CtCI (shouldn't need much more as a seasoned developer) and you're good for N number of job interviews. I'll take that rather than spend 4*N hours on take-home problems in addition to time spent on-site.


It depends on how far along the interview process you've gotten to with a potential employer and if this is a step away from being hired.

If the potential employer is asking for code samples up front, I think that's a bad sign. Let us talk about the work environment, expectations of the position, and then we can focus on my tech skills.


Usually the work sample is something simple that doesn't take longer than a day or two. Many interviews that don't do work samples can go on longer than that and accomplish nothing. I was hired after doing a work sample that took a day. I was still paid for my work, regardless of being hired. The work sample wasn't "standardized", just a typical task one of the other developers would have done otherwise. So I would encourage any employers reading to try this process out--it's better for both parties.


>You are embedding biases toward younger (and therefore less experienced) male coders

Younger maybe, but where does the male part come from?


Less likely to have personal obligations to others, e.g. family. Less likely to feel social pressure to do time-consuming emotional labor outside of work.


How is the standard interview process, which often requires candidates to submit to multiple rounds of in-person interviews, sometimes even requiring travel, better than a process that people can do mostly from their homes?


Honest question:

I'm the tech lead for a startup with just a handful of programmers. I'm interested in hiring someone who can help us come up with well-architected, novel solutions to the problems that we're already solving. The issue is, we're solving them in a way that turns out to be pretty difficult to maintain and change. I'd like a totally new, fresh approach that takes the lessons we've learned and creates something that's much easier to write fixes for, much easier to extend with new features.

I need someone who doesn't just build on top of existing, well-defined codebases/APIs. I need someone who has the ability to engineer a complete, well-rounded, extensible codebase/API himself.

It would seem to be remarkably difficult to test for this ability. Even work samples that can be done in the short time period you describe wouldn't be able to test for this ability.


Short work sample tests don't need to be simple work sample tests. One of Matasano's work samples required candidates to reverse-engineer a binary protocol, build their own client for it, and then use that client to find vulnerabilities. If you had no aptitude or experience with that problem set, finishing it would take a very long time.†

Ultimately the problem you're really describing is the challenge of hiring people you don't really know how to hire. If you know very well how the role you're hiring for is supposed to function, you should be able to generate challenges within whatever constraints your candidates have. If you don't know very well what the capabilities of that role need to be, no interview process is going to work reliably for you.

(we didn't care, by the way: if you did well with that work sample, it mattered very little to us whether you were prepared when you started it).


"build on top of existing, well-defined codebases/APIs." is not necessarily easier than "engineer a complete, well-rounded, extensible codebase/API himself."

I think these are two different skills. Some programmers are better at building on top of existing codebases; some programmers are better at building things from scratch.

maintaining vs starting.


I don't mean to propose that people spend their off hours writing work samples - that's unreasonable for all the reasons you've given and more.

What I mean to propose is that we dramatically alter the current on-site interview process used by most companies.

Instead of 5-7 interviews, consisting of 5-7 independent but ultimately equally contrived algorithms questions on a whiteboard, use the several hours you have the candidate to produce real, working code that reflects the work that your company does.

Which is to say, this requires no more time commitment than the existing interview processes at typical tech companies.

If the work your company does is deeply algorithmic, this will be reflected in the work sample produced via this process. If the work your company does is more heavily UX-oriented, this will be reflected in the work sample produced, also.

Instead of "over the course of a full-day on-site round of interviews, candidate produced several short snippets of hand-scrawled code solving various CS textbook problems, which we will now use to infer general engineering ability"...

... you can say "over the course of a full-day on-site round of interviews, candidate designed and produced a module of code that does [X small thing that company does], which can be assessed with the same rigor and methodology we already apply to our existing work and employees"


While I agree that a focus on "work samples" is inappropriate, I must emphatically agree with the parent's broader point.

The way companies in this industry conduct interviews and measure performance is like an aerospace engineering company testing candidates for satellite engineering on their understanding of the Standard Model (physics) or group theory or some such. There's too narrow a focus on what is required of an engineer in the software industry.


You are allowed to pay somebody for a work sample.


> don't have existing work samples they are able to share.

This, at least, should be fixable.

Don't work at places that won't let you talk about your work and demonstrate your competence.


I don't think that's always possible. What if you worked for Healthcare or Security or Finance, where there are strict laws for keeping code/design private?


What are examples of such laws? There are laws allowing prosecution if you break agreements, sure, but the agreements themselves are what can be fixed by encouraging the market that they're incorrect.



That seems targeted at .gov employees, which is a little different.


I actually like the current interviewing approach... and I feel like we are a silent majority. I think whiteboarding / coding algorithm and design problems is pretty fair and reflective of engineering skill.


Why do you think that? I consider myself a decently productive engineer and I have always loathed the fact that I have to study for most of my interviews. Study things that I have almost never used in my actual engineering career. In a situation nothing like actual engineering work.

My favorite interview process (given by the company I currently work at) consisted of one coding prescreen that wasn't terribly difficult, one session where I simply talked with two engineers about past projects I worked on, and then one higher level architecture problem that involved pseudo-code but not some esoteric algorithms.

That interview process, I think, is way more representative of my actual day-to-day work than any algorithm interview I've done. It de-emphasizes the ability to figure out / regurgitate previously memorized tricky algorithms on the spot (when do you ever need to do that during the work day?) while emphasizing the ability to communicate how and why you make decisions.

Which is an incredibly important part of engineering, way more important than whatever skill whiteboard algorithm interviews test.


How is whiteboard coding reflective of engineering skill? At what time, during any job you've had, have you needed to instantly write code on a whiteboard, in front of peers, in a matter of minutes? Almost by definition, it's not reflective of engineering as actually practiced in a job.

I'm a far bigger fan of at-home coding exercises. Yes, I know, some people get annoyed at these because they think you're asking them to do free work in their spare time. But what other way is there to test people's coding under circumstances that most adequately mirror those of an actual job? E.g. they have a (relatively) unlimited amount of time, they have access to Google, their IDE, etc.

I'd much rather see what someone can come up with, in response to some novel problem, based on a couple of days programming, than what they can hack out on a whiteboard based on thirty seconds of thinking. The former seems closer to what coders are actually required to do, day on day.


> At what time, during any job you've had, have you needed to instantly write code on a whiteboard, in front of peers, in a matter of minutes?

Well, constantly, actually. Although I'm thinking of designs and snippets rather than actual functions. I guess it depends on what you're asking people to whiteboard during the interview. I agree that asking people to whiteboard qsort is silly, but walking through design alternatives with occasional code snippets to illustrate implementation options is a pretty basic skill.

> based on a couple of days programming

Either your company is very well-known and very attractive to candidates, or this is going to incredibly restrict your candidate pool.

I think smaller work samples are a great idea, I think code reviews are a great idea, but asking for two days sounds like a bit much, especially early in the process.


> Although I'm thinking of designs and snippets rather than actual functions.

But that's the thing -- at whiteboard interviews, they don't ask you to produce "designs and snippets". They make you write actual working classes and functions.


> At what time, during any job you've had, have you needed to instantly write code on a whiteboard, in front of peers, in a matter of minutes?

Fairly regularly. Like when I'm explaining to someone how something works after they've asked me about it.


You regularly have to instantly write code on a whiteboard, for a problem from undergrad CS coursework that you haven't necessarily worked on before, in a constrained time, with your job hanging in the balance?

I'm skeptical.


No. You just changed the question. Even most whiteboard interviews don't meet your newly narrowed criteria. I just meant to state that I regularly write code on a whiteboard in my real work. You could argue it's time constrained too. At least as much as it would be in an interview.


?? It's not something that's hard to do; it's like the foundation of my career. I use the principles all the time. My whiteboard / verbal / code solutions are based on that.


Using the principles all the time is different from regularly reimplementing CS textbook trivia under conditions in which your job is on the line. Where do you work, that you have to constantly re-invent the wheel over and over again under such constraints? I'd consider such a job to be hell. Not hard, just tedious and uninteresting to the point of pain.

I'd also not consider it to be an engineering job. What you describe seems to me to be more like a CAD technician job at an engineering firm.


You're the overwhelming majority, which is frustrating, because the method you're advocating for empirically and clearly does not work well.


according to some random people on the internet?


The current approach has a huge false negative rate. It keeps out the bad apples but also marks a lot of good/great apples as bad. The time invested in studying for these interviews is quite onerous in itself. So I'll say I'm not a fan of the approach.

That being said, we do use the method to interview where I currently work. However, the problems are not some made-up situation or a test of data structures/algorithms, but a set we've encountered in real life.

From what I've heard the whiteboard method isn't really popular with many companies today either, but everyone is sticking to it until some other alternative which avoids false positives presents itself.


It is reflective of skills at internalizing and applying undergrad (and sometimes graduate) level academic trivia in a familiar context. The interview process you and other "silent majority" members think is effective at measuring engineering skill is not, actually.

I really wish you and so many of your colleagues would spend a few weeks with engineers in the physical science engineering disciplines. What we do in this industry is a farce, with respect to engineering.


I like it... because I'm good at it, so it's easy for me. However, I get no idea of what working at the company is like (beyond what can be gained from asking directly), and the company gains no information about my suitability for that particular job, since they're assessing me on exactly the same criteria as every other job.


What's ironic to me is that a lot of people are going to see this video and just start using programming contests as a negative proxy signal for engineering skill.


I am currently reading "Hackers: Heroes of the Computer Revolution" and it seems like this culture of optimizing algorithms ("bumming") comes from the extremely early days of programming, when people were laying the groundwork for all the basic functionality of CS.


Well it's not just that -- algorithms do matter, and a very small number of programmers do move the needle in this industry through their ability to shine in that area.

But the vast majority of the time -- that's not what your company needs at all. You need someone who's smart and reliable, and more to the point, believes in your mission. And who will go in and do all that far from algorithmic, highly unglamorous stuff that keeps your business from going underwater, day in, day out.

But of course - those skills are difficult to evaluate in a short amount of time. So instead companies go for a skill that can be "measured".

A skill like -- you guessed it -- algorithms.


according to who?? like there are people who post stuff like this all the time, but it's just the anecdotes of a small population. companies have tried many different techniques and this is currently working the best. no top notch engineer is going to do an 8-hour take-home test for every interview


> companies have tried many different techniques and this is currently working the best.

Is it, now? The best thing you can say about it is probably "it seems to kind of, sort of work" -- at the cost of burning through a whole lot of candidates, and their presumably worthless time.

> no top notch engineer is going to do an 8-hour take-home test for every interview

Yup, take-homes suck, also. But that doesn't mean that complementOf(8-hour-take-home) is the right answer, either.


And with a work sample, how do you know:

1) how well they work with others?

2) how long it took them to come up with the solution?

3) if they actually came up with the solution themselves


> 1) how well they work with others?

Whiteboard sessions don't tell you that either. Only whether they got lucky, and you picked a problem they already know (so they can breeze through the solution in front of strangers with confidence). Or if they can at least suck it up, and pretend they enjoy this nonsense for 45 minutes (or 4-6 hours, depending how crazy your company is about this stuff) to get a job.

> 2) how long it took them to come up with the solution?

Does it matter? Really now. What matters on the job is did you go the last mile, think about the corner cases and pitfalls, and make sure your solution was cleanly coded and documented for the next guy or gal. Not whether you solved that silly HackerRank problem in 35 minutes as opposed to 45.

> 3) if they actually came up with the solution themselves

This one's pretty simple to sniff out, actually: "So I'd just be curious, why you did X here and not Y?" There's pretty much no way a bullshitter can answer that question.


Totally agree, and if someone does not have code they can share, they have no passion for programming. Therefore no job with me.


As a long-time interviewer, I've learned that a candidate being good at programming competitions means that they're probably good at programming competitions.

It's a weak signal either way for success or failure at interviewing and being able to do the job. Part of the problem, as we've discussed on HN so often recently, is that a programming interview has to waste time with FizzBuzz-style questions just to flag the candidates with great resumes, transcripts, and phone screenings who still actually can't program a computer to solve even a trivial problem.


> who still actually can't program a computer to solve even a trivial problem.

Or who get nervous, and freeze up.


Here are some of the positive qualities I would expect from a competitive programmer compared to the general CS population early in their career:

- knowledgeable at algorithms and data structures

- good at analyzing correctness and edge cases, even on simple non-algorithmic problems (e.g. FizzBuzz)

- accustomed to working hard and learning new concepts. This attribute is not specific to competitive programmers – for example, I'd expect it similarly from an open source contributor – but it's higher than for a typical college student.

Some negatives I'd expect, which are fixable over the course of their career:

- over-confidence in code, under-testing

- less skilled in OOP, coding style, version control systems, as well as web development or systems code (unless they have specific previous work in these areas)

- sometimes looking down on gruntwork/rote as beneath them (like the view that pure mathematicians have towards applied math or statistics)

I think that list of positive attributes often outweighs the potential negatives, especially during an internship or in the first year or two of someone's career. After that, I would expect many non-competitive programmers to have picked up on some of those advantages (code correctness, learning new concepts).

I've tried to steer my own interviews away from algorithms (especially DP) and focus more on giving problems that are relatively straightforward, while still being complicated enough that someone has to write precise code and identify/fix a few edge cases.


Adding to the good points here, I think coding competition superstars are also potentially more likely to have a diva complex, while others are more likely to have an impostor complex when they get into a company like Google. The right amount of impostor complex (where it enables you but does not cripple you) is actually very helpful, as it helps you learn and better yourself.


Question?

Does being good at programming competitions make you good at interviewing for programming jobs?

Obviously there's some irony in that question. But I also think there's some truth to it as well.


As a former ICPC world finalist who later worked at Google, I would say that practicing for programming contests is almost like cheating as far as Google interviews are concerned. The Google interview format is almost identical to the ICPC practice sessions I did in college.

I've seen this video before, and IMO the extreme similarity between programming contest questions and Google interview questions could explain the negative correlation. Specifically, borderline engineers with programming contest experience are more likely to get hired than borderline engineers without programming contest experience, and therefore the set of Google engineers with programming contest experience includes more borderline people than the set of Google engineers without programming contest experience. Thus the slight negative correlation.


When I was in high school, being good at programming contests meant picking up a bunch of habits that you'd never use at work -- memorizing the same twenty #include lines that include every conceivable STL data struct you'd need; using single letter variable names; no comments whatsoever. If you and the next person come up with the solution at the same time, you might lose simply because the other person could type faster. You have to un-learn these habits for industry.

Just about anyone can get exposure to a set of representative coding questions (see: the USACO training robot, or Cracking the Coding Interview), but training for these contests means spending XX hours a year under a time limit trying to write code from memory (because you don't have time to look things up in the manual).


I expect this to be the true root cause of the negative correlation as well. Technical questions are good at rooting out those who are not good at data structures and algorithms, and good at promoting those who are. However, you can be excellent at solving ICPC questions and not know anything about documentation, object-oriented design, OS, databases, anything web related, etc. It's a very specific skill, and practicing programming contests optimizes for it.


This is the best explanation of the phenomenon I've read. I think you're exactly right.


In my opinion, based on my and my friends' experience, being good at programming competitions clearly makes you very good at interviewing.

I don't think that being good at contests makes you bad at the job, though. Consider the following scenario: assume that the skill of being good at contests is completely orthogonal to being good at the job (so the real correlation coefficient is 0, instead of the negative one observed by Norvig). Then, since it's easier to get hired if you're good at contests, the observed correlation coefficient inside a single company will be negative, due to selection bias (the sketch at the end of this comment makes this concrete). Depending on the effect sizes, the selection bias might even make the observed correlation coefficient negative even if it's positive in reality (which I think it is).

For a more intuitive explanation, consider a used-car buyer who has a preference for black cars (this corresponds to the algorithm-based interview process of companies like Google or Facebook). Even though the color of a car should have no correlation with its quality, the black cars owned by that buyer will likely be of worse quality than the non-black ones.
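
A minimal sketch of that scenario (my own illustration with made-up numbers, using numpy; nothing here comes from Norvig's actual data):

    # Two skills drawn independently (true correlation = 0). The hypothetical
    # interview score rewards both, including contest skill, and only candidates
    # above a bar get "hired". Within the hired group the two skills become
    # negatively correlated, even though they are independent overall.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    contest_skill = rng.normal(size=n)
    job_skill = rng.normal(size=n)               # independent of contest skill by construction

    interview_score = contest_skill + job_skill  # assumed interview that rewards both
    hired = interview_score > 1.5                # arbitrary hiring threshold

    print("all candidates:", np.corrcoef(contest_skill, job_skill)[0, 1])
    print("hired only:    ", np.corrcoef(contest_skill[hired], job_skill[hired])[0, 1])

Typical output is roughly 0.0 for the full pool and clearly negative for the hired group, which is exactly the selection-bias effect described above.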


> Depending on the effect sizes, the selection bias might even make the observed correlation coefficient negative even if it's positive in reality (which I think it is).

This is well known in SAT scores. SAT math and verbal scores have a very strong positive correlation in reality; inside almost all colleges they are negatively correlated, because colleges impose threshold effects below and above. (That is, colleges admit students in a narrow band of SAT scores, rather than admitting students who are above a minimum threshold.) You can stylize that into the idea that total SAT score is constant for the students in any given college, which easily explains why component scores would be negatively correlated within colleges.
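
The same effect is easy to reproduce with made-up SAT-like numbers (assumed parameters, not real admissions data):

    # Math and verbal share a common "ability" factor, so they are strongly
    # positively correlated in the population. A college that admits only a
    # narrow band of total scores sees them negatively correlated internally,
    # because total score is nearly constant inside the band.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    ability = rng.normal(size=n)
    math = ability + 0.5 * rng.normal(size=n)
    verbal = ability + 0.5 * rng.normal(size=n)

    total = math + verbal
    admitted = (total > 1.0) & (total < 1.5)     # a narrow band of total scores

    print("population:     ", np.corrcoef(math, verbal)[0, 1])
    print("within the band:", np.corrcoef(math[admitted], verbal[admitted])[0, 1])

The population correlation comes out strongly positive, while the within-band correlation is negative.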


Also important to remember that good at programming does not necessarily equal good at job.

Excelling as a communicator, being an empathetic person, and having great interpersonal skills are just as important as (perhaps more important than) how well you can code.


Winning the dunk competition does not make your team an NBA champion...


But winning the 3pt shooting contest is a pretty good proxy for shooting ability.


And I'm so glad that the Golden State Warriors didn't just hire Kevin Durant for his 3pt shooting abilities. He'd lose to a lot of people who are better 3pt shooters.


True, but they do already have the prior two years' winners in Curry and Klay.

In fact, 5 of the past 7 years' 3pt shooting contest winners were present in the NBA finals this year.

James Jones, Kevin Love, Kyrie Irving, Marco Belinelli, Steph Curry, Klay Thompson, Eric Gordon

That's probably not a coincidence.


So what happened to teamwork? We see dominant teams last 4-5 years and then fizzle. Wooden had 10 championships with 7 in a row. Celtics had 8 in a row of 10 championships.


Exactly! I'd hire a Durant (a high-performing team player) over a sharp-shooting 3-pointer genius (fast algo coder).


Another sports metaphor: Being a good sprinter doesn't make you a good marathoner.


http://ruberik.blogspot.com/2017/07/no-programming-competiti...

It looks like it wasn't "being good at programming competitions" that was negatively correlated with job performance.

It was "participated in programming competitions".

And there are some more "how to interpret machine learning models" caveats in that blog post.

It seems to me the biggest factor in explaining this is that the people who are just below the hiring line but participate in competitions get a bump over the line. Since there are more people just below the line than above it, the "participates" group is bottom-heavy, producing the correlation.

I do a lot of interviews, and it seems to me that lots of people with experience perform below how they "should", because they're not practiced at solving problems from scratch, they work all day on modifying larger systems. Programming competitions would fix that for them, as would most open-source hobby projects.


I don't find this surprising. I played around on Top Coder enough to reach the first division, but the problems presented were in no way related to my day-to-day work. While the people who did very well (I never got anywhere once I hit the top grade) might have had an excellent command of very specific data structures and techniques, none of the code would have been very useful in the real world.

If I was interviewing someone, a career as a competitive programmer would not be a detriment but it would not count for very much overall. We are looking for creative thinkers that can work as a team.


The market is pretty good at sorting this out and I'm sure programming competition winners are fairly valued. I've seen more Berkeley, Stanford, MIT, CMU and IIT degrees rise to the top than other schools. I haven't seen any top coders. Maybe they're there and I haven't noticed. But I haven't seen them on resumes that I've culled through.

It's probably something you might list on a first job resume but further on down the road, you'd just list your work and your education.


All job interviews rely on proxies for whether someone will be good on the job or not, because the only way to know is to actually hire them.

All proxies are inaccurate.


Also, everyone is convinced that their proxy is the best achievable, although of course they test it only against false positives, not false negatives.


> everyone is convinced that their proxy is the best achievable

In my experience, people merely hoped their process was good enough. And it usually was. The "we only hire the best" is marketing propaganda and they know it.


Yeah. The best proxy is just to hire somebody as a temp contractor for a month or so if both parties can afford it.


My viewpoint is taken from the context as someone who is a reasonably seasoned developer returning to the job market. I never did programming competitions in university, though I surely was someone who fit the profile (computer science guy with a discrete math bent). As part of my recent prep for a job search, I joined one of the programming competition websites and did a couple of contests.

I posit that one of the dangers of spending a lot of time doing programming competitions and becoming very proficient at them is that, perhaps, you can come to believe that "true" programming, some sort of Platonic ideal of programming, is about coming up with the clever insight that solves an algorithmic puzzle.

But, in fact, a fair bit of _commercial_ programming is down and dirty, with databases, and user interfaces, and a lot of the time is really just shuffling data from one place to another, maybe filtering it or combining it with another set of data.

And that's just at the beginning of your career. Later on in your career, success means being able to work at larger scales in a team. That means organizing the code in a way that supports the efficient development of the codebase by individuals like yourself, by your team, by the development group as a whole... And at the architect level, you perhaps are looking at designing the system to support the efficient operation of the entire organization.

So I can easily believe that success at a programming competition does not correlate with long-term success as a software engineer in commercial software development. The two are really very different.

(Btw, I actually found the competitions that I did to be fun, but mentally exhausting. I'd say go ahead and do them, especially if you have an inclination for those types of problems. Just be prepared to use a different mindset for commercial software development.)


I see a lot of comments speculating that there may be something wrong with programming competition winners, but this result might be a statistical curiosity unique to Google. It's possible that for most companies, being good at programming competitions could still be positively correlated with job performance. For example, what if most people whom Google hires could have won some programming competitions if they had wanted to (a situation that most other companies are probably not in)? In this case, 'won programming competitions' could be data that is a lousy filter for Google, yet a good filter for other companies.


Good point, in essence it sounds like one theory people have is that competition winners have over-specialized, but your theory is that there could be floor effects in the range of ability at Google (i.e. the floor is high, many people at Google could win competitions).


Completely agree with Peter. I'm shit at those competitions (maybe not the worst ever) but I think I'm pretty good at my job. But I think there are also some who are good at both.


In general, I think there are a lot of things like whiteboard interviews and coding competitions where employers would prefer "good at X means good on the job", but are reasonably happy to settle for "bad at X means bad on the job."


When I am interviewing:

Candidate having competed in programming competitions - big green flag.

But, if success in competitions is, in their view, their biggest asset - amber flag.

I would extend this to most competitive endeavours.



Well, if you're not good at programming contests, does that entail being a contemplative/slow programmer?

I'd rather pick an algorithmist and teach him/her to reflect than a so-called "reflectionist"/"slow coder" and teach him/her how to solve algorithmic problems.

I'd be interested in knowing the guy's (catonmat) own reflections and experiences too.


Depends also on what job I guess?


Overfitting?


The title of the post understates the claim in the link, which is that, "being good at programming competitions correlates negatively with being good on the job".


I tweeted this link this morning (apparently the first time it's been posted to Twitter, so perhaps how it ended up here on HN).

I did so in response to the CTO of Kaggle tweeting "Super confused why we still use resumes. Get 100x the signal from domain profiles (GitHub, StackOverflow, Kaggle, etc.) & real work samples", which ... where to start [https://twitter.com/benhamner/status/883137638084956160].


No, the statement is "being good at programming competitions correlates negatively with being good on the job, conditional on passing Google interviews", which is a completely different statement.

"being good at programming competitions" hugely positively correlated with "begin good on the programming job" unconditionally


Do you have... any kind of evidence to support that claim?


The overwhelming majority of people who can't code would fail horribly both at jobs and competitions, this effect will drown out everything else no matter how the distribution looks for the few percent who can code.


There are two types of business software. The first is business-to-business: software made to run internally at a company, which the business runs on; generally this kind of software is built with enterprise frameworks. The second is business-to-consumer, where anyone can create an account and use the application; some software-as-a-service products are like that, and there are other types of business-to-consumer web applications too.

When you allow anyone to make an account and use your application, sometimes you can't rely on enterprise technology, and it's better to write everything from scratch or to customize open-source solutions.

What algorithm contests are testing: 1) that the algorithm is correct across various test sets; 2) the performance of the algorithm, i.e. running time in milliseconds, so you need to know how to reason about an algorithm's time complexity in big-O terms; 3) memory consumed, so you need to know how much memory you are allocating for variables (for example, a variable of type byte consumes 8 bits).
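
To make the running-time point concrete, here is a small sketch (a hypothetical pairs-counting problem of my own, not taken from any particular contest):

    # Count pairs (i < j) with xs[i] + xs[j] == target. The double loop is O(n^2);
    # the hash-map version is O(n). On contest-sized inputs only the second one
    # fits comfortably inside a typical time limit.
    from collections import Counter
    import random, time

    def count_pairs_quadratic(xs, target):
        return sum(1 for i in range(len(xs)) for j in range(i + 1, len(xs))
                   if xs[i] + xs[j] == target)

    def count_pairs_linear(xs, target):
        seen = Counter()
        total = 0
        for x in xs:
            total += seen[target - x]   # pairs formed with earlier elements
            seen[x] += 1
        return total

    xs = [random.randint(0, 1000) for _ in range(4_000)]
    for f in (count_pairs_linear, count_pairs_quadratic):
        t0 = time.perf_counter()
        f(xs, 1000)
        print(f.__name__, round(time.perf_counter() - t0, 3), "s")

Both functions return the same answer; contest judges score exactly this gap, since only the linear one scales to the larger hidden test sets.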

I think programmers who have won prizes in algorithm contests are better suited to business-to-consumer applications, because that's where they will find a challenge; this kind of programmer would be bored digging into enterprise frameworks.



