We looked at how a thousand college students performed in technical interviews (interviewing.io)
178 points by leeny on Feb 13, 2018 | 96 comments



I'm leaning towards blaming this on the interviewing process.

Last year I spent some time interviewing job candidates, and what I learned from this is that the recruitment process in IT, in general, is broken.

A typical candidate is first faced with some kind of task to weed out the lower 20% of applicants. After that comes the proper interview, during which the candidate is asked various questions, like what their major was or what the acronym "SOLID" stands for.

In my experience, none of this correlates highly with future performance, and this may be the reason why, regardless of the school they attended, those students performed roughly the same.

Time and time again, the one thing that separates the best from the rest is their ability to perform code reviews. I have yet to find somebody who's a poor developer but a great reviewer.


I believe it.

Real programmers read far, far more code than they write.

Reading code, and writing readable code, are perhaps the two most valuable skills in the profession.


I would extrapolate this out one layer higher and say that communication is the most valuable skill. This expresses itself both verbally and in written form.


That's a good way of looking at it. Code is also a form of communication. By writing code, you're communicating with the computer, with other developers who will come and read it, and with your future self who may not remember what you were doing.


I'd assert that reading code is far more valuable, especially when dealing with others' unreadable code.


Interesting. Do you have candidates review code?


I have done this.

Print out a page of code and review it with a candidate. Have them explain what it does, or how they might go about figuring that out, and what kinds of improvements they might make. Include some glaring bugs or code style problems if you want to ask those sorts of questions.
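
For instance, a hypothetical hand-out in Python might look like this (the function names are made up for illustration, and the bugs are planted deliberately):

    # Review exercise: what would you flag here?
    def average_order_value(orders):
        total = 0
        for order in orders:
            total += order["amount"]
        return total / len(orders)  # planted bug: ZeroDivisionError when orders is empty

    def apply_discounts(prices, cutoff=100, discounted=[]):  # planted bug: mutable default argument
        # the shared default list silently grows across calls
        for price in prices:
            if price > cutoff:
                discounted.append(price * 0.9)
        return discounted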

I feel like small fragments of code work well for refactoring questions; whole programs tend to have a bunch of boilerplate code. But going over a bigger program to find the real core features quickly is also a valuable skill.

I've done this with SQL too: printed out a real statement from an application and asked about performance, optimization, what the application might be doing, etc.

Or load a 100,000-line application in their favorite editor and ask "okay, try to find the frobzing subsystem, I want to know how it frobz bazzes". (I haven't done this yet...)


We have a sample repository with a simple, but broken, web application. We ask our candidates to review the code as they would if a coworker had submitted it as a merge request.

It gives quite a bit of insight into which aspects they latch onto as issues of note. Do they catch the insidious, subtle issues or just the low-hanging fruit? Do they understand what stdlib function arguments expect as input? Do they just pass your review through a linter and call it a day?


I have, and it has, so far, never misled.


> What this means is that top-tier students are achieving the same results as those in no-name schools

This is a misleading conclusion that ignores a HUGE selection bias. I doubt top MIT CS students, for example, would feel the need to practice coding interviews on interviewing.io.


We have a bunch of MIT students. The sad truth is that everyone is scared of technical interviews.


I went to MIT and knew a good number of people who were confident in technical interviews. The problems in interviews were much easier than what I got in class, or even CTY CS classes in middle school.

While there are definitely good people at non-elite universities, selection bias is at play here too.

For example, in the senior year case, you're just looking at people who were concerned enough about interviews to go to interviewing.io during their senior year. Most people I knew at MIT had a full time offer they were happy with from their Junior year internship by their senior year.

You can actually see the impact of this in your graphs too: notice that the % of people with a 1 or a 2 in elite schools increases for senior year vs. junior year. People who have been doing well in technical interviews and have good return offers won't spend as much time preparing.


>CTY CS classes in middle school.

The hell? What sort of problems were you getting?

Were you doing edit distance and word break and shortest palindrome in middle school?


It was a long time ago, so I don't remember them all, but some examples I do remember:

* Print Pascal's triangle up to the n-th level, with proper spacing (so that it looks like a triangle).

* Make the game Go for 2 players. I got a sub-problem of this during my Google interview (determine whether a piece is captured). Rough sketches of both are below.
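
Roughly what those two look like as minimal Python sketches (reconstructed from memory, so treat them as hypothetical):

    def print_pascal(n):
        # Print the first n rows of Pascal's triangle, centered so it
        # actually looks like a triangle.
        rows = [[1]]
        for _ in range(n - 1):
            prev = rows[-1]
            rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
        width = len(" ".join(map(str, rows[-1])))
        for row in rows:
            print(" ".join(map(str, row)).center(width))

    def is_captured(board, r, c):
        # Go sub-problem: is the group containing the stone at (r, c)
        # out of liberties? board[y][x] is 'B', 'W', or '.' for empty.
        color, seen, stack = board[r][c], set(), [(r, c)]
        while stack:
            y, x = stack.pop()
            if (y, x) in seen:
                continue
            seen.add((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < len(board) and 0 <= nx < len(board[0]):
                    if board[ny][nx] == '.':
                        return False  # found a liberty: not captured
                    if board[ny][nx] == color:
                        stack.append((ny, nx))
        return True  # the whole group has no liberties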


When interviewing for tech jobs I had:

- build an asynchronous communication queue from scratch (which implies building semaphores)

- optimize the drawing of a sphere on a screen

The rest was more regular. (I didn't apply to super-elite jobs, and I failed at both of the companies with these two hard tests, so it's no indication about them or about me.)
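
For reference, a rough Python sketch of the classic construction (using stdlib semaphores for brevity, rather than building them from scratch as the interview required):

    import threading
    from collections import deque

    class BoundedQueue:
        # Blocking producer/consumer queue built from two counting
        # semaphores plus a mutex.
        def __init__(self, capacity):
            self.items = deque()
            self.mutex = threading.Lock()
            self.free_slots = threading.Semaphore(capacity)  # counts empty slots
            self.filled_slots = threading.Semaphore(0)       # counts queued items

        def put(self, item):
            self.free_slots.acquire()        # blocks while the queue is full
            with self.mutex:
                self.items.append(item)
            self.filled_slots.release()

        def get(self):
            self.filled_slots.acquire()      # blocks while the queue is empty
            with self.mutex:
                item = self.items.popleft()
            self.free_slots.release()
            return item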


> - build an asynchronous communication queue from scratch (which implies building semaphores)

How long were these interviews? Seems like something I could do in a few days not a 2-hour interview!


It lasted two hours. The way it worked was that I was working on my own, but I had to constantly explain my line of reasoning to a "peer", so I was not alone in front of a blank sheet of paper :-) This was an extremely interesting way to interview. I felt I had the opportunity to demonstrate my skills on a difficult problem (which I didn't solve). I didn't remember the details of Dijkstra's algorithm, but I had the opportunity to show that I can approach a complex problem in a logical and efficient way (well, that day, that kind of problem; don't ask me to work on helicoidal tomography :-))


That says (nearly) nothing about the representativeness/randomness of the sample you got.

And, please, when showing graphs for comparison purposes: make the Y axis scale identical, or the results are quite visually misleading.
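
To be fair, it's usually a one-flag fix; e.g. a matplotlib sketch with made-up numbers:

    import matplotlib.pyplot as plt

    scores = [1, 2, 3, 4]
    # hypothetical % of interviewees per score, one list per school tier
    tiers = {
        "Elite": [8, 17, 45, 30],
        "Top 15": [10, 20, 44, 26],
        "Top 50": [11, 22, 42, 25],
        "The rest": [12, 21, 43, 24],
    }

    # sharey=True forces an identical Y axis onto every panel, so bars
    # of equal height really do represent equal percentages
    fig, axes = plt.subplots(1, len(tiers), sharey=True, figsize=(12, 3))
    for ax, (tier, pct) in zip(axes, tiers.items()):
        ax.bar(scores, pct)
        ax.set_title(tier)
        ax.set_xlabel("interview score")
    axes[0].set_ylabel("% of interviewees")
    plt.show()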


I would not have noticed that one chart in each set of 4 has a different scale, so things that are the same height aren't actually the same number. It certainly looks nice and tidy but it is indeed misleading.


>I doubt top MIT CS students, for example, would feel the need to practice coding interviews on interviewing.io

If anything, those students would be the ones working the hardest to study for their tech interviews.


But not necessarily on interviewing.io.

I am not from MIT, but from a top university in China. I was accepted into Google last year, and I had never even heard of interviewing.io before.


So? Ignoring that this is a single datapoint, it could be that interviewing.io is relatively unheard of across the board (which would say nothing about the relative frequency of top-tier vs. not-top-tier students using it), or less heard of outside the US.


You're right. And the logical conclusion is that we can conclude nearly nothing from their data.


You bring up an interesting point about selection bias. Did you consider the other angle, that top-tier students at no-name schools overrepresent the data from those schools? As in, low-tier students, regardless of school, are unlikely to practice.


> I doubt top MIT CS students, for example, would feel the need to practice coding interviews on interviewing.io

What brings you to this conclusion?


I also want an answer to this. Being a top CS student at a top CS school does not mean that you are naturally good at technical interviews.

I'm not saying technical interviews are good or bad. I just think it's a different skill from what a student generally learns.


Indeed. Being an MIT dad, I know a few course 6 undergrads. Bright kids get social anxiety, too. And can brain-freeze at inappropriate times. And may be better in a quiet, thoughtful problem-solving mode than in the kind of TV quiz show that modern interviewing has become.

I just don't get the modern interview process. Why are we looking for lint in human form? I'd rather see someone sketch some pseudo-code for an interesting problem with several acceptable solutions.

I think I learn more by asking a fuzzy, underspecified problem question and seeing what clarifying questions I get back. I want to know if the person can wring a spec out of a customer - coding it up with all the semicolons in the right place is the easy part.


In another comment on this discussion:

>> Most people I knew at MIT had a full time offer they were happy with from their Junior year internship by their senior year.

This leads me to think that top-university students are already "chosen/watched" by companies before the recruitment phase, so both companies and students are much more confident they can work together.

So basically, top companies recruit people the best way: by looking at students while they are still students. Which is fine, except the top universities are, I guess, also the most expensive ones. But if that is true, then that's another bias: companies have rich employees, so they hire people with the same set of personal values...


> Being a top CS student at a top CS school does not mean that you are naturally good at technical interviews.

Not necessarily, of course, but it'd be pretty surprising if it didn't trend in that direction.


Thanks for making this point.


Admittedly, this is a bit of an exaggerated conclusion. It is definitely possible that some subset of top CS MIT students would get anxious about coding interviews and would go to interviewing.io to hone their skills.

However, the following types of top students are significantly less likely to practice on interviewing.io, for example:

- Seniors who have already done internships at top companies (Google, FB, Stripe, etc)

- People who excelled at programming contests and generally enjoy algorithmic challenges: IOI, TopCoder, Project Euler, etc.

- People who have built awesome open-source projects that got significant recognition. For example, Feross Aboukhadijeh was a Stanford student who built Youtube Instant and immediately received an internship offer from Youtube founder Chad Hurley [1]. I doubt people like him would need to practice on interviewing.io

- In general, students who are regarded as top n% of MIT/Stanford/CMU CS are inundated with recruiting pitches (I speak from experience as a CS graduate from one of these universities. Most of us constantly received an overwhelming number of e-mails from recruiters, many of them offering to get us fast-tracked in the interview process).

Granted, not every top CS student at MIT/Stanford/CMU has done competitive programming or built really cool stuff that's widely recognized. But a significant percentage of the top n% of students do have this kind of background (admittedly, this is somewhat of a circular definition of "top" student). That's what I mean by selection bias.

[1] https://feross.org/youtube-instant-media-frenzy/


> What this means is that top-tier students are achieving the same results as those in no-name schools.

When NYU and Arizona State are "Top 50" while Michigan State and Vanderbilt are classified as "no-name schools", I question the meaningfulness of this blog's baseline and what it means for an interview to be technical.


I think the bucketing of schools by overall rank is essentially arbitrary. Why not rank by strength of CS program? If we use PayScale's "Computer Science Majors by Salary Potential"[1] as an indicator for interview success, it looks very much like the CS rankings.

[1] https://www.payscale.com/college-salary-report/best-schools-...


That's not very far off from general prestige. Harvard is too low, but only UCSD, UW, and maybe CMU wouldn't be on the list if it were that instead. And flipping it around, missing from that list are the University of Chicago, Johns Hopkins, and maybe Northwestern. (Also Caltech, but it's tiny.)

For PhD programs there are some real surprises, but for undergraduates there's not much in the way of major-specific strength differences. That's largely because the strength of an undergraduate program is mostly a function of the strength of the matriculants, not the faculty.


> Why not rank by strength of CS program? If we use PayScale's...

CS alone is far from representative of the tech industry in the aggregate.

Furthermore, PayScale's list clearly doesn't control for locality bias--e.g. consider Purdue at #191, UW Madison at #138, UM Ann Arbor at #88, and UT Austin at #79, to name a few; or what Duke at #10 while UNC Chapel Hill sits at #67 even means. How do top programs which feed industry around tech hubs like Austin and Research Triangle meaningfully compare to the Bay Area and NYC if unadjusted gross salary serves as the sole basis of value?


Also, those numbers are absurdly low in my experience. It claims that Purdue's starting salary is $64,000, but I don't know a single person who's graduated and started at less than $75k, and I know plenty more who have started well into six figures.


Wait - are you talking about basketball?


After running over a hundred interviews within the last few years, this doesn't surprise me at all. Interviewing is a skill separate from school, and if you train for it, you can become good at it, regardless of your pedigree.

I actually think studying for the SATs is a better comparison - it's a test that doesn't exactly translate to real-world performance, but has huge bearing on prospects, and often the best advice for acing both (tech interviews and the SATs) is to simply do as many practice tests/problems as possible in the areas you're weakest in.


I was always under the impression that SAT scores correlate strongly with IQ, not with practice. I had almost a perfect score[0] on my SATs, and I only used about half the time available, and I did zero with a capital Z practice problems beforehand -- many other kids took practice tests every weekend and still performed abominably.

[0]: I only got two questions wrong on the entire test, both math problems. Since I was practically unstoppable at math back then, I always suspected that there were errors in grading.


For the record, while I agree that evidence suggests that the SAT correlates strongly with IQ, your comment is a great case study in how SAT scores correlate poorly with tact.


Tactful people are boring. :)


A large portion of the SAT is knowledge of English vocabulary, which obviously has nothing to do with IQ.

The mathematics questions nearly all use the same basic rules and are presented in the same format, and can easily be learned and trained.

SAT prep courses are ubiquitous and expensive because they work. Do people without training do well? Sure. Do people with training still do poorly? Sure. But a good SAT question would be: "Does this mean that training doesn't help, or that there isn't a correlation between training and success?" The obvious answer is no.


Vocabulary is a component of IQ.

https://en.wikipedia.org/wiki/Intelligence_quotient

>The many different kinds of IQ tests include a wide variety of item content. Some test items are visual, while many are verbal. Test items vary from being based on abstract-reasoning problems to concentrating on arithmetic, vocabulary, or general knowledge.


That's odd; when I learned about it a long time ago, I was told it was intended to measure "inherent" intelligence, not the level of exposure to or memorization of a massive set of (likely trivial) data points.

Vocabulary very clearly belongs on an achievement test, not an aptitude test (as I imagined IQ to be).


The prep is for turning a 1250 into a 1400 or so, which basically determines whether you get into Stanford or not.


For the record, I practiced math the most, writing second and literature last and scored exactly in that order. I also tracked my progress before and after practice and I got dramatically better. I really don’t think the SAT measures anything useful other than how well someone can solve SAT problems.


SAT questions, especially the math questions, follow an extremely predictable formula that is told to the world in advance.

Can you practice to get good at quickly solving basic algebra problems? Yes. Which means that you can practice to get better at the SAT.


I don't see how it would be possible that SAT scores would not correlate with practice. The fact alone that SAT correlates with IQ is proof, because IQ test scores themselves can be improved by practicing.


I find training for interviews to be funny. Passing an interview is not correlated with doing well in a job. The purpose of an interview is to determine whether you will improve the team and whatever the team is doing, not to prove you're a whiteboard genius. If you design the interview to select for whiteboard geniuses, that's all you will hire.


People have forgotten how to train their core social and psychological skills, like resilience and confidence. Unfortunately, these are not things we are born with or can code our way out of. These are things we have to train very hard at, similar to how martial artists or athletes approach their training. The thing most programmers, and knowledge workers in general, are missing is true belief in their own skills (you have been doing this how long? come on, snap out of it) and the resilience that comes from rejection after rejection and failure after failure. After a certain point in your experience, you should come to understand that raw skill is not really the end-all, be-all. If you are not the guy, then you are not the guy, no matter how many degrees you have. Sometimes the guy with the PhD from MIT is not the right guy because he doesn't connect with the rest of the team, or the vision, or whatever; simple as that.

Also, if you're testing only for specific skill sets rather than aptitude and interest, then you've already failed as a hiring manager.


Amen. Also "hire for strengths, not to avoid weaknesses".


> Indeed, statistical significance testing revealed no difference between students of any tier when it came to interview performance.

Ah, the classic confusion between not finding enough evidence to prove there is a difference, and finding enough evidence to prove there is no difference.

The juniors from elite schools in particular have fewer 1s and 2s and more 3s and 4s than the other juniors. Really, you found statistical evidence that they aren't from different distributions? I'd love to see it.


I think this is a bit specious, at least from anecdotal evidence. I know bunches of people at my school (Top 50) who have interviewed at companies that ask leetcode-style questions, yet very few get offers or even make the 2nd round or onsite. If you look at LinkedIn, you'll see the vast majority work at Cisco, IBM, SAS, Amazon, etc., and not Google, Jane Street, or Facebook.

The quality of the cohort and, to a lesser extent, the quality of their DS&A classes matters quite a bit.


Is the Amazon interview process no longer considered a top-tier difficult interview?


As someone who has gone through it: no. It is possible to get intern and new-grad full-time offers by passing one online debugging test and one coding test (in rare cases, a single one!).

Also, the comp is a league below the others I mentioned.


*With that said, Amazon does work on a lot of really revolutionary stuff that I definitely wouldn't be able to work on at 90% of other companies. Really enjoyed my internship there.


Maybe what they've proven is that technical interviews don't add much information to the hiring process. This pretty much matches my experience that interview performance is at best only very loosely correlated with actual ability as a software engineer. Much more relevant is direct experience working with someone. School quality is a strong signal.


In my experience, school grades at good schools are filters for non-bozos. Beyond that, nothing.

The best programmer I ever met was a musician with no college.


I have had the opposite experience.


Charts don't have the same scale. As a result, it's hard to visually compare them - for example, for juniors, the shapes are similar for score 4, but top-tier school students are above the 20 mark, while the rest are below.

Interviewing is expensive, especially if you hire the wrong person, and if a given population tends to cluster around higher scores, that's what you'd pick first.


Yeah, why didn't she normalize results and combine charts for easy comparison?


I started wondering why CS interviews aren't more based around reputation and experience, like other jobs.

I've seen a lot of bad devs get hired into places with supposedly high bars, or devs being let go and then ending up at Google next. While some places do let go of good devs, I'm talking about cases where I believe it was justified.


We all know that interviews are terrible. Wouldn't that be the problem? Then students from top-tier universities achieving the same interview scores as students from lower-tier universities is meaningless. The interviews fail to capture what matters.

The best jobs I had and also jobs I performed the best I got through credentials and experience, not scribbling on a whiteboard.

If you went to a top 10 school in the world, you worked hard (at least it used to be this way). I don't need to look at the specifics of what you did, but I know you needed grit, a good work ethic, personal time sacrifice, etc. to get through. Unless you're super smart. In either case, it adds to your personal brand.

I think that's what perpetuates the system. Given unlimited time, I'd be happy to give everyone a shot. But time is precious, so stick to what you know? I know how things are at the uni I went to. If a CV landed on my desk from someone who did the same course, I'd put them at the top of my list.


Replacing tech interviews (even as quirky as they are) with credentials and school ratings is even worse, as it weeds out the vast majority of potentially promising candidates who had no opportunity to attend top universities. Of course, graduating from a top university shows that a person is capable, but it doesn't automatically mean that they're better than everyone else who graduated from lower-ranking schools.


This sounds reasonable enough, but on the other side, so is the observation that the people who get into and through top 10 schools are often actually ones who did not work hard.

The point of using an interviewing.io online technical interview as the first step of the process is that it makes your "I'd be happy to give everyone a shot" take far less time than the alternatives. You still can't get unlimited time or perfect fairness, but it's better.


I went to a "top 10 school." I'd say that only maybe 20% of the students (in CS) were people that I would want to hire if I were running a company. A lot of top school students get accepted because a) mom and dad are rich, sent kid to feeder high school, pushed kid to do ECs/do tutoring/get help with essays b) they fit into some admissions "bucket" like {has inspirational story} or {huge volunteering effort} or {also amazing artist}.

The worst students were always the ones who went to feeder schools and had wealthy parents, which are incidentally the majority of students at top schools.


I wonder if your experience taints your opinion here. I don't know, because I barely went to a state college for a minute before I got the job I wanted and dropped out. It's possible that if you'd spent the same time at an average university first, you might find 50 or 80% of the "top 10" students to be worth hiring.


Exactly. Those are great examples, and you could add students who are not interested in being engineers. Of course, technical interviews, and really any other recruiting process, are not perfect, but they get one closer.


This tells us nothing about the talent or intelligence of the candidates because these tests are bullshit.

I wish companies like this didn't exist to enable these types of interviews.


I wish I could upvote this more; bless this comment to the top.


This really makes no sense. Even if every school had identical curricula and teachers, students who manage to get into an MIT or Stanford have to have a work ethic and prior experience that place them well above the average CS student. The fact that these students have already been vetted by a respected school is one reason why I think companies prioritize them.

I am all for big data analysis disproving common conceptions, but this feels off.


> get into an MIT or Stanford have to have work ethic and prior experience that places them well above the average CS student.

You're comparing apples to oranges. The worst at MIT is not necessarily any better than the average at some state school. I've seen more than a few people with bachelor's degrees from UCSD and UCLA be below the class average at Cal Poly for their second bachelor's or master's degree. If there's any metric you can rely on, it's that their average is usually better than the average of another school.


Just goes to confirm my own experience with grads that the difference between schools is largely cohort, not quality of students.


The graphs are super confusing. Without reading the story it's hard to figure out if the point is their sameness or if you should keep trying to figure out the deltas.

Given that sameness was the point - just make a single graph with 4 colored lines.


There is one problem with the conclusions: they assume that interview score is the same as a candidate's real-world success. It is not so; interview results correlate with future success, but there is no strict determinism. So companies prefer students who are successful in the interview AND come from a good school, rather than relying on interview results alone. It allows them to maximize the probability of finding a good employee.


I suppose a bit of methods nitpicking is in order. Wouldn't it make a lot more sense to use equivalence testing if one wants to write a statement like "Indeed, statistical significance testing revealed no difference between students of any tier when it came to interview performance." AFAIK statistical significance testing cannot reveal this at all.

Of course there is no real explanation of the method that was used besides the fact that it was some sort of "statistical significance testing". Equivalence testing makes more sense to me if one wants to essentially say that MIT is Aspirin but Ohio State is a generic drug that is similar enough to work just as well (for a reasonable definition of similar enough).
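
To make it concrete, here is a minimal Python sketch of what an equivalence test could look like here (a Welch-style TOST, i.e. two one-sided tests; the margin and the data are hypothetical, and this is my reconstruction, not whatever the article actually ran):

    import numpy as np
    from scipy import stats

    def tost_welch(a, b, margin):
        # Equivalence test: can we reject that the two means differ by
        # more than +/- margin? Uses Welch's t approximation.
        a, b = np.asarray(a, float), np.asarray(b, float)
        va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
        diff = a.mean() - b.mean()
        se = np.sqrt(va + vb)
        df = se**4 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
        p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
        p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
        return max(p_lower, p_upper)  # small => means are equivalent within the margin

    # e.g.: scores of "elite" vs. "no-name" interviewees, margin of 0.25 points
    # p = tost_welch(elite_scores, other_scores, margin=0.25)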


On the other hand, they discount self-selection bias, and the bias of "qualifying".

Plus the charts don't even show up, probably due to load.

edit: And the bias of the interviewers who use their platform, who may not be able to attract top tier talent, and are happy to get even barely competent people... It's impossible to say since the charts don't show up and we know nothing about the methodology and motivations of the different actors.


Graphs fixed. Just added a static image fallback for when Plotly runs into issues. Sorry about that, and thanks for calling it out!

Re qualification bias, we call it out explicitly in the article. Without the coding assessment before the real interview, the result may very well have been different.

Lastly, re the bar: our interviewers come from Google, Facebook, Dropbox, AWS, and so on.


There is huge selection bias. The talent that already performs well in technical interviews most likely spends less time practicing (and using interviewing.io).


I'd actually assume the opposite: that, like in most life situations, those who "wing it" will generally do worse than those who practice.


Would it be helpful to update the post with school tier distribution?


I'd be interested in seeing this.


The charts were moderately cool when one of them loaded the first time I opened the page, but they're not really superior to static images, especially when they don't load at all. And when they are elements that your whole article hinges on, well...


Fixed! Thanks for calling it out.


Oh come off it, for an ounce of credibility can they maybe keep the scale of their graphs the same? And the link to the "statistical significance testing" is nothing of the sort. I'm calling bullshit.


I failed my first 2 tech interviews, at Google and Snapchat, which left me feeling cheated by the dumbass whiteboard. Whiteboard tests are tailor-made for inside-the-book thinkers: the ones who stop learning when the chapter ends and soon forget when the next subject begins. Computers are the thing that gets you to the thing. You want people who can see that the software they are working on has a broader impact beyond 1s and 0s.

PS. 5 years later, I'm now the CTO of my company, managing 50 people. So srs, not srs, Googs, you missed out =P


Not that I think whiteboard interviews are the best approach, but I'd say you're being a bit too harsh. In particular, the type of student you mentioned, who soon forgets every topic they ever learned, would surely not do so well on a whiteboard interview. Or, if they were able to "relearn" every aspect of algorithmics in the week before the interview, then they can't be that bad.

In any case, a good(!) whiteboard interview absolutely can focus on all of that broader impact you mentioned and get the interviewee to discuss the UX/efficiency/maintainability/etc of what they're creating.


While I can see your point, I am of the mindset that Computer Science is too vast to expect someone to memorize it all. Furthermore, it's not even a memory test; it's a form of hazing, to be honest. It has little indication of how well someone will do in a role as dynamic as software engineering. I'm certain I could make you look like a fool in front of a whiteboard, and I'm sure you could do the same to me. But have we actually proven anything about who is smarter or more capable? No. I look for people who know where to find the answer even if they don't know it off the top of their heads. At least that way I know I won't spend half my day answering their questions if I hire them. The kids who spend their time in college depending on the book and the professor to hold their hands aren't the ones who succeed in the long term. It takes ingenuity and creativity for that. That's who I want on my team.


As far as I understand, the interviews they conduct are preliminary interviews, and the questions in such interviews are much easier than the questions in on-site interviews. So basically, they test for simple CS/programming tasks which any person who studies CS should be able to perform.


A serious question: is there a syllabus of interview questions? That is, something that says the most important subject areas are X, and this puzzle covers X.

(I am not sure how one decides if finding shortest substrings is important or not ... perhaps most common usage?)


I would love to see more recruiting at state schools, community colleges, etc. from tech companies. There are a lot of hard-working people who could be great to work with but didn't have the family background they needed to be noticed.


The graphs/images do not show for me. Chrome 64.0.3282.140. Win 10


Looks like the server is overloaded... images in middle of the article aren't loading for me.


Garbage in, garbage out.

What is used to divvy up a self-selecting cohort of undergrads performing in technical interviews in 2018 is the US News & World Report ranking for graduate schools, composed in 2014 "based on a survey of academics at peer institutions" (https://www.usnews.com/best-graduate-schools/top-science-sch... … and I don't think the lack of that link in the article is an accident).

Having a selective grad school doesn't mean much if anything for the standards and teaching in that school's undergrad program, especially for state schools with giant undergrad populations and relatively small grad programs.

For example, Illinois' grad school is selective and highly thought of by professors, so it's treated as an "elite" school; likewise, students of UCSD's undergrad CS program are classified as "top 15".

Regardless, my alternate hypothesis would be: "students with a specific level of confidence in their ability to pass a technical interview use interviewing.io for a limited period of time in which it provides value to them … and that population of students receives a certain distribution of scores".


One data point: I was at Georgia Tech for undergrad, and am now at Caltech for a CS PhD. Both are good schools, but Caltech is definitely considered more "elite". As a Caltech student, I definitely attract attention that I just didn't get as a Georgia Tech student.


People who bother to practice for coding interviews (using the same website) and successfully land an interview do similarly well on coding interviews... That's a lot of selection bias.



