Google does the same kind of software interviews as anywhere else. They aren't special. You code on a whiteboard and it's supposed to be compilable in C or Java. The process beyond that is just as subjective as anywhere else and is largely based on gut. They depend largely on employee references/friends.
The data they collect is really only to make the process more efficient, not more effective. They reward employees that process the most phone screens in a month. The strangest thing, in my opinion, is that the interviewers usually don't make a decision at all, they just give a rating. Then, a group of people who've never even met the candidate decide whether to hire them based on forms that were filled out.
| Google does the same kind of software interviews as anywhere else. [...]
| You code on a whiteboard and it's supposed to be compilable in C or Java.
It may seem like a strange idea, but not all places interview like this. Neither the company I currently work for nor the one I'll be starting at soon interviews this way.
The reasoning is that my job isn't to stand in front of a whiteboard and write syntactically correct code without the aid of an editor or compiler, so maybe there's a better way to screen candidates, one that directly tests the skills they'll actually use on the job.
There's a difference between syntactically-correct code (meaning every single paren/brace is balanced, no dropped semicolons, should compile exactly as written, etc.) and what I call "valid" code. Expecting a candidate to write the former without an editor and compiler is silly.
A good interviewer (and good hiring committee) wants to see that you can write "correct-ish" code: it looks like it would compile modulo a typo or niggling detail, but the algorithm, data structures, and control flow are clear and valid. A good interviewer will also tell you that up front, e.g. "I'm interested in your code, not your syntax". If they don't, ASK!
The problem is: not everyone is a good interviewer, and it's surprisingly hard to teach someone to BE a good interviewer.
I think I agree with all of that. Especially the "interviewing is hard" part. But what's wrong with having someone sit in front of a computer to write some code?
At the company I'm leaving, we ask candidates to do a small project as a first pass. If the code isn't awful, we call them in to pair program with us (we're a pairing shop) on the code they wrote to improve it.
The company I'm joining just has you come in and pair (they're also a pairing shop) on projects their team is actually working on for most of a day.
I find both to be pretty good approaches. Better, at least, than having someone write code in an unfamiliar setting (the whiteboard) to solve problems which are often, but not always, contrived (write quicksort, etc.).
We've had pretty good success with identifying good candidates since we switched over to this model and the place where I'm starting is a consultancy which is pretty well-regarded in the startup community, including HN, so they've probably had reasonably good success identifying talent.
How big are these companies, how many people do they hire per week, and how many do they interview? Any number of perfectly reasonable interviewing strategies in smaller companies fall apart at big-company scale (1000+ resume submissions per DAY).
Fair point. The companies are small and medium-sized. I'm not convinced that those models can't scale, though. One large-ish company that I know of that still does something similar is ThoughtWorks. They have on the order of thousands of employees, and their interview process consists of, among other things, a take-home code submission and on-site pair programming.
If you're getting 1000 resumes per day, I think you would be able to weed out a significant number of them just by giving them a coding assignment to work on. Churning out a resume is easy, but sitting down for a few hours and writing well-designed code takes effort.
They probably can scale up to a point. Thoughtworks has 2100 employees. Google has 45,000.
How would you grade/score ~1000 code submissions per day, though? You could conceivably do Coursera/Topcoder-type automated grading, but that only gets you so far and can't distinguish good code vs. bad code vs. "copied from Glassdoor" code. It might be useful as an initial filter, but you'd have to constantly implement new questions with associated grading scripts, as the problems would inevitably leak.
This isn't really true, you can use whatever language you want for the programming sections. If you say you only know PHP, they'll find you four interviewers that know PHP.
Interview feedback is not really "filling out a form", either. I take about 6 hours to write feedback for a 45 minute interview. It's very detailed and really covers a chronology of what happened during the interview. Interviewers also submit a hire/no hire score for the candidate, so it's not only the hiring committee that makes the hiring decision. If an interviewer is willing to say "this is the best person I've ever interviewed in my life", that's a big deal.