A Humility Training Exercise for Technical Interviewers (triplebyte.com)
349 points by ammon on Feb 4, 2019 | 132 comments



For the past several years I've been deeply involved in interviewing at my company (hint: it's big and is really good at things like search and ads). I've done hundreds of technical interviews here, and I also teach a class to train employees how to interview. I'm also involved in evaluating candidates who have gone through the interview process.

In my position, I get the chance to read a ton of assessments written by interviewers. Some of the most striking assessments are the ones where the interviewer is cocksure that they completely nailed the candidate's utter and complete incompetence. It's usually an interviewer who's been asking the same question dozens of times over a year or more. They've seen every variation of performance on the question, and they've completely forgotten what it was like for them when they first encountered a question like that.

It's total lack of empathy at that point, and if the candidate doesn't exude near-perfect interviewing brilliance on that specific question, the interviewer judges them as essentially worthless. Interviewers like that sometimes even get snarky and rather unprofessional in their writeup: "Finally, time ran out, mercifully ending both my and the candidate's misery."

If I were to diagnose one of the causes of this phenomenon, I'd say it is bias. The interviewer best remembers the candidates who performed exceptionally well on their question, triggering the availability heuristic.

There are tactics that I think can be effective to bust those biases. One might be to put an upper limit on the number of times an interviewer is allowed to ask any given question. Once they've asked maybe 20 or 30 candidates the same question, it's spent. They have to move on to something substantially different.
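
A minimal sketch of what enforcing that cap could look like, assuming a simple per-interviewer usage counter (the cap value and data model here are illustrative assumptions, not part of the proposal):

    from collections import Counter

    MAX_USES = 25  # assumed cap, somewhere in the 20-30 range suggested above

    # (interviewer, question) -> number of times that interviewer has asked it
    usage = Counter()

    def question_is_spent(interviewer: str, question: str) -> bool:
        """Record one more use and report whether the question should be retired."""
        usage[(interviewer, question)] += 1
        return usage[(interviewer, question)] >= MAX_USES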

There are some other experiments I'd like to run. One of them is to have interviewers go through a one-hour interview themselves for every 50 or so interviews they give. Maybe match up an interviewer who has a track record of being especially harsh on candidates for not giving a flawless performance on the question they've been asking for a while. The idea is to see if we can't bubble up some empathy.


I wonder if one way to counter this bias is to have interviews be an exercise in teamwork rather than the lopsided relationship they are now.

What I would suggest is that when the interviewer and candidate reach the "quiz or coding exercise" part, have them pick a question from a website that serves up questions at random, and let both work together towards a solution.

This matches more closely with what they will end up doing anyway if the candidate is hired, and it removes the "I've seen 20 different ways to solve this" bias while also generating empathy for the candidate when the interviewer him/herself struggles with a fresh problem.

Of course, the interviewer can try to lead the candidate and can hold back from revealing the solution right away if s/he sees it clearly; if not, this sets the stage for a fairly realistic way to sample all kinds of qualities: technical skill, communication, empathy, teamwork, etc.

I think it would also be less stressful from the candidate's point of view: a candidate often feels an interview is unfair because s/he is being asked about something the interviewer has had a chance to review and prepare well in advance.

When interviewers ask trick/complicated questions, I sometimes wonder what would happen if they were to ask the same question to the rest of their own team. Are they expected to answer it well? Would they know? Think about the places you've worked and the questions you've given at interviews: do you think your own colleagues would have aced them?

Obviously, this doesn't completely remove the additional stress on the candidate, since the interviewer's job isn't on the line, but I think it would provide a more balanced assessment of the candidate's abilities and personality traits.

In practice, I'm not sure interviewers would be very receptive to such a method, as it could turn an interview into a stressful event for them.


I've had a very similar idea to this as well.

It all stemmed from one particular interview I had years ago. The interviewer presented me with a very simple problem, and I solved it. He then stepped up the difficulty a bit. This kept happening, and as it got harder he started acting more like a co-worker, with us bouncing ideas off each other.

Granted he knew the solution, but the mere fact that he presented himself not as an interviewer judging my performance but as a co-worker helping to solve a shared problem made that one of my favorite interviews.


I’ve had two very similar experiences as well. They were back-to-back interviews and in both the interviewers started simple, then stepped up the difficulty. During the whole process the interviewers worked with me, not against me. Of course the questions became hard and they were clearly looking for a good answer, but the whole process did not feel antagonistic at all.


That does sound pretty good - what company was it at?


It was for Amazon. I will say I had 4 or 5 other interviews at Amazon that day and most of them weren't nearly as good; one was actually atrociously bad.

They may have changed their interview process at this point though, I only interviewed for them that one time and it was quite a few years ago.


You don't even need to randomly select a question to do something like this.

In the past I was hired on the back of simply sitting down with the team lead, and us pair programming to implement what he happened to be working on at the time.

That gave a good idea of how quickly I could start contributing, how easy I was to communicate with, whether I was going to be a snob about the existing code, whether I had relevant pre-existing knowledge of the technologies being used, and whether I had any interesting insights to offer. There was also a more standard interview process accompanying this (which I doubt I excelled at), but the pair programming exercise gave a real-world insight into exactly what I had to offer and what I was like to work with.


I once saw a company that interviewed like this. They also paid the candidate for the afternoon. The thinking was that if you are working on our codebase you should be compensated.


For the record, half of the technical part of Canonical's engineering interview track is collaborative; multiple candidates join in to troubleshoot a malfunctioning system in real time. We gain a lot of insight from that into how people work together and what strengths they bring to a team. I haven't seen something like this used anywhere else.


As an interviewer I'm going to give this method a try. I will also add a step to our hiring process where we solicit opinions from the team about how well they felt the interviewee did at communicating during the exercise, as this is a major part of any real world work.


I don't think the fact that the interviewer chooses the same quiz all the time is the culprit (on the contrary, I would say, because it helps establish a benchmark and an understanding of the pitfalls of that particular question).

The problem is that getting the answer right shouldn't be the main goal of the quiz question. Rather, it is an opportunity to see how the candidate thinks, and how the person is able to work with someone else (aka the interviewer) to solve a given problem.

It could also be a way to make sure the candidate understands a few concepts of their field (complexity, memory pointers, etc.).

Mainly what matters is the process, not the solution.


Back when I was a tech lead and made hiring decisions for my team, I would run the questions I asked candidates by the rest of the team in a 1:1 setting. We were checking that people could answer the questions well -- and sometimes when they couldn't, it would show us weak points and give us ideas for training.

I always ended those 1:1 sessions by asking if the team member would be comfortable working with people who couldn't get to the correct answer -- and, if so, what they would want to see from a candidate working on the question.


I really like the idea of picking a problem together and working through it. I might try that in my next interview!


This expresses a worried thought I've had in the back of my mind for a while.

Power interviewers do a lot of interviews. They can do them in their sleep and crank them out like an assembly line. But assembly lines aren't nuanced; I fear power interviewers lose the ability (or desire?) to assess candidates through their performance instead of strictly assessing the performance.

Power interviewers concern me. I think they end up with too much influence over a company's hiring practices.

I've heard of people at big companies who conduct thousands of interviews a year. This gives them a lot of sway in company hiring practices and culture.

The problem, though, is that these large companies are hiring at scale. Growth + attrition yields a lot of hires. Google itself just announced in their earnings call nearly 20k more employees in the past year. At around five interviews per hire, and not even counting candidates who were rejected, that means 100k+ interviews conducted. It's hard to have every interview stay personal and nuanced at that scale.


> I've heard of people at big companies who conduct thousands of interviews a year

I don't disagree with the rest of your thesis, but this seems off by an order of magnitude. They would have to conduct 4 interviews every working day to reach even 1000 interviews. Counting the time needed to write feedback for every interview, that person would be a full-time interviewer who occasionally writes software/does product management/project management.

Unless you're speaking of a small group of people who conduct disproportionately more interviews than everyone else (Pareto distribution) - a dozen interviewers could easily rack up 1000 interviews between them.


> I don't disagree with the rest of your thesis, but this seems off by an order of magnitude. They would have to conduct 4 interviews every working day to reach even 1000 interviews. Counting the time needed to write feedback for every interview, that person would be a full-time interviewer who occasionally writes software/does product management/project management.

I admit it's hearsay, but yes, I've heard that some people supposedly conduct 1,000+ interviews.

To be fair, I'm sure some of those are the online "solve this coding puzzle in 30 minutes" type that can be watched later at 3x speed and don't require human interaction. I don't know the ratio; maybe the power interviewers are heavily sandbagging with those.


> I fear power interviewers lose the ability (or desire?) to assess candidates through their performance instead of strictly assess the performance.

This is an interesting point, because I feel like that's how large companies treat employee performance in general. I often hear stories of talented engineers being treated as cogs or passed over for promotions, such as the classic protobuf maintainer example.

Perhaps the assembly line fashion of interviews is reflective of how large bureaucracies treat employees as a whole.


I have a small set of interview questions I prefer to choose from, in part because I've given them quite a bit of thought.

I would be sad if I had to switch questions after 20 or 30 candidates, because I find it necessary to invest substantial effort in calibrating a new question. Before I ask a candidate a new question, I try to use it to mock interview at least 10 people I have worked closely with. Iteration is always required to tune the difficulty and complexity of a question, so the total time invested in a question can be quite high.

One way that I keep questions well-calibrated is that I use them to mock interview other members of the interview panel. This serves at least two purposes: one is to constantly remind myself what realistic answers sound like, and another is to help the rest of the panel understand the areas my question will cover.

I find that — for me — this type of mock interviewing and the subsequent retrospective cultivate empathy for candidates. I think this avoids the sort of bias you're observing.

(NB: Some of this may be specific to the kinds of questions I ask; I care less about the initial answer a candidate gives than their ability to self-assess their answer or incorporate feedback to improve their solution.)


There's also the fact that after giving an interview question for a while, you know what issues candidates struggle with and can figure out how to help them.

If you're interviewing correctly, your job as an interviewer is to ensure that the candidate succeeds during the interview. For example, one question might require checking whether two intervals overlap. There are multiple ways to check if they do. One can enumerate all the ways they can overlap (first interval fully contained within the second one, second interval partially overlapping with the first one, etc.), but this quickly grows into a complicated conditional. At that point, if the candidate struggles, you can ask "how would you check that two intervals don't overlap at all?", which is a much easier test that can then be inverted by the candidate.
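
To make the hint concrete, here is a minimal sketch of that inversion trick in Python (the closed-interval (start, end) representation is my assumption; the comment doesn't specify one):

    def overlaps(a, b):
        """Check whether two closed intervals (start, end) overlap."""
        a_start, a_end = a
        b_start, b_end = b
        # It's easier to state when they do NOT overlap -- one interval
        # ends entirely before the other starts -- and then invert that.
        disjoint = a_end < b_start or b_end < a_start
        return not disjoint

    assert overlaps((1, 5), (4, 8))       # partial overlap
    assert overlaps((1, 10), (3, 4))      # containment
    assert not overlaps((1, 2), (3, 4))   # disjoint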

What interviewers should be looking for is whether the candidate can think through a problem, split it into smaller steps, solve each step, and integrate each small step into a complete whole.


> I find it necessary to invest substantial effort in calibrating a new question. Before I ask a candidate a new question, I try to use it to mock interview at least 10 people I have worked closely with.

You could do that same process with the interviewee.

It would be a great way to see how collaborative they are.

i.e. you say "I've never actually worked through this problem myself, so I don't know how deep the rabbit hole is, but let's give it a crack and see what we come up with together."

Sounds a lot more like real-life to me than a manufactured question you've already worked through to the nth degree.


I don't think this would give me sufficient data to compare candidates in an unbiased way.

Without establishing objective criteria with which to evaluate candidates, it's much easier to fall back on decision making processes that are prone to unconscious bias.

A manufactured question may seem impersonal, but that's the point.


I call this "Trebek Syndrome", after the long-time host of the "Jeopardy!" game show, Alex Trebek.

Sometimes, all the contestants will miss a prompt, and Alex Trebek comes across as a bit smug or incredulous when reading out the correct response. ("No one? No one knew this? The correct response is....") I may have occasionally yelled at my television, "You have been the host for decades! They have to give you all the answers in order to run the show!"

And that's what it is when you think you're smart just because you know most of the answers on "Jeopardy!" The game is different when you're watching it (or hosting it) than when you're actually playing it competitively.

The interview processes that are designed to stress or fluster a candidate can make the divide between host and players even worse. There is nothing the candidate can do to get the interviewer fired. Almost anything the interviewer does can result in a no-offer.


The biggest lie you're told as a candidate and in any "mock interview" video a big tech company puts out is that the interview is collaborative. "They aren't your interviewer, they're your co-worker!" Go watch Google's example technical interview:

https://www.youtube.com/watch?v=wwIysnVmAUg

Then go interview with them. They don't stand next to you and engage while you're working at the whiteboard. Instead, they sit at the opposite end of the table and type out your code line by line on their laptops so at the end they can see if whatever you scribbled actually compiles.

The laptop requirement just encourages disengagement. I knew the problem and how to get to the optimal solution for my first round at Google so it didn't affect my performance, but I was pretty disappointed because my interviewer, between typing up the lines I wrote on the board, was completely occupied answering email and Slack messages and barely said anything or made eye contact aside from the occasional "mhm" or nod. Had I gotten stuck at any point I don't think I would have been able to get any help, certainly not any help like I would get from a coworker in a truly collaborative scenario.


I cannot for the life of me figure out how people smart enough to work at a FAANG company would subject themselves to this kind of psychological dog and pony show. If I started a coding exercise in an interview, and the interviewer was sitting far away looking at a laptop screen and typing, I would literally just walk out of the building unprompted and go home. How are people able to twist their own self worth enough to actually feel _validated_ by that process? This seems like ritualistic fraternity hazing brought to bear on the professional workplace, with the stakes ratcheted up to 100.


>I cannot for the life of me figure out how people smart enough to work at a FAANG company would subject themselves to this kind of psychological dog and pony show.

$300k+ total comps are a big part of it


> If I started a coding exercise in an interview, and the interviewer was sitting far away looking at a laptop screen and typing

I would take a deep breath and go at it, thankful that this unfriendly person isn't breathing down my neck while I'm solving the problem.

Although that hardly constitutes a test of how someone will work in a collaborative environment.


To be fair, quite a lot of real-world "collaboration" consists of one person doing most of the work, and then sharing the credit (whether willing or not) with someone else. I am very accustomed to telling someone the hows and whys and not seeing any acknowledgement signals coming back.

Real collaborations have a lot of tangents, and rabbit trails, and clarification requests, and elucidation sidebars, and restating what has already been established, and re-asking the core questions to see if they have been answered. If the other person is just nodding and saying "Mmhmm. Go on.", then that's no "true [Scotsman]" collaborator. It's a typical collaboration, though; there are two people in the room, and between the two of them, someone is doing the work.


I also experienced a room filled with silence that was only haltingly broken by a few sharply pressed keys. However, when I enthusiastically asked more questions and acted like I was having fun, the interviewer opened up quite a bit more. The interviewer got up from across the table and helped clarify a point they were making on the whiteboard. The conversation eventually got to the point where it felt something like that linked video, but boy did I have to dance for that.

My experience was that the video might show the ideal interview but not the default one. The default was tense silence and "okays" that barely restrained their judgement. The linked video seems to give an entertainingly wrong impression.


They aren't typing it to see if it compiles. They are typing it because somebody else makes the hire decision and they need a record of what happened in the interview.


> Had I gotten stuck at any point I don't think I would have been able to get any help, certainly not any help like I would get from a coworker in a truly collaborative scenario.

Or perhaps you got to find out exactly what it's like to work with that particular person.


Basically you're saying that as the interviewer keeps using the same question they increase how much they penalize people for not getting it?

I think one fix is to have explicit criteria for scoring a given question, and to train interviewers not to rely too heavily on a single question.

I noticed I had this same issue: I would give interviewees a pass if it was a sub-field I was unfamiliar with (JavaScript at the time) and be more critical if it was something I was really into (leveraging the type system to prevent different classes of errors).


>"Finally, time ran out, mercifully ending both my and the candidate's misery."

This is so incredibly unprofessional I'm shocked that this large search and ad company wouldn't pull this interviewer out of the rotation.

I hope this was just a joke. If I received that in feedback for a candidate I'd replace that interviewer in a heartbeat.


> "Finally, time ran out, mercifully ending both my and the candidate's misery."

The fact that the organisation not only employs people who behave like this in a professional setting but also tolerates it doesn't reflect very well on the organisation. I understand it's difficult to control for the variation in personalities at a large company, but it's not too difficult to set a culture where a base level of professionalism is expected and is the norm. I'm not sure how reviews work at your company, but I sincerely hope comments like this officially flag the person as not suitable for management, and possibly not suitable to continue in long-term service at the company.


Arrogance is hip at many tech companies.


Another reason to rotate questions more frequently, and maybe a reason interviewers raise their standards over time, is that there are large forums out there dedicated to cheating and sharing solutions for any notable tech company. As more and more questions are shared, more and more cheaters will perform optimally, raising the average performance for that question.

I could tell you which questions were asked at MTV this past week, and by looking at the last month of these posts for a specific office it's incredibly easy to come up with the most frequently asked questions and memorize their optimal solutions. This means that if you're someone who doesn't partake in these communities, you'll enter these interviews at a significant disadvantage, because your performance on that problem will be compared to everyone who was able to (legitimately or illegitimately) arrive at the optimal solution in 45 minutes.


Given the number of people giving interviews, the likelihood that you could gain much signal from those forums is limited.

Even the most common questions aren't common enough that you're sure to encounter one on an interview loop.


At Google, questions are chosen out of a predetermined question pool. It doesn't matter how many interviewers there are. It's small enough that your interviewers leave a note on a piece of paper at the end of each round indicating which question they picked, to ensure no two interviewers ask the same one. The question pool changes over time, but it rotates much more slowly than you would think. Furthermore, even having an idea of the "theme" is a huge advantage given the variety of topics that can be asked.


I work at Google and conduct interviews there. There is no pool of questions. Interviewers are invited, but not required, to share their questions with everyone else; not all do, and there's no approval process for them. I don't know if what you're saying ever was true, but it certainly hasn't been for the past ~5 years or so.


I can only speak to one office, so I shouldn't have assumed it was the case company-wide. I only know two things:

1) At the office that my friend (who is also an interviewer) works, there is a pool that the interviewers collaboratively built and generally select from. They aren't required to choose from this pool, but 95% of them do because they contributed to it.

2) At MTV there's either a pool or the number of interviewers is so low that the same questions are repeated over and over at high frequency for several weeks, making it trivial to know which ones are likely to show up. I know because the questions I found on the sketchy Chinese forums were exactly the ones I encountered on my onsite interview.


As an alternative to 2, you just got lucky. There's lots of interviewers and lots of questions. My office is smaller than MTV, though not by much, and I don't think I've seen the same question come up more than a handful of times.

I'm unclear if 1 is referring to a specific office pool, which would be weird, or the global knowledge base I mentioned, but that has lots of questions, more than one could hope to memorize in any reasonable time frame.


If you don't have any pool, how do you track banned questions? How do you know your question isn't already banned?


So, there's a pool insofar as, like I said, there's a thing you're invited to add your questions to. But you aren't required to ask questions from the pool (and I know many people who don't). And also the "pool" is huge, which is different from some other companies I've interviewed at, where interviewers are required to ask questions from a small (~10-15), preselected set.


The problem with your approach is that in order to make sure your made-up problem isn't banned, you have to go through all the banned questions every time you ask your question. That's a lot of unnecessary toil.


Thank you for bringing up empathy. I think you are on the right track with it.

I would like to throw an experiment your way. As you mentioned, the interviewer, and the interview questions themselves, can carry their own bias. How about pre-screening questions? First, you would pool the possible set of interview questions. Then, once a month, a meaningful subset (or subsets) of 'regular' interviewers is created. Each subset gets one question from the pool and is required to complete it in 'interview-like conditions' (closed room, no internet, just pen and paper). If the question has a poor answer rate with existing employees (aka interviewers), throw it out. It would be great if they explained what they did and did not like about the question and what would have made it easier. Would this process increase their empathy towards the interviewee? It needs to be frequent so it is a constant reminder.

Another, likely chaotic, option is to have the interviewer be randomly assigned a question from the pool. The question cannot have been their own submission. This way, both the interviewer and interviewee are seeing it for the first time and have to work toward a goal together (teamwork).

Changing topics from interviewing to interview metrics... I would be interested in how interviewee - interviewer age differences affected outcomes.


A few years ago, I interviewed at such a company. I left humility and empathy in my feedback to said company. The level of contempt I felt from the interviewers left me traumatized. I'm not joking.

I've ignored every potential job ad I have seen from said search and ads company ever since.


It's possible that you got unlucky with a first interviewer who had a bad day, and that bad mood reverberated. It happens. It's a noisy process. Don't judge them too harshly. (I don't work for said company.)


My interviewers were mostly middle-aged, i.e. around my age, and we had a pretty nice time: relatively easy questions/exercises, talk about current and future products/techs, etc. Overall a pretty typical interview, and I was kind of surprised what all the talk about their interviews was about. (Too bad the offer was a very low-comp L5 - that left me traumatized :). I guess you get what you pay for - i.e. I suppose one should pass their signature tough interview with the young hot-shots to get a good offer :)


>There are some other experiments I'd like to run.

Would it also make sense to have every harsh interviewer be interviewed by all the other harsh interviewers, and see how well they all perform on each other's questions? In each case, incentivize the interviewer to stump the interviewee, but using only the same questions they use in real interviews with prospective candidates, and link the interviewee's performance to their bonus or something. Put the harsh interviewers in the same situation with a similar amount at stake as prospective candidates, and remind them that nervousness and a setup far removed from typical working conditions can also be factors in interview performance.


The adrenaline probably won’t start pumping unless you tell them they’ll be let go for poor performance.


Said well known search and ads company was by far my worst ever experience as a candidate. For a company that seems to put such a focus on recruiting it's really a remarkably dysfunctional process. I've heard so many horror stories from candidates who have been through it.


> There are tactics that I think can be effective to bust those biases. One might be to put an upper limit on the number of times an interviewer is allowed to ask any given question.

I find it curious that you think this has to do with the particular question. This kind of bias comes up all the time, and it will certainly be in place even the first time you ask a particular question.

Simply growing in one's career and being around mostly other experienced people can quickly lead to one forgetting what it was like to be new to the field.

> It's total lack of empathy at that point, and if the candidate doesn't exude near-perfect interviewing brilliance on that specific question, the interviewer judges them as essentially worthless.

That's going way too far. Having poorly calibrated standards doesn't imply lack of empathy. Having an unreasonably high bar doesn't mean you think anyone under the bar is worthless.

> There are some other experiments I'd like to run. One of them is to have interviewers go through a one-hour interview themselves for every 50 or so interviews they give. Maybe match up an interviewer who has a track record of being especially harsh on candidates for not giving a flawless performance on the question they've been asking for a while. The idea is to see if we can't bubble up some empathy.

How about just recording data about interviews and using it as feedback to make the system better? If there are outlier interviewers with unreasonable interview-to-hire rates, deal with that situation directly.


Seeing the other side of the table has taught me to respect the randomness and arbitrariness of the process, and to take it a whole lot less personally. As a technical interviewer, you're asked one question: given the range of performances you've seen on the question you used, where does the candidate sit? Calibration, or familiarity with a wide range of performances, is what makes your answer credible.

The question is not "is this person good?" or "can they do the work?" The mandate is not "hire people who can do the work" and rejection is not "we don't think you can do the work." The mandate is "hire people we're confident that we're excited about, even with the imperfect information we can afford." Like a competitive college, we're going to pass on a lot of applicants who are probably fine. It's not a reflection on their worth as people or as professionals. It's a pragmatic tradeoff in the design of a machine.

It behooves all parties to cast their nets far and wide, and to not get emotionally invested in any particular match before the offer stage.


That sounds impressive but the rest of us need to eat. During the “tech shortage.”


Do these interview practices really make it difficult to eat? AFAICT they make it difficult to work at prestige companies. Companies that are prestigious in the first place due to their selectivity and its trappings, i.e. if it were easy to get work there, no one would care about working there in particular.


Everywhere cargo-cults this fashion now, even online greeting card companies and govt jobs. They are all chasing the top % with market wages, no matter how rudimentary the work. Hence the shortage.


> "Finally, time ran out, mercifully ending both my and the candidate's misery."

My first dev job was at a place that had a whole team like this. They would get themselves psyched up for interviews and code reviews talking about making someone cry today.

It was pretty disturbing.

I'm glad most devs aren't like this, but sadly, there are still too many who are.


Is there any type of analysis done on how effective each interviewer is at gauging a good candidate? How effective a particular question is? Or even how effective each type of interview is (behavioral, algorithms, practical, systems design, etc)?

For example, let's say a previous candidate that was brought in as an L4, rose to L6 extremely quickly while constantly getting great performance reviews - maybe you could look back at the type of questions asked, and the type of feedback that candidate received.

Additionally, let's say a previous candidate ended up being a non-performer and was quickly let go, you could also look back at their initial interview.

Maybe everything could be fed into ML, and once you have a model in place you can start getting signals based off a candidate's replies to behavioral interview questions, or certain characteristics displayed during their systems design interview, etc.


FWIW it sounds like the interviewers where you work are asking quiz types of questions, rather than behavioral questions. Perhaps you're missing out on scores of good candidates because of that -- while possibly letting savant horses in.


Being re-interviewed is very important. Every interviewer should be re-interviewed periodically, to see the other side and to calibrate the quality of your internal pool against external scores -- why does nobody care about that? It seems equally critical if you believe in the merit of your process. Once every 50 interviews is too far apart; once every 20 sounds about right. If they don't "pass", they should be made to re-interview again and again until they "pass", before they can interview others again. If you find all your interviewers are not available, you know you have a broken process.


I used to work at a valley big corp (social network) and interviewed a ton of candidates. My approach was as follows:

- Review candidate's resume and side projects to get to know more about them. This helped with getting to know their work and finding things to discuss beyond the interview exercise.

- Before diving into the exercise I would outright tell the candidate what I was looking for; e.g. "I'm not looking for a complete/perfect answer, but I'm looking to have a conversation about the pros/cons and edge cases. You can write on the whiteboard as little or as much as you want, but either way let's have a discussion. Let me know if you feel stuck at any point, and I'll make sure to let you know if you are/aren't on the right track. If you don't know/remember something, just ask me; it's fine, nobody knows everything."

- I didn't put too much weight on whether the candidate gave a complete answer or not, or how much I had to help them. I basically asked myself a simple question: "Do I feel like this person would be a productive contributor here and are they someone I would be able to work with?"

- I always did my best to go into the room with a relaxed and conversational attitude. I was there to pass the candidate and not to fail them for random reasons.

- I passed most people and only failed some when it was somewhat obvious to me that they really lacked some very basic foundational pieces, or when I felt like they weren't someone I would want to work with (for various reasons).

Towards the end of my career there I started to really dislike interviewing, because I would personally put so much effort into passing candidates, but they wouldn't get hired because other interviewers left feedback like "I helped the candidate too much" or "they didn't even know what a TRIE tree was" or "they struggled with X" or "the solution had bugs" or "the solution wasn't complete".

The interesting thing was that the same interviewers who failed candidates for seemingly random reasons were also the ones who left confusing feedback or not enough feedback. The same interviewers also either treated candidates poorly or showed boredom and agitation.


You guys should do a test: randomly have a team of 10 or so people re-interview, without disclosing to anyone that they already work there. I would bet a good amount of money that at least half of them would fail the interview loop or get thrown out in committees, easily 2/3rds at Staff level and above where committee is less accommodating. Record it. Publicize it internally. This will be the best "humility" training imaginable.


I had a great go-to coding exercise that I always ran with candidates, but I also worried that I was becoming biased in this way over time. All the issues look so obvious when you've seen them 100 times. I think sucking it up and developing new scenarios every six months is probably the solution at the end of the day, but that's hard to prioritize amidst everything else when you've got something that works.


Did you do something about these terrible interviewers? Your company is particularly famous for a terrible interview experience because of interviewers such as this. I've personally experienced them a few times myself, and if you know they are bad, hopefully you've taken them off the interview rotation and reeducated them.


> hint: it's big and is really good at things like search

Care to share the name? The search engine I currently use is garbage, it routinely ignores half my query.


yeesh, this reinforces my lack of desire to ever interview there. thanks for the detailed writeup.


>>One of them is to have interviewers go through a one-hour interview themselves for every 50 or so interviews they give.

This would only be effective if the interviewee faces similar consequences for failing that interview, e.g. losing their job (the equivalent of an actual candidate not getting an offer), or at the very least losing the ability to interview moving forward.


If Google forced people eligible for promotion to run the same interview hazing gauntlet as fresh candidates for that level, there's no doubt their interview process would align better with their job requirements.

They might also fix their comically bureaucratic promotion committee process at the same time. Seems like a win-win.


Nah, interviews are easier than promo committees. It's sometimes faster to leave, return and get hired at L+1 than to just get promoted there.


Or it would make the organization even more about hierarchies of domination.


It's not bias; you just had assholes who shouldn't be interviewing people interviewing people. This probably explains why Google's interviewing process is so toxic and dehumanizing.


Is your class internal?


This just reads like the same kind of total lack of empathy you are criticizing in these merciless interviewers, just at the meta level of displaying empathy, which you seem sure you’re oh so much better at.

Maybe it’s time to step back and admit that Google-style interviewing is itself intrinsically toxic and inhumane? It brings out the worst in everyone, reduces candidates to rote memorization and tortured puzzle solving, and very sincerely does not reflect the realities of the job (not even at Google).


> It's total lack of empathy at that point...

> There are some other experiments I'd like to run... to see if we can't bubble up some empathy.

I don't know dude, it doesn't sound like you're the authority on empathy if you think you can "bubble up some empathy."

As far as making fun of the preposterous egotism of the Google interviewing process is concerned, that line could definitely make it into the parody show Silicon Valley.

I mean, I don't really know. If you aspire to be better at recruiting--at being a people person of some kind--you gotta not say stuff that sounds utterly, hilariously disconnected from what normal human beings say.


This is a great suggestion that closely aligns with my own thoughts and experiences with technical interviews.

I've had some recent experience with technical interviews, and my first big takeaway was that the current interview process is broken largely because it's too common for the interview to never surface the _strengths_ of a candidate, and instead only to highlight their weaknesses. For every candidate, there is literally an infinite number of things they do not know. Surfacing those deficiencies has no purpose or value unless those weaknesses are _directly_ relevant to the job role–which is hardly ever the case when talking about DS&A questions.

The second lesson I learned is that interviewers need more training because there is a vast difference between good and bad interviewers–and almost all of it comes down to communication skills. If we don't finish a warm-up problem because it takes me 30 minutes to decode and understand the question that the interviewer is trying to ask...that's a problem.


> we don't finish a warm-up problem because it takes me 30 minutes to decode and understand the question that the interviewer is trying to ask...that's a problem.

I get submarined by people who use pretty open-ended problems as their warmup question. Clearly they didn't think about how many corner cases a production version of the solution would take. I'm not in R&D very often; my job is to make production-ready versions of solutions. I can't just turn off thinking about corner cases (and really, I have no interest in learning that habit, because it just seems like a liability).

This train of thought goes through my head while I'm also trying to answer the question and ask follow on questions and then I end up wondering about the chops of the interviewer. Are they a corner cutter? They picked this person as a spokesman for the group. Is the whole group a bunch of hacks?


We hired eight contractors all at once. Two day marathon of interviews.

One candidate clearly thought she'd flunked the interview within ten seconds of when I decided to recommend her. She got stuck on the problem (totally wrong answer from the code due to a typo) and started to crumple, but immediately went into the debugger to try to figure it out. She ran through a series of perfectly reasonable diagnostics, trying to zero in on the problem. I didn't even care if she found it at that point, because I could see that she would get it eventually, and probably every one after that. You don't get to see the engineering discipline the same way if you use the whiteboard.

People who can solve their own problems can often help other people solve theirs. I don't want to add someone who is nice but needs my help all day. That might fluff my ego, but it doesn't make us go faster.

I switched teams shortly thereafter (I was hiring my replacements) so I didn't get to work with her much, but I know she stayed on through the first contract renewal (not everybody did), so she must have worked out.


> You don't get to see the engineering discipline the same way if you use the whiteboard.

I don't think I agree. There's absolutely something to be said for the comfort of a familiar environment, but I think the interviewer should be able to emulate the compiler/debugger/runtime for the question they're asking. (Many of the most successful interviewees can do this themselves; they write down the program state on the whiteboard and step through it in their head.) Interviewers should be able to say "you get a SIGSEGV" and ask what the candidate would do. If the candidate says "I'd run gdb", they should be able to say "it says the crash was at this line", emulate break/print statements, and such. In some ways, it's slower than the candidate doing things, and more awkward to go through a human. In others, it's faster, because the interviewer can/should speed up the process by forgiving small syntax errors, saying "oh, you're bisecting? it's here", etc.

I do this sometimes when interviewing. I find though that the people who can successfully use a debugger (or me as a debugger) tend to have relatively minor errors in their code anyway[1]. It's pretty rare for someone to have a completely incorrect algorithm and figure that out from debugging.

[1] forgetting a guard on an if for an empty datastructure, forgetting to sort numerically instead of lexicographically, some dumb typo, etc.
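
As a concrete illustration of the footnote's sort example, a hypothetical sketch in Python (the string input is assumed; it's just the usual way this bug arises):

    # The input happens to arrive as strings, so a plain sort compares
    # them lexicographically instead of numerically.
    values = ["10", "2", "33", "4"]

    buggy = sorted(values)            # ['10', '2', '33', '4']
    fixed = sorted(values, key=int)   # ['2', '4', '10', '33']

    print(buggy)
    print(fixed)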


> but I think the interviewer should be able to emulate the compiler/debugger/runtime for the question they're asking

How does a human emulate the UI of a debugger? It seems like you would have much lower information-bandwidth and thereby inevitably end up not really presenting all the info at once that a terminal window is able to.


(Sorry, I missed your reply day of, so I'm replying much later.)

Yes, you're right. It's a bit awkward. I certainly wouldn't want to do my regular work by whiteboard + dictation. And if I were designing my company's interview approach I might allow candidates to bring in a laptop to work on a toy problem in their chosen IDE.

But my point is that I don't think debugging skills are completely impossible to test in this way, and the forced interaction allows you to learn what piece of information they're looking for and why. If this is how you have to interview, you might as well find the best aspects of it and use them.

I think a lot of the key of getting useful signal from an interview is to ask a simpler problem. If you ask someone to write and debug really complex code in 45 minutes, you'll just find out if they can write code under pressure really fast. That's a great skill, but I care more about communication: being able to ask good requirements questions, describe the data structure/algorithm so that teammates will understand it, teach people how to work through problems, etc. I think overly complex coding takes away the time I spend examining those things. Likewise, there are a few criteria besides "coding" and "communication" which I also want time to focus on.

When I look through the notes from the full interview panel, my questions often seem to be simpler than others. My feedback appears to correlate more strongly with actual hiring than average, so I think it's persuasive. Of course, I don't know if it correlates well with how people would actually perform if we hired them. I don't even know if the people we did hire are performing well, because I work at a big company and the people I hire generally don't end up on my team.


Unfortunately, some insecure developers sign up to be interviewers precisely to feel superiority and power over interviewees. I've both worked with and been interviewed by some of these people - they seem particularly prominent in the valley. Anecdotally, Google was the worst of the big tech companies with cocky, condescending interviewers that came in wearing Ivy league sweaters. By contrast, all of my interviews at mid-level tech companies were equally difficult technically, but much more pleasant and engaging.


Some of the people who I would like most to interview at my company are opting out of interviewing because of their own self-doubt.


This seems like politics as well. I know a lot of people with well-formed, thoughtful opinions that I think would do a great job making laws, but have enough self-doubt/disinterest that it's never going to happen.


You know what would be fun and really engaging? Watching a subject-matter-expert and a skilled lawyer pair-program on writing the outline of some legislation.


Why would you interview someone you already worked with? What?


All interview material should be run through mock interviews with internal developers before anyone outside the company is asked. This is the first step to preventing legitimately stupid questions from making their way into your process.


They would like those people to participate in interviewing candidates.


>>Unfortunately, some insecure developers sign up to be interviewers precisely to feel superiority and power over interviewees.

Had this experience recently, in an interview of about 2 hours. The interviewer bragged and boasted about himself for a good 1.5 hours. In the 30 minutes of question/answer time I got, he would routinely and very rudely interrupt me to tell me, in a humiliating tone, that I was wrong.


Care to share the name of this company? I think the names of these companies should be made public for the general good of HN audience.

I know not everyone at that company would be as humiliating as this interviewer, but it says something about a company if they hire such people and put them in the chair of an interviewer, no less.


I mostly focus on consistency in this post (how to make a group of interviewers more consistent). Of course, what actually matters is accuracy (predictive utility of the interview). Obviously those are not the same thing (you could run 100% consistent interviews by just having everyone always grade “strong no”). However, I think (based on running the interview team at Triplebyte) that inconsistency is the primary obstacle to accuracy in practice. So I end up spending the majority of my time focusing on how to make interviewers more consistent.


In psychometrics, the field that studies the same thing you're trying to do, the concepts you're referring to as "consistency" and "accuracy" are known as "reliability" and "validity".

It's somewhat striking to me that you seem so worried about these concepts but you don't seem to be aware of the normal terms for them. How much does TripleByte try to inform itself of the existing research in this field? To what extent does TripleByte seek to incorporate psychometric results about what kinds of tests are likely to have high reliability and construct validity?

And one more specific question:

> what actually matters is accuracy (predictive utility of the interview)

What is it that you're trying to predict? You could be trying to find employees who will be good employees, which would put TripleByte in the business of credentialing, or to find employees who will pass interviews at other companies, which would make TripleByte a recruiting agency. In the past, Harj has been explicit that what TripleByte wants to predict is whether a candidate will successfully pass the hiring process at another company, regardless of how well that hiring process performs. Is this still true?


I’m guessing they know the terms, but 'consistency' and 'accuracy' are just more common, easier-to-understand terms that don't sacrifice meaning. I've talked to some TripleByte engineers and been very impressed by the sophistication of the statistics and experimental methodology they use.


Using an experimental methodology isn't something you should be impressed by in itself. Experimenting means you don't know what you should be doing. That's great if no one knows what you should be doing and you're trying to figure it out; it's less great if everyone else knows what you should be doing, but you don't.

The psychometric literature is pretty robust.


One specific conversation I had was how they read the literature on the best adaptive testing systems and then developed improvements tailored to their specific data and the advantages they had as a real time online test.

Another was specifically on the psychometric literature and the big meta-analyses of the predictiveness of different testing factors on job performance, how that influenced the experiments they did early on, and how they honed in on what they do today, as well as the downsides they discovered in various methods that people commonly suggest they're ignoring.

I came away from those conversations extremely impressed with TripleByte's employees and competence as an organization. They definitely think about this stuff.


Well, consistency and fairness might have the added benefit of goodwill towards the company from the folks who don't get hired.


Were you able to measure the outcome of this process? I.e., was there a significant reduction in inconsistency, inaccuracy, or similarity bias after the exercise?


Seems reasonable; after all, precision isn't the same thing as accuracy, but you can't get accuracy without precision.


Interesting idea:

> Then, as the interview progresses, do exactly this. About half the time give your best answer. The other half of the time give an intentionally poor answer. ...

> What this does is free your co-worker to be 100% honest. They don't know which parts of the interview were really you trying to perform well. Moreover, they are on the hook to notice the bad answers you gave. If you gave an intentionally poor answer and they don't “catch” it, they look a little bad. So, they will give an honest, detailed account of their perceptions.

This reminds me of the second part of the Rosenhan experiment [ http://psychrights.org/articles/rosenham.htm ]:

> The following experiment was arranged at a research and teaching hospital whose staff had heard these findings but doubted that such an error could occur in their hospital. The staff was informed that at some time during the following three months, one or more pseudopatients would attempt to be admitted into the psychiatric hospital. Each staff member was asked to rate each patient who presented himself at admissions or on the ward according to the likelihood that the patient was a pseudopatient. A 10-point scale was used, with a 1 and 2 reflecting high confidence that the patient was a pseudopatient.

> Judgments were obtained on 193 patients who were admitted for psychiatric treatment. All staff who had had sustained contact with or primary responsibility for the patient – attendants, nurses, psychiatrists, physicians, and psychologists – were asked to make judgments. Forty-one patients were alleged, with high confidence, to be pseudopatients by at least one member of the staff. Twenty-three were considered suspect by at least one psychiatrist. Nineteen were suspected by one psychiatrist and one other staff member. Actually, no genuine pseudopatient (at least from my group) presented himself during this period.

There is a version of this exercise you could do where you say you are intentionally giving bad answers and give none!!!


> There is a version of this exercise you could do where you say you are intentionally giving bad answers and give none!!!

This might seem clever, but it could backfire. If the other person trusts you, they will take it as axiomatic that some of your answers are bad; therefore, if all of your answers are actually pretty good, they will desperately look for nits to pick, and possibly end up making criticisms that they don't really believe in (or at least wouldn't have believed in when unbiased). This can take you from one extreme (too polite/respectful/humble to be critical) to another (finding things to criticise no matter what), skipping the middle ground that you really want.


I'd be interested to see what happens if an interviewee turns the table under the guise of "interviewing the company," which is a concept promoted from time to time in interviewing threads.

When your interviewer is telling you about their role in the company and a little about their history (if they have one), say it's a dev interview because why not, ask them how they rate their programming skill on a scale from 1 to 5. Ask them why they left their last company. Ask them what the most difficult thing they've ever achieved is.


All joking aside, candidates do reject jobs, and if you are interviewing for a job, you should be thinking "do I want to work here?"

When interviewing someone to be my manager, I asked him, since he spoke with such joy about his current job, why he was leaving... It was money. We found out the company he was at got funding a few days later, and although we made an offer, he rejected us.

I've interviewed at a company where the environment didn't seem good at all. I always ask what's good about working at a place, and what isn't so good. I found people to be honest about it. I didn't think it was a good fit after thinking about it a bit and decided not to continue.

So when interviewing, don't be arrogant. You need help; that's why you are looking for people. When interviewing candidates, my main criteria are:

- Can this person help us

- Can I deal with working with them


The interviewer usually makes some sort of excuse then gets right back to the actual reason they're in the room with you: leetcode 101.


You could ask the interviewer to whiteboard something for you, too.


I mean, sure, that’s a possibility. But some people are just really good at whiteboarding problems and explaining solutions. Sometimes these people end up being technical interviewers. If you’ve been a technical interviewer for a while, you don’t find the experience stressful. After a while of doing technical problems they start to seem easier and easier, even when presented with evidence to the contrary (when candidates find them difficult).

Speaking as someone who is (1) a technical interviewer (2) good at whiteboarding problems and (3) has near zero fear of public speaking, I think the big exercise in humility is to figure out ways to get evidence for the interviewee’s skill set even when it’s dissimilar to my own.


It would be weird, because we rarely whiteboard people.

We've found that the three best predictors of good hires are Curiosity, Ability to Self-Learn, and Ability to Listen.

Good hires will generally have two or three of these.


This is great. I seriously doubt most interviewers could pass the technical tests they give out if they were incurring the huge cognitive load of interviewing for a company while trying to do it.

I'd love to see actual pair programming between interviewer and interviewee, where they are assigned a random (small) code project and have to work on it together, with neither having prior knowledge. It would level the playing field a bit and is much closer to actual working conditions than being forced to write code under duress and under close real-time examination.


Pivotal does this. The interview is usually just a "day on the job".


I love that idea!


Very good points; I agree with them.

However - I don't think this is about 'ego' or even 'humility' - I think those are not the right words.

It's a lack of contextual understanding, both in 'self-awareness' and of the interviewee's plight.

I think the premise can be taught.

Also, I think interviews can be structured to find qualities independent of background.

+ Questions that don't measure a person's ability to 'memorize algorithms' are a good start.

+ Allowing devs to pick their language of expression; sometimes they are more comfortable in one language than another.

+ Don't get syntax/code structure confused with the abstract problem if that's what one is going for. Google has a nice interview example [1]

+ Open ended questions with many possible turns allow for a 'good' thinker to just go a lot further, and be more impressive while at the same time allowing junior devs to still walk through and complete something. The Google example is again good here.

+ Time/on the spot - one of the worst issues. Personally I'm about 50/50: sometimes 'in the flow', sometimes not, but surely just given a little bit of time, I'd be fine on most things. For this reason, giving interviewees an intro to the problems, and giving them as much time as they want to think about them before the interview starts, might be worthwhile as well. 'Let us know when you want to go over a solution.' This could work well for pedantic things such as 'here's some code, find some bugs' or 'how would you structure this differently', etc.

[1] https://www.youtube.com/watch?v=XKu_SEDAykw


I went through one of TripleByte's interview processes. The interviewer was smug and condescending.

Thankfully it was remote and not during my work hours, so little was lost.


Why not pick a problem the interviewer has not solved, and let the interviewee and interviewer solve it together? This shows how the candidate thinks and how they work with others. It will also give the interviewer a more unbiased view of the difficulty of the algorithm or CS problem.


Give pointers if necessary in a technical interview, and see what questions they ask. If they are completely flubbing it and you already know it's a no-go, call it off. It's a little shocking to hear, but within a few minutes they will generally be grateful/understanding that you didn't waste two more hours of their time.


“What this does is free your co-worker to be 100% honest. They don't know which parts of the interview were really you trying to perform well.”

Since there was no mention of it in the post, this is called “randomized response,” and is a building block for modern privacy-preserving protocols e.g. RAPPOR, which is used in Google Chrome: https://security.googleblog.com/2014/10/learning-statistics-...
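
For the curious, here is a minimal sketch of the textbook coin-flip variant of randomized response (the simple ancestor of RAPPOR, not RAPPOR's actual mechanism; the function names and parameters are my own illustration):

    import random

    def randomized_answer(true_answer, p_truth=0.5):
        # With probability p_truth, report the truth;
        # otherwise report a uniformly random yes/no.
        if random.random() < p_truth:
            return true_answer
        return random.random() < 0.5

    def estimate_true_rate(responses, p_truth=0.5):
        # E[observed] = p_truth * true_rate + (1 - p_truth) * 0.5,
        # so invert that to recover the population rate.
        observed = sum(responses) / len(responses)
        return (observed - (1 - p_truth) * 0.5) / p_truth

    # Example: 10,000 respondents, 30% of whom would truthfully say "yes"
    truths = [random.random() < 0.3 for _ in range(10000)]
    reports = [randomized_answer(t) for t in truths]
    print(estimate_true_rate(reports))  # hovers around 0.3

Each individual answer is deniable, but the aggregate is still informative, which is exactly the property the interview exercise exploits.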


There are some factors that affect interviewee performance that are seldom considered. The stand-out I noticed is that the quality of whiteboard coding is a function of whiteboard size: the bigger, the better.


Nice. I also feel like I need humility training to work with and evaluate newbie devs. They can't do anything right. Not sure if it's just hard to find people who know what they're doing, or I've forgotten what it's like in your first few years.


I don't really agree with ego = 1/knowledge; I believe that my knowledge has been increasing over time while my ego has been fluctuating in an uncorrelated way.


Brilliant.


Don't. Work. For. FAANG. Money. Isn't. Everything.


Why is interviewing not automated yet?


The technical part is, at several companies (including the one I work for).

A lot of companies now rely on tools such as Codility / Leetcode / HackerRank for technical screening, or their own in-house tests.


How would you determine if the person is a good fit culturally via an automated system?


Have a human-to-human interview asking behavioral and background questions, and automate all the whiteboard/leetcode parts with boilerplate Q&A.


Oftentimes the most valuable signal comes from how the candidate got to an answer, rather than the exact answer they arrived at.


When you apply to graduate school, they don't ask you to solve calculus problems on a whiteboard to get the 'signal'; the signal comes from (1) standardized tests, (2) prior work, (3) recommendations, and (4) behavioral interviews.


So the entire purpose of whiteboard interviews is to determine whether the candidate got the answer, that's it? There are no other elements in play there?


This should be similar to the GRE, SAT, or any other standardized IQ test.


How do you determine the difference between finding "a good fit culturally" and furthering a monoculture?


One approach is to define a set of values and communication habits that you believe you need people to be aligned on. That way, you allow for natural variation outside of that and you avoid giving your culture-fit interviewers a vague task.


How would you automate it?


Replace human interviewers asking you to flip a binary tree with a robot.


So basically you're asking for a HackerRank screening. In my experience, if your code doesn't pass 100% of the test cases, you get automatically rejected. At least with a human interviewer, they can evaluate your thought process even if the code itself isn't quite correct. Trust me, you don't want to be interviewed only by robots.
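
For anyone unfamiliar, the 'flip a binary tree' question mentioned above is usually stated as inverting the tree: swap every node's children. A minimal Python sketch of the kind of canonical answer an automated judge would run test cases against (class and function names here are just illustrative):

    class Node:
        def __init__(self, val, left=None, right=None):
            self.val = val
            self.left = left
            self.right = right

    def invert(root):
        # Recursively swap each node's left and right subtrees.
        if root is None:
            return None
        root.left, root.right = invert(root.right), invert(root.left)
        return root

An automated judge only checks whether the output matches; a human interviewer can see that you understand the recursion even if you fumble a base case.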



