Presumably there's more to this than comes across in your comment.
After all, you don't avoid the unconscious bias of a single mind by adding more minds. That just gives you three sets of unconscious bias and adds biases caused by group dynamics.
Do you have a link? I may be googling the wrong terms.
You can still avoid most of the effects of a worst-case bias by adding two additional measurements.
Rather than one person for 30 minutes (a single measurement), with three people who all sat through the same conversation, you are far less likely to have all three come away with an impression unconnected to the substance of the conversation.
Any given bias skews consistently in one direction for a particular person, but different people's biases don't all point the same way. (Some people are biased in favor of Harvard/Ivy League graduates; other people are biased against those exact same candidates. Bias is not, by definition, unidirectional across all people.)
The YC partners are trying to be similarly biased against entrepreneurs who (they believe) will not be successful in the program.
They are much less likely to be similarly biased against irrelevant factors like accents, mannerisms, backgrounds, etc.
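Here's a toy sketch of that statistical claim (my own made-up numbers, nothing from YC's actual process): if each panelist's irrelevant bias is independent and can point either way, averaging three scores shrinks the expected error by roughly 1/sqrt(3) versus a single score.

```python
import random

# Toy model of the argument above: each interviewer sees the same true
# quality, plus an idiosyncratic bias (which can point either way across
# people) and some random noise. All numbers are made up.
random.seed(0)

TRUE_QUALITY = 7.0
TRIALS = 100_000

def one_score():
    bias = random.gauss(0, 1.0)   # per-person bias; direction varies by person
    noise = random.gauss(0, 0.5)  # minute-to-minute randomness
    return TRUE_QUALITY + bias + noise

err_one = sum(abs(one_score() - TRUE_QUALITY) for _ in range(TRIALS)) / TRIALS
err_three = sum(
    abs(sum(one_score() for _ in range(3)) / 3 - TRUE_QUALITY)
    for _ in range(TRIALS)
) / TRIALS

print(f"mean error, 1 interviewer:  {err_one:.2f}")   # roughly 0.89
print(f"mean error, 3 interviewers: {err_three:.2f}") # roughly 0.52, ~1/sqrt(3) of it
```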
> They are much less likely to be similarly biased against irrelevant factors like accents, mannerisms, backgrounds, etc.
They're not less biased, they just average out their biases over the group.
Your assumption is that three people chosen from a fairly homogeneous pool are going to cancel out each other's biases, which is... optimistic.
I don't know from this conversation what they're actually doing, but what they should be doing is using a diverse set of opinions to create a fixed set of questions and a fixed marking scheme, sticking to it for that round of interviews, and then looking back over time at every interview question and analysing how well it predicted later outcomes.
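For what it's worth, that "looking back over time" step could be as simple as this kind of per-question analysis. Everything below - the questions, scores, and the success label - is hypothetical:

```python
from statistics import mean

# Sketch of the retrospective step suggested above: for each fixed
# question, compare the scores of candidates who later succeeded against
# those who didn't. The data here is entirely made up.
results_by_question = {
    "q1_explain_your_metrics": [(4, True), (2, False), (5, True), (3, False)],
    "q2_describe_a_pivot":     [(3, True), (3, False), (2, True), (4, False)],
}

for question, results in results_by_question.items():
    succeeded = [score for score, ok in results if ok]
    failed = [score for score, ok in results if not ok]
    gap = mean(succeeded) - mean(failed)
    print(f"{question}: score gap (success - failure) = {gap:+.2f}")

# A question whose gap sits near zero across many rounds isn't predicting
# anything and is a candidate for replacement in the next round.
```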
If you think they're sub-optimizing because of biases and a poor process, maybe that represents an opportunity for you or someone else to use your method to outcompete them.
Their track record suggests they're doing pretty well.
So your argument is that they should be above examination of their interview process because their investments are doing well? Come on, you're just arguing for the sake of it now.
Multiple independent assessments are great at reducing random noise. Bias is noise, sure, but by definition it isn't random, so you need other forms of intervention to counter it.
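A quick sketch of the difference, with illustrative numbers only: averaging over a panel shrinks the random part of the error, but a bias the whole panel shares survives the average untouched.

```python
import random

# Averaging n independent scores shrinks the random-noise term by
# ~1/sqrt(n), but a bias shared by the whole panel passes straight
# through the average. Numbers are illustrative.
random.seed(1)

SHARED_BIAS = -1.0  # e.g. the entire panel discounts a particular accent
TRIALS = 100_000

def panel_average(n):
    return sum(SHARED_BIAS + random.gauss(0, 1.0) for _ in range(n)) / n

for n in (1, 3, 9):
    averages = [panel_average(n) for _ in range(TRIALS)]
    m = sum(averages) / TRIALS
    spread = (sum((a - m) ** 2 for a in averages) / TRIALS) ** 0.5
    print(f"n={n}: systematic offset {m:+.2f}, random spread {spread:.2f}")

# The offset (the bias) stays at about -1.0 no matter how many
# interviewers you add; only the spread (the noise) shrinks.
```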
A huge part of it is the discussion that happens afterwards between the different observers, which puts the onus on each of them to check for biases and to map their signals onto objective parameters.
If I understand you correctly I think this is misleading.
Discussing candidates after an interview allows social dynamics within the group to distort the signal, so you lose much of the value of taking independent data points. Not only does it not reduce bias in the way you seem to suggest, but you also lose some of your ability to reduce random noise, because the noise from the more dominant interviewers gets amplified.
I don't have time to dig out citations, but a good starting point would be "What Works: Gender Equality by Design" by Iris Bohnet. She's one of the world's leading academics studying how biases are affected by different hiring techniques.
You seem to have made some (incorrect) assumptions based on very little text. Let me explain the process in somewhat more detail.
At my last company ($100B market cap, publicly traded, extremely data-driven), we interviewed candidates in groups of 2 (occasionally more), against clearly defined criteria, looking for signals in either direction.
During the interview, each interviewer looks for evidence to gather those signals - the stronger the better. The purpose of the process is for the interviewers, between them, to gather signals on all the criteria, preferably strong ones in either direction, though of course bounded by the limited time available.
Once the interview is over, each interviewer independently jots down the signal strengths and the supporting evidence on a scorecard, and recommends an outcome.
Later, during calibration, the signals and evidence are presented to the interviewing peer group (recruiter, hiring manager, interviewers from other rounds). This pretty much disallows unconscious bias such as "I don't think Alice would be a good team lead" (because she is a woman, and women are not good managers) or "We should not hire Amit" (because he is Indian, and Indians write poor code).
Again, the examples are deliberately in-your-face, but unconscious bias is unconscious: it goes unchecked in the absence of having to defend your perspective to external parties with supporting evidence, which is exactly the situation when there is only a single interviewer.
Think of it as rubber-duck debugging for interviews and biases: it keeps your own unconscious bias as an interviewer in check.
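As a rough sketch of the shape of that flow (the field names, scoring scale, and example data below are my guesses, not the actual system): the key property is that scorecards are written independently, and every signal has to carry evidence before calibration.

```python
from dataclasses import dataclass, field

# Rough sketch of the scorecard flow described above. Names, scale, and
# data are assumptions for illustration, not the company's real system.
@dataclass
class Signal:
    criterion: str  # e.g. "coding", "system design"
    strength: int   # -2 (strong negative) .. +2 (strong positive)
    evidence: str   # what the candidate actually said or did

@dataclass
class Scorecard:
    interviewer: str
    signals: list = field(default_factory=list)
    recommendation: str = ""  # filled in independently, before calibration

# Each interviewer submits before seeing anyone else's card:
cards = [
    Scorecard("interviewer_a",
              [Signal("coding", +2, "wrote a correct solution, then simplified it unprompted")],
              "hire"),
    Scorecard("interviewer_b",
              [Signal("coding", +1, "solid solution, needed one hint on edge cases")],
              "hire"),
]

# Calibration: every signal must point at concrete evidence, which is
# what makes "gut feel" objections hard to smuggle past the peer group.
for card in cards:
    for signal in card.signals:
        assert signal.evidence, f"{card.interviewer}: signal with no evidence"
```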
> Later, during calibration, the signals and evidence are presented to the interviewing peer group (recruiter, hiring manager, interviewers from other rounds). This pretty much disallows unconscious bias such as "I don't think Alice would be a good team lead" (because she is a woman, and women are not good managers) or "We should not hire Amit" (because he is Indian, and Indians write poor code).
You've explained that your interview process has a predetermined scoring system, which is a good start. I'm curious what the effect of this calibration stage is... did your company run predictive-validity and bias analyses on it?
Surprised they don't do 2 interviewers for 15 minutes, or 1 for 30.