Suppose you have a test that is a decent proxy for how well someone will do a job. The median person currently doing the job scored 85, and scores among current employees range from 70 to 99. If you put someone who scored a 4 in the job, people will die almost immediately. If you put someone who scored a 50 there, people will be at a higher risk of death, and you'd be better off passing on that candidate and waiting for a better one. From this we might come up with a threshold of 70 for the minimum score and call it "qualified". Then if you have to fill 5 slots and you get candidates scoring 50, 75, and 95, you should hire the latter two and keep the other slots unfilled until you get better candidates.
But if you have to fill 5 slots and you have 10 candidates who all scored above 70, you now have to choose between them somehow. And the candidates who scored 95 are legitimately expected to perform the job better than the ones who scored 75, even though the ones who scored 75 would have been better than an unfilled position.
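To make that selection rule concrete, here's a minimal sketch (my own illustration, not anything from the article; the threshold of 70 and the 5 slots are just the numbers from the example above):

    # Sketch of the threshold-then-rank rule described above (illustrative only).
    THRESHOLD = 70  # minimum "qualified" score from the example
    SLOTS = 5       # open positions from the example

    def select(scores):
        """Drop unqualified candidates, then fill slots best-first.
        Any slots left over stay unfilled until better candidates come along."""
        qualified = [s for s in scores if s >= THRESHOLD]
        return sorted(qualified, reverse=True)[:SLOTS]

    print(select([50, 75, 95]))
    # -> [95, 75]: two hires, three slots stay open
    print(select([72, 75, 78, 81, 84, 88, 91, 93, 95, 99]))
    # -> [99, 95, 93, 91, 88]: with 10 qualified candidates, rank them and take the top 5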
According to the article, they actually tested the first assumption and it held.
The second assumption is not required. If people who score a 95 are only 5% better at the job than people who score a 70, all else being equal you'd still pick the person who scored a 95, given the choice.
Non-linear doesn't mean "still monotonic". My experience has been that beyond a certain threshold on a given test, job performance is essentially uncorrelated with test performance.
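To spell out the distinction (a toy example of my own, not anything from the article): a relationship can be monotonic yet carry no information above the threshold, in which case preferring a 95 over a 75 buys you nothing.

    # Toy predicted-performance curve: non-decreasing in score, but flat above the threshold.
    def predicted_performance(score, threshold=70):
        return 0.0 if score < threshold else 1.0

    # Monotonic: a higher score never predicts worse performance...
    assert predicted_performance(95) >= predicted_performance(75)
    # ...but above the threshold the extra points carry no signal at all.
    assert predicted_performance(95) == predicted_performance(75)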
As for the article, it hasn't given me particularly solid vibes, a feeling not helped by some of the comments here (both pro and con).
Satisfying the first assumption means "still monotonic".
Also, if you had a better test then you'd use it, but at some point you have 10 candidates and 5 slots and have to use something to choose, so you use the closest approximation available until you can come up with a better one.
> Satisfying the first assumption means "still monotonic".
Sorry, but I just don't agree. There are "qualifying tests" for jobs that I've done that just do not have any sort of monotonic relationship with job performance. I'm a firefighter (volunteer) - to become operational you need to be certified as either FF I or FF II, but neither of those provides anything more than a "yes, this person can learn the basic stuff required to do this". The question of how good a firefighter someone will be is almost orthogonal to their performance on the certification exams. Someone who gets 95% on their IFSAC FF II exam is in no way predicted to be a better firefighter than someone who got 78%.