> its pattern of strengths and weaknesses in code reviews is not the same as human patterns
I agree with you, which is why I say they complement each other. But your reply to Baron implies that what they were suggesting wasn't a solution. I agree with the post's sentiment about doing things in person, but what I take from Baron is that the process is much harder to fake with GPT, because the code review isn't really about finding the bugs; it's about watching someone perform the review (presumably over screen sharing and video chat). You could do this entirely virtually, though I think you're right to imply that the task could then be optimized against.
But at the end of the day, I think the underlying issue is that we're testing the wrong things. If GPT can do the task sufficiently well, then what do we need the human for? Well... the actual coding, logic, and nuance. So we need to see how a human performs in those domains, and your interviewing process should adapt with the times. It's like a calculus exam that bans calculators but is full of rote, mundane, and arduous arithmetic: that isn't testing the material the course is about, and it isn't making anyone a better mathematician, because any mathematician in the wild will still use a calculator (mathematicians and physicists are often far more concerned with symbolic manipulation than with numerals).
> other important factors in programming include things like knowing how to program, knowing how to listen, and being willing to ask for help (and accept it), which a board game generally will not test
I agree the game won't help with the first part, but I didn't think I needed to explicitly state that you should also require a resume and ask for a GitHub if they have one. I did explicitly say you should ask relevant questions. As for the latter two, I'm not entirely convinced; those are difficult skills to check for in any setting. There's a large set of collaborative games that do require working and thinking as a team. I was really just throwing a game out there as a joke, more as a stand-in for an arbitrary setting.
At the end of the day, interviewing is a hard process and there are no clear-cut solutions. But hey, we had this conversation literally a week ago: https://news.ycombinator.com/item?id=40291828