Yeah, I fully agree. They should just admit the mistake rather than try to justify it. I was just trying to explain the incentive structure around them that encourages this behavior. Unfortunately no one gives you points for admitting your mistakes (in fact, you risk losing points) and you are unlikely to lose points for doubling down on an error.
> There is no way they could be at the stage they claim to be in their program (having just defended their thesis) and think the excuses they gave on GitHub are reasonable.
Unfortunately it is a very noisy process. I know people from top-3 universities with good publication records who don't know probabilities from likelihoods. I know students and professors at these universities who think autocasting your model to fp16 halves your memory (from fp32) and are confused when you explain that that's a theoretical (not practical) lower bound. Just the other day someone opened an issue on my GitHub (a person who has a PhD from one of these universities and is currently a professor!) expecting me to teach them how to load a pretrained model. This is not uncommon.
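To make the fp16 point concrete, here's a back-of-the-envelope sketch (pure arithmetic, hypothetical 7B-parameter model) of why halving is only a theoretical floor: mixed-precision autocast (e.g. PyTorch's `torch.autocast`) typically keeps the fp32 master weights around and materializes fp16 copies for compute, so in the naive case you hold both.

```python
# Back-of-the-envelope parameter-memory estimate.
# Hypothetical 7B-parameter model; real training memory also includes
# activations, gradients, and optimizer state, which this ignores.

n_params = 7_000_000_000

fp32_bytes = n_params * 4  # 4 bytes per fp32 value
fp16_bytes = n_params * 2  # 2 bytes per fp16 value: the theoretical floor

# With autocast, fp32 master weights typically stay resident while fp16
# copies are created for compute, so memory is not simply halved --
# in the worst case both copies coexist.
autocast_bytes = fp32_bytes + fp16_bytes

print(fp16_bytes / fp32_bytes)      # 0.5 -- the theoretical lower bound
print(autocast_bytes / fp32_bytes)  # 1.5 -- naive worst case with both copies
```

The exact numbers depend on the framework and on what fraction of ops actually run in fp16, but the point stands: "cast to fp16" does not mean "half the memory."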
Goodhart's Law is a bitch.