
> They claim that instead of a real-world accuracy, they wanted to find the “max” accuracy that their classifier was statistically capable of

Yeah, I read this on the GitHub issue a week ago and couldn't believe it. Ideally, someone with their profile(1) should be able to quickly admit they were wrong on such a simple issue. Pursuit of truth and knowledge, etc.

(1) a young PhD from a prestigious university

> For example, I think a neural network is capable of achieving a “max” accuracy of 100%

Why reach for such powerful tools? f(x) = random(num_classes) already achieves a 100% "upper bound" accuracy.
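
To make the joke concrete, here's a rough Python sketch of that methodology (the 10-sample/2-class toy setup is made up, and this is my reading of their "max" definition, not their actual code): score a uniform random guesser repeatedly and keep the best run.

    import random

    # Hypothetical toy test set: 10 samples, 2 classes.
    labels = [random.randrange(2) for _ in range(10)]

    def random_classifier():
        # Ignores the input entirely; guesses a class uniformly at random.
        return random.randrange(2)

    # "Max" accuracy: the best score observed over many evaluation runs.
    best = 0.0
    for _ in range(100_000):
        preds = [random_classifier() for _ in labels]
        accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        best = max(best, accuracy)

    print(best)  # ~1.0: given enough runs, pure guessing scores 100% at least once

The only thing this "upper bound" measures is how many times you were willing to re-run the evaluation.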




If I were confronted with this kind of nonsense in my data science job, I would lose all respect for the person who produced it and never thereafter trust anything they said without thoroughly vetting it.

There are only two options here: deceptive or hopelessly incompetent.


> Ideally, someone with their profile(1) should be able to quickly admit they were wrong on such a simple issue.

Academia doesn't have a culture of admitting mistakes. Retracting a paper is typically seen as shameful rather than as progress made by scrutinizing results.

Combined with the pressure to publish and sometimes limited engineering skills, this makes for a volatile mix. A lot of published results are simply not reproducible when you try (nothing new here; see the replication crisis).


In its native environment, the scientist reserves its fiercest attacks for its competitors: by fighting tooth and nail, it can render its environment uninhabitable for all but the most determined adversaries. Sometimes this ensures access to desirable mates, but not always.



