The paper is about 4 pages long - it takes about as long to skim it as it did for you to write that comment, and skimming it reveals that what you mentioned is exactly why they did the study:
> While our deep learning model was specifically designed for the task of sex prediction, we emphasize that this
> task has no inherent clinical utility. Instead, we aimed to demonstrate that AutoML could classify these images
> independent of salient retinal features being known to domain experts, that is, retina specialists cannot readily
> perform this task.
It always amazes me how people spend 5 seconds reading a headline but think they know more than someone who has spent days or months on the same topic.
Sorry I misinterpreted then. I thought you were dismissing it out of negativity but actually it's worse - you actually made a judgement that you knew more than the authors of the study.
The only judgement I made was to not read the whole paper. I read up until the paper stated that classifying sex based on retinal pictures was unlikely to be clinically useful. At which point I lost interest.
Why weren't the ML model and the clinicians classifying something that actually is clinically useful?
If it has no clinical significance, what's the relevance of the classification of the clinicians?
How is it any more spectacular than beating a random classifier?
Had these points been addressed by that point, I might have continued reading.
Because I had already spent time reading, and maybe someone could enlighten me as to why it in fact is interesting. That, and I was also hoping to get insulted.
I'd agree that if clinicians haven't been trained on this for their line work, then the comparison is not fair, but I wouldn't go so far as to say it's "useless".
No, you're right. But since there's a whole field on the subject, I figured they could have chosen something with clinical utility, and I don't really understand why they didn't.