> This also seems like a good job for machine learning eventually.
I'm not sure which "this" you mean (testing the doctors or testing the patients), but I'd argue the opposite: ML is great at performing tests we already understand, and terrible at determining things we aren't sure how to determine in the first place.
In this case, either you're checking whether someone is truly disabled (which we have no good test for other than doctors, and see how that's working out), or you're checking whether a doctor's evaluation is wrong. If you can't perform the evaluation yourself, figuring out whether the doctor is skimping won't be any easier.
And of course, even if you develop a perfect test, the incentives mean people will find a way to game it, and machines aren't likely to notice that they're being played.