The obvious solution is to make such results part of the short-term goals and to treat failure to meet the criteria as an unacceptable engineering failure.
As simple as no bonuses, no stock options, no promotions, and, most importantly, no deployment if the model produces undesirable results.
The models reflect what AI ideology directs Mechanical Turk workers to find. They reflect what is considered accurate and omit classifications that might be perceived to put Amazon Mechanical Turk’s contracts at risk.
“We” didn’t make it that way; I know that I, for one, am not using Amazon Mechanical Turk to classify images.
Even if the exploitative wage structure went away, the name itself is consistent with an AI ideology in which racism, religious intolerance, and nationalism are acceptable.