Yeah, unfortunately it often ends up being code for adjusting ML models to support certain worldviews or political biases.
It's too bad we haven't been able to separate the data science question of how we feel about the training data from the operational questions of (a) whether it's appropriate to make the determination algorithmically at all and (b) whether the specific model is suited to that decision. Instead we get vague statements about harms and biases.