Humans have far more room for racism when making predictions about the future than when judging things that have already happened. So while I agree that racism introduced through biased input data is a problem, I think that even with that problem, machines should be substantially less racist in their judgements than the humans they're replacing, even if they're not perfect.