
>Care to explain how a gradient is racist?

Sure. The natural-language equivalent of your question would be "Care to explain how words are racist?" And yes, in isolation they are just words; they possess no consciousness and cannot be racist by themselves.

Similarly, a gradient is just a vector of numbers (the partial derivatives of a loss function with respect to a model's weights). But, like language, it's what those numbers represent that matters.
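
To make that concrete, here's a toy sketch in plain Python of what a gradient literally is: the partial derivatives of a loss with respect to each weight. The loss function, weights, and data point are all made up for illustration.

    # A gradient is just a vector of numbers: the partial derivatives
    # of a loss function with respect to each model weight.
    # Toy example: squared-error loss for a two-weight linear model.

    def loss(w, x, y):
        pred = w[0] * x[0] + w[1] * x[1]
        return (pred - y) ** 2

    def gradient(w, x, y):
        # d(loss)/dw_i = 2 * (pred - y) * x_i, by the chain rule
        pred = w[0] * x[0] + w[1] * x[1]
        err = pred - y
        return [2 * err * x[0], 2 * err * x[1]]

    w = [0.5, -1.0]            # arbitrary weights
    x, y = [2.0, 3.0], 1.0     # one invented training example
    print(gradient(w, x, y))   # [-12.0, -18.0] -- just numbers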

For example, say I build a machine learning model to determine who should get a home loan, and I use gradient descent to optimize it to deny loans to the people it judges unqualified.

The gradient can easily become racist if the optimization leans heavily on something like race. Minorities tend to have lower incomes and so can be scored as less qualified than higher-income individuals. But that's the easy argument, and also quite illegal. Even if you exclude race, there are second-order variables that act as proxies for it: zip codes, job titles, whether someone rents or owns. Filtering on these is not explicitly illegal, though the end result is illegal if it excludes people of certain protected statuses. It can even be no fault of the researchers who implement the algorithm, because controlling for bias in real-world data is extremely difficult. But we must do it, because it is the ethical thing to do.
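
Here's a rough sketch of the proxy problem, with entirely synthetic, invented data (the feature names, correlations, and thresholds are made up to show the mechanism, not to reflect any real lender). The model never sees the protected attribute, yet its decisions still split along it, because zip code smuggles the signal in:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic population: a protected attribute the model never sees,
    # and a zip code that correlates with it ~90% of the time.
    group = rng.integers(0, 2, n)
    zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)
    income = rng.normal(50 + 10 * (1 - group), 10, n)  # built-in income gap

    # Historical labels encode past bias: approvals depended partly on
    # the proxy, not just on income.
    approved = (income + 15 * (1 - zip_code) + rng.normal(0, 5, n) > 55).astype(int)

    # Train WITHOUT the protected column -- only "neutral" features.
    X = np.column_stack([income, zip_code])
    pred = LogisticRegression().fit(X, approved).predict(X)

    for g in (0, 1):
        print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
    # The rates diverge even though 'group' was never a feature.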

And so it's easy to see how one can optimize ML algorithms in ways that exclude certain protected statuses, which is what can make the algorithms racist.




You failed the test. The gradient is not biased; the data is. This was of course LeCun's point... This is pure foolishness


Maybe I'm not explaining it very well. Look, things have meaning deeper than their face value. To use a really basic example: the number 14 means nothing on its own; it's just a number. The number 88 means nothing. Put together in a certain context, they mean something very not good.

There are English words that, taken individually, don't mean anything beyond their face value. But I can string those words together into sentences that mean bad things and do real harm to real humans.

Gradients are not racist by themselves; they're just math. It's like saying multiplication is racist.

But I can use multiplication as a tool in a chain: to compute weighted averages, and from there a naive Bayesian classifier that rejects people for home loans.
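
To show just how mundane that chain is, here's a minimal naive Bayes scorer in Python. Every probability below is invented for illustration; the point is that the "decision" is literally a chain of multiplications over proxy features:

    # Naive Bayes: multiply a class prior by per-feature likelihoods.
    def score(features, prior, likelihoods):
        s = prior
        for f in features:
            s *= likelihoods.get(f, 1e-6)  # multiplication is the whole trick
        return s

    # Made-up likelihoods "learned" from biased historical decisions.
    reject_l = {"zip:94601": 0.7, "renter": 0.6, "job:service": 0.55}
    approve_l = {"zip:94601": 0.2, "renter": 0.35, "job:service": 0.4}

    applicant = ["zip:94601", "renter", "job:service"]
    r = score(applicant, 0.4, reject_l)   # 0.4 * 0.7 * 0.6 * 0.55
    a = score(applicant, 0.6, approve_l)  # 0.6 * 0.2 * 0.35 * 0.4
    print("reject" if r > a else "approve")  # proxies alone decide: reject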

And so too can I misapply gradient descent as part of a larger ML model that ends up racially biased. For instance, I could choose a loss function that, when minimized, gives biased output despite less biased input. Or I could accidentally settle into a poor local minimum of the loss. There are many naive implementations of an algorithm that will simply be biased no matter how unbiased the inputs are.
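
Here's a sketch of the loss-function point, again with purely synthetic data. Plain gradient descent on the average log-loss over a 90/10 imbalanced population happily sacrifices the smaller group, so the error concentrates there even though every label is correct:

    import numpy as np

    rng = np.random.default_rng(1)
    n_maj, n_min = 9000, 1000  # 90/10 population imbalance

    # Invented groups where the same feature means opposite things.
    x_maj = rng.normal(0, 1, n_maj); y_maj = (x_maj > 0).astype(float)
    x_min = rng.normal(0, 1, n_min); y_min = (x_min < 0).astype(float)
    x = np.concatenate([x_maj, x_min])
    y = np.concatenate([y_maj, y_min])
    group = np.concatenate([np.zeros(n_maj), np.ones(n_min)])

    # Plain gradient descent on the average log-loss over everyone.
    w = b = 0.0
    for _ in range(2000):
        p = 1 / (1 + np.exp(-(w * x + b)))
        w -= 0.5 * np.mean((p - y) * x)  # d(avg log-loss)/dw
        b -= 0.5 * np.mean(p - y)        # d(avg log-loss)/db

    pred = (1 / (1 + np.exp(-(w * x + b))) > 0.5).astype(float)
    for g, name in ((0.0, "majority"), (1.0, "minority")):
        err = np.mean(pred[group == g] != y[group == g])
        print(f"{name} error rate: {err:.2f}")
    # The average loss is minimized, yet nearly all the error lands on
    # the minority group. The loss choice did that, not the gradient.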

So in summary: a gradient is just math and is not racist by itself. But it gets used inside algorithmic tool chains, frequently by researchers, that can produce biased output no matter the inputs (and more often than not the inputs are biased too).



