"My" definition: https://www.merriam-webster.com/dictionary/bias
> an inclination of temperament or outlook
> : systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others
I don't understand what you're trying to get at here, what I'm telling you is just the definition of bias that everybody uses. This is not a debatable thing, this is not some weird interpretation of bias that I made up :)
Bias is a directional skew. It just is, that is what everybody means by the word.
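To make "directional skew" concrete, here's a toy sketch (my own illustration, not anything from Mozilla's work): if your sampling procedure systematically favors one kind of outcome, the estimate doesn't just get noisy, it moves in a consistent direction away from the truth.

```python
import random

random.seed(0)
population = list(range(100))
true_mean = sum(population) / len(population)  # 49.5

# Unbiased sampling: every element equally likely.
unbiased = [random.choice(population) for _ in range(10_000)]

# Biased sampling: take the larger of two draws, which
# systematically favors bigger values -- a directional skew.
biased = [max(random.choice(population), random.choice(population))
          for _ in range(10_000)]

print(sum(unbiased) / len(unbiased) - true_mean)  # hovers near 0
print(sum(biased) / len(biased) - true_mean)      # consistently positive
```

The unbiased estimate wobbles around the true mean; the biased one lands above it every time, in the same direction. That's the dictionary definition in action: "selecting or encouraging one outcome over others."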
----
> From these tests I conclude that the systemic bias is against men, white people, and Republicans, and in favor of women, black people, and Democrats. A fair conclusion?
And we get to the heart of it, same as always. It turns out that when you prod these "philosophical" discussions what people actually mean is: "I disagree that those minority groups are oppressed, actually I'm oppressed."
It's never actually about the philosophy, you just disagree about which groups Mozilla is trying to help. It's not about the "bias", it's about which groups Mozilla thinks that LLMs demonstrate bias against. It's not about the process or the theory, it's about who the process and the theory are being applied to.
Which, whatever, you disagree with Mozilla's perspective on how the data is biased and you think that actually the bias is against you. You could save us all a lot of time by starting with that claim instead of dressing it up as some kind of nonsensical take about methodology in correcting data skew.
----
Anyway, to your nonsense gotcha questions:
1. Sex discrimination is illegal; it would be wildly inappropriate for a police department to rely on an AI that dismissed a suspect because they were a woman.
2. LLMs don't get used to choose basketball players? But if they were, yeah, it would be a problem if an LLM dismissed a resume (again, not really how basketball works) because someone had a white-sounding name.
3. I literally brought up the example of Reddit. That's not a gotcha, I brought up that if you built an LLM on Reddit data it would be biased towards calling Republicans racist. Now if you don't think that's an unfair bias and you think Republicans are actually more likely to be racist, then that's your words, not mine. My words were that if you trained an LLM on a primarily Liberal forum, it would be biased against Conservatives and there would likely be alignment training you'd need to do on that LLM.
----
Now, are any of those larger issues than systemic racism? I would argue no. I would argue that Mozilla is still absolutely correct in triaging these issues and pointing out the most harmful effects of AI today. We don't have a lot of examples of LLMs systematically harming specifically Republicans.
And I'm going to go out on a limb and say that's really the biggest thing you take issue with here -- Mozilla's prioritization of which harms and biases to focus on, because you don't think the biases Mozilla has pointed out are actually a big deal.
You brought up those gotcha questions to try and say, "look, bias against white male Conservatives is where the most harm actually occurs". And that's actually the position that we disagree on. All of the "philosophy" about bias in models is just distraction from that disagreement.