Well, I have given you the benefit of the doubt as to your motives here, and I continue to do so, because I will also assume that you think you are making a good argument.

> What you're actually arguing is that inequality between ethnic/racial/gender groups is more important to eliminate than inequality between other groups.

What I am actually arguing is for dealing with bias wherever we find it, and against your position that this is invalidated by the possible existence of other biases, no matter how small, and even though they remain hypothetical.

As I wrote in an earlier post, "if you have any actual evidence that the other groups you name are being measurably affected, make your data known, so we can make corrections in those cases as well." You have consistently failed to show any real-world evidence for your position. Ethics is primarily a matter of what people do in response to real-world situations, not some sort of hypothetical trolley problem from an intro philosophy course.

Your argument is tendentious in the way it arbitrarily takes certain fixed positions, such as the above insistence on not doing anything unless you can guarantee 100% success, and your insistence that maximizing the number of lives saved is the only justification for expenditure, which you take only so far as it justifies (in your view) doing nothing in the case of ethnicity, race and gender, but no further.

It is also rather telling that you seem to think that a noticeable bias with regard to gender, of all things, would be some sort of corner case (not that ethnicity and race are small issues, globally, either.)

As for your excuses in the other thread for not showing evidence: evidence does exist in the cases in question, so I will continue to regard your argument as a hypothetical one.




>>What I am actually arguing is for dealing with bias wherever we find it, and against your position that this is invalidated by possible existence of other biases, no matter how small, and even though they continue to be hypothetical.

The most efficient way to reduce all of what you call "biases", which are simply imperfections, is to reduce the total error rate. Prioritizing the error rate of one subset of the whole will be less efficient at reducing the total error rate, and that's what you do when you identify a disadvantaged group and change the development focus from reducing the total error rate to reducing the error rate for their subset.
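To put rough numbers on that (a toy sketch with made-up figures, only to make the arithmetic explicit): the total error rate is a population-weighted average of the subgroup error rates, so improving one subgroup moves the total only in proportion to that subgroup's share.

    # Toy illustration, hypothetical numbers: total error rate as a
    # population-weighted average of subgroup error rates.
    groups = {
        "group_a": (0.70, 0.05),   # (population share, error rate)
        "group_b": (0.30, 0.15),
    }
    total_error = sum(share * err for share, err in groups.values())
    print(total_error)  # 0.08; halving group_b's error alone only
                        # moves the total from 0.08 down to 0.0575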

>>You have consistently failed to show any real-world evidence for your position.

I've already addressed this argument. Repeating myself:

I can't show real-world numbers when the phenomenon in question can't have controlled experiments run on it, and I don't need to show real-world numbers to make a case for the logical soundness of a principle, in this case the principle that prioritizing improvement of a metric other than overall performance will generally lead to smaller overall performance improvements than not doing so.

>>Your argument is tendentious in the way it arbitrarily takes certain fixed positions, such as the above insistence on not doing anything unless you can guarantee 100% success,

What are you referring to?

>>and your insistence that maximizing the number of lives saved being the only justification for expenditure, only so far as it justifies (in your view) doing nothing in the case of ethnicity, race and gender, but no further.

Again I don't know what you're referring to. What do you mean when you say "only so far as it justifies [not focusing on reducing gender/racial disparities]"? You're alleging that my motivation for promoting the objective of maximizing the number of lives saved is to prevent action to reduce gender/racial disparities?


>> Your argument is tendentious in the way it arbitrarily takes certain fixed positions, such as the above insistence on not doing anything unless you can guarantee 100% success,

> What are you referring to?

It is rather surprising that you claim not to follow here, but here's just one example:

>> All ethical guidelines are, in some sense, arbitrary, but the one that says one should try to treat people equally is a pretty good one

> If the software has a less than 100% success rate, then it does not treat people equally.

As your quote is presented as a response to mine, the argument you are making here is that if the software has a less than 100% success rate, then trying to treat people equally is pointless.

> Again I don't know what you're referring to. What do you mean when you say "only so far as it justifies [not focusing on reducing gender/racial disparities]"?

Firstly, rewriting a quote is fraught with problems, especially when the actual words are just as easy to quote, and even more so as you had just quoted them accurately. In this case, they are "only so far as it justifies (in your view) doing nothing in the case of ethnicity, race and gender, but no further."

To give an example, you write "the objective should be to maximize the number of lives that are saved with the resources available", but your position only goes so far as to do nothing about this particular case, and has not been extended to its logical conclusion, which is to redirect all expenditure on facial recognition to more cost-effective lifesaving measures.

>> You have consistently failed to show any real-world evidence for your position.

> I've already addressed this argument. Repeating myself:...

Repeating yourself does not somehow nullify my response to its first appearance, which was to point out that there is good evidence for bias in the case of ethnicity, race and gender, but we are seeing no evidence whatsoever for the sort of confounding problems that you make up in your so-called "thought experiment". Your claim that you don't need to show evidence, because you have an argument in principle, leaves your position wide open to the criticisms of being unrealistic and pedantic.

> Prioritizing the error rate of one subset of the whole will be less efficient at reducing the total error rate.

Given how often the law of diminishing returns is a factor, that is not nearly the given you think it is.
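To make the effect of diminishing returns concrete (a toy model with entirely made-up numbers and an assumed halving of error per unit of effort; it illustrates the shape of the argument, nothing more): once the bulk of the population has already been heavily optimized, a unit of effort aimed at the neglected subgroup can cut the total error rate far more than the same unit spent on the already well-served cases.

    # Toy model, hypothetical numbers: each unit of effort spent on a
    # group halves that group's remaining error rate (diminishing returns).
    def err(base, effort):
        return base * 0.5 ** effort

    share_a, share_b = 0.7, 0.3      # population shares
    effort_a, effort_b = 4, 0        # effort already invested per group
    base = 0.20                      # starting error rate for both groups

    def total(ea, eb):
        return share_a * err(base, ea) + share_b * err(base, eb)

    print(total(effort_a, effort_b))      # ~0.069  status quo
    print(total(effort_a + 1, effort_b))  # ~0.064  extra unit on the
                                          #         already-optimized majority
    print(total(effort_a, effort_b + 1))  # ~0.039  extra unit on the
                                          #         neglected subgroup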

Let's consider some examples of how your point of view would play out. For example, there was a recent fatal crash that revealed a corner case in Tesla's vision system, and other crashes that have revealed problems with Boeing's MCAS system. By your argument, it would necessarily be counterproductive to do anything that attempted to mitigate either of these issues specifically. I say 'necessarily' because if it were a contingent matter, then it would be inconsistent with your claim that you don't have to show evidence for your principle being realistic.


>>As your quote is presented as a response to mine, the argument you are making here is that if the software has a less than 100% success rate, then trying to treat people equally is pointless.

You misunderstand. I am saying that a less than 100% success rate is an indication of people being treated unequally, by definition. Anyone for whom the software is not working has an unequal experience.

Reducing the error rate reduces inequality. Focusing on the error rate for the whole of the population is a more efficient way of reducing said inequality than focusing on reducing the error rate for a subset of the population.

>>To give an example, you write "the objective should be to maximize the number of lives that are saved with the resources available", but your position only goes so far as to do nothing about this particular case, and has not been extended to its logical conclusion, which is to redirect all expenditure on facial recognition to more cost-effective lifesaving measures.

The rest is implied. It goes without saying. Seeing it as conveying otherwise is an ungenerous, bad faith reading.

>>Repeating yourself does not somehow nullify my response to its first appearance,

I'm repeating a rebuttal to your point, which you have not responded to.

>>which was to point out that there is good evidence for bias in the case of ethnicity, race and gender, but we are seeing no evidence whatsoever for the sort of confounding problems that you make up in your so-called "thought experiment".

I've already addressed the logical shortcoming of your argument, repeatedly. You're simply ignoring the point and repeating what's been rebutted.

>>Let's consider some examples of how your point of view would play out. For example, there was a recent fatal crash that revealed a corner case in Tesla's vision system, and other crashes that have revealed problems with Boeing's MCAS system.

Tesla/Boeing do not have a measurable rate of catastrophic error that can be reduced. These corner cases are the entirety of the measurable fatal errors found in the system. That's unlike facial recognition software, which has a measurable "catastrophic" (as catastrophic as errors in facial recognition software can be) error rate that can be reduced.


It is amusing to see you quote large parts of my previous post (while, as we shall see, skipping past some relevant context), and then fail to respond to the points in them. The claim that "I have already answered that" is, of course, often the last resort of the person who does not have an answer and does not want his claim examined further. It is not often used by people who actually did already answer (at least, not without quoting or referencing the specific relevant passage) because it looks so transparently evasive.

In addition, your response to the first quoted passage does not address the issue raised in its original context (which you left out of the quote). I had no difficulty understanding your point that "a less than 100% success rate is an indication of people being treated unequally", but, presented as it was as a response to "one should try to treat people equally" [my emphasis here], it is formally a non sequitur, and it also clearly seems to be saying that anything less than 100% would mean there is no justification for that policy.

I freely admit that I don't understand (though not in bad faith, which was itself a somewhat bad-faith allegation) your response to the second quoted passage: "The rest is implied. It goes without saying" -- the rest of what goes without saying? I am afraid it does not for me.

Similarly, I am confused by the statement "Tesla/Boeing do not have a measurable rate of catastrophic error that can be reduced. These corner cases are the entirety of the measurable fatal errors found in the system." As these are, you say, measurable fatal errors, we would seem to have the data to calculate a rate of catastrophic error that has actually occurred, and if they are capable of mitigating the problems without making others worse, it would seem that the rate would go down. In fact, I would be extremely surprised if the world's aviation regulators do not want to see some plausible figures in that regard before allowing 737 MAXs to fly commercially. I don't want to be accused of bad faith again, so I will await your response before continuing this line of analysis further.

I am also confused as to why having a measurable rate in the case of facial recognition makes it different with respect to your position, as, up to now, you have been claiming that your argument does not need real-world numbers. As you are now apparently saying, however, that these measurements are available, you will no doubt be able to show that your argument is neither hypothetical nor pedantic, by presenting real-world data.

Curiously, there is one issue from my previous post that you did not mention at all in your reply: the confounding effect of the law of diminishing returns.



