
>>As your quote is presented as a response to mine, the argument you are making here is that if the software has a less than 100% success rate, then trying to treat people equally is pointless.

You misunderstand. I am saying that a less than 100% success rate is, by definition, an indication of people being treated unequally. Anyone for whom the software is not working has an unequal experience.

Reducing the error rate reduces inequality. Focusing on the error rate for the whole of the population is a more efficient way of reducing said inequality than focusing on reducing the error rate for a subset of the population.

>>To give an example, you write "the objective should be to maximize the number of lives that are saved with the resources available", but your position only goes so far as to do nothing about this particular case, and has not been extended to its logical conclusion, which is to redirect all expenditure on facial recognition to more cost-effective lifesaving measures.

The rest is implied. It goes without saying. Seeing it as conveying otherwise is an ungenerous, bad faith reading.

>>Repeating yourself does not somehow nullify my response to its first appearance,

I'm repeating a rebuttal to your point, which you have not responded to.

>>which was to point out that there is good evidence for bias in the case of ethnicity, race and gender, but we are seeing no evidence whatsoever for the sort of confounding problems that you make up in your so-called "thought experiment".

I've already addressed the logical shortcoming of your argument, repeatedly. You're simply ignoring the point and repeating what's been rebutted.

>>Let's consider some examples of how your point of view would play out. For example, there was recent fatal crash that revealed a corner case in Tesla's vision system, and other crashes that have revealed problems with Boeing's MCAS system.

Tesla/Boeing do not have a measurable rate of catastrophic error that can be reduced. These corner cases are the entirety of the measurable fatal errors found in the system. That's unlike facial recognition software, which has a measurable "catastrophic" (as catastrophic as errors in facial recognition software can be) error rate that can be reduced.




It is amusing to see you quote large parts of my previous post (while, as we shall see, skipping past some relevant context), and then fail to respond to the points in them. The claim that "I have already answered that" is, of course, often the last resort of the person who does not have an answer and does not want his claim examined further. It is not often used by people who actually did already answer (at least, not without quoting or referencing the specific relevant passage) because it looks so transparently evasive.

In addition, your response to the first quoted passage does not address the issue raised in its original context (which you left out of the quote). I had no difficulty understanding your point that "a less than 100% success rate is an indication of people being treated [un]equally", but as it was presented as a response to "one should *try* to treat people equally" [my emphasis here], it is formally a non-sequitur, and it also clearly seems to be saying that anything less than 100% would mean that there is no justification for that policy.

I freely admit that I don't understand (but not in bad faith, which was itself a somewhat bad-faith allegation) your response to the second quoted passage: " The rest is implied. It goes without saying" -- the rest of what goes without saying? I am afraid it does not for me.

Similarly, I am confused by the statement "Tesla/Boeing do not have a measurable rate of catastrophic error that can be reduced. These corner cases are the entirety of the measurable fatal errors found in the system." As these are, you say, measurable fatal errors, then we would seem to have the data to calculate a rate of catastrophic error that has actually occurred, and if they are capable of mitigating the problems without making others worse, it would seem that the rate would go down. In fact, I would be extremely surprised if the world's aviation regulators do not want to see some plausible figures in that regard before allowing 737 MAXs to fly commercially. I don't want to be accused of bad faith again, so I will await your response before continuing this line of analysis further.

I am also confused by why having a measurable rate in the case of facial recognition makes it different with respect to your position, as, up to now, you have been claiming that your argument does not need real-world numbers. As, however, you are now apparently saying that these measurements are available, you will no doubt be able to show that your argument is neither hypothetical nor pedantic, by presenting real-world data.

Curiously there is one issue from my previous post that you did not mention at all in your reply: the confounding effect of the law of diminishing returns.



