I get that questions like these are perhaps meant to be "more precise" about the specific tech in question, but honestly, they do more to display the naivete of the questioner about how this sort of thing plays out in real life, when it's not just tech folks but other policymakers and stakeholders who are involved and have the power to make decisions.
Even if the thing Apple is talking about right now can be distinguished from "facial recognition" on a technical level, it would be much MORE of a mistake to NOT lump them in together if we're trying to bring this debate to the general public, which, of course, we should.
> it would be much MORE of a mistake to NOT lump them in together if we're trying to bring this debate to the general public, which, of course, we should.
Frankly this suggests that you think it’s a good idea to mislead the general public. I think that is one of the ways we harm our public discourse. I could be missing a connection between the two that is obvious to you but not to me, in which case I apologize.
I am not naïve about facial recognition. I am not white, and I have known about ML technology since the 90s. Long before it became a known problem it was obvious to me that ML models would end up simply reflecting the biases of the corpus they were trained on, and this would lead to them embodying discrimination of one kind or another, some of which would not be obvious in advance. The perceived ‘neutrality’ of algorithms would be and in fact is invoked to minimize this problem, when of course the problem is not with the algorithms but with what people feed into them.
Please let me know if that doesn’t capture the problem with facial recognition adequately.
So, given that I’m not naïve about facial recognition, I ask again - what is the connection you see here between racially biased facial recognition and Apple’s CSAM countermeasures?
Not at all -- what I'm saying is that tech people end up misleading themselves in terms of likely outcomes by having these discussions and focusing on the hard discrete lines around this or that particular technology.
What very reliably happens is -- the people who make the decisions (who, unfortunately, are very rarely tech people) will lump them in anyway LATER, when it actually matters and it's too late.
So your point is technically correct, and simultaneously absolutely does not capture the problem with facial recognition adequately, because it doesn't factor in "if you get people to sign off on Apple's specific thing today, you'll basically be able to sign off on just about anything that sort of looks like it to the layperson" tomorrow.
Well, the ‘tech people’ line is an ad hominem. I don’t see anyone doing a great job of knowing the right move in terms of communicating about complex issues to the public. Aren’t you a tech person too?
Who do you see as ‘the people who make the decisions’ in this case? I.e. who do you imagine will lump these together? I could see the FBI saying ‘you did CSAM detection, so surely you can do child predator detection via facial recognition’. Is that what you mean?
As for getting people to sign off on stuff - I think it’s not so obvious what we do and don’t want people to sign off on. Not implementing something like this now could easily mean they are forced to scan in the cloud, and that really would be a slippery slope.
It seems like you are saying even though this thing isn’t bad, we should persuade people into not signing off on it because we want to make sure they don’t later sign off on some facial recognition thing that actually is bad. Is that an accurate enough paraphrase?
No. I am saying "my, or anyone's, evaluation of whether or not this thing today is bad is utterly meaningless, because history shows that law-enforcement-type powers literally never respect limits like these."
Again, your "tech knowledge" will not help you here. The lines you percieve between this "not-bad" thing and a future actually bad thing don't meaningfully exist.
Better to resort to simpler principles: go with the 4th amendment, slightly modified to include the tech companies. If the FBI wants to be in my stuff, they need a warrant, and that's it.
> Again, your "tech knowledge" will not help you here. The lines you percieve between this "not-bad" thing and a future actually bad thing don't meaningfully exist.
Are you saying you don’t understand the technology? That you don’t have knowledge about this subject?
> Better to resort to simpler principles: go with the 4th amendment, slightly modified to include the tech companies. If the FBI wants to be in my stuff, they need a warrant, and that's it.
If you don’t understand the technology how will you know whether it violates the 4th amendment or not?
No, I'm saying stop being a nerd. I do understand the technology just fine -- but that's entirely beside the point. A deep understanding of the technology is not necessary. A shallow understanding is sufficient.
Let's go back to a real case, Kyllo v. US. They used a thermal imaging camera to "look inside" a building without a warrant; from the unusual heat signature, they correctly presumed a marijuana grow operation.
Doesn't matter though. The court said they needed a warrant because people should be able to presume a level of privacy that the camera violated. Anything that one reasonably believes is private should be protected like that, regardless of whether you are using technology to "look" at it or not. The authorities observed a thing that a reasonable person would think of as private because they treated it that way.
Same applies here, even more so, because Apple has previously guaranteed privacy, and this, on its face, is not privacy.
The only really important gap in knowledge is the one I mentioned before: that this, too, ends up being a slippery slope.
That seems like a pretty empty thing to say at the best of times. It’s not clear how it helps.
> I do understand the technology just fine -- but that's entirely beside the point.
Do you? That remains to be seen. A shallow understanding is only sufficient if it is correct and supports your other ideas.
As for Kyllo v US, that doesn’t obviously explain anything about lumping this in with biased facial rec. We seem to have moved away from that remark.
It’s unclear what you mean when you say Apple has ‘guaranteed’ privacy.
Do you mean, they have said they will only use this technology to check for CSAM, and they won’t use it for anything else?
If so, then I agree. Now that this has been publicized, and Apple has released detailed answers to the privacy concerns, people can reasonably expect it to be used only for the purpose of preventing CSAM from being uploaded to iCloud, and not for anything else. That is a publicly documented commitment and seems like it would stand up well in court.
By your reasoning, there is no slippery slope, and this feature is exactly what Apple says it is, because they have made such public commitments. You say it isn’t privacy on its face, but why do you think that?
If the system only reports already publicly known CSAM, and is well known to do so, and is part of an opt-in service, how is that a privacy violation?
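To make concrete what ‘only reports already publicly known CSAM’ means, here is a rough sketch of matching uploads against a fixed list of fingerprints of already-identified images. This is purely an illustration of the idea, not Apple’s actual NeuralHash / private-set-intersection protocol; the names, the hash function, and the example fingerprint are placeholders I’ve made up.

    import hashlib

    def fingerprint(image_bytes: bytes) -> str:
        # A real system uses a perceptual hash so that resized or re-encoded
        # copies of the same image still match; SHA-256 stands in here purely
        # for illustration.
        return hashlib.sha256(image_bytes).hexdigest()

    # Fingerprints of known, previously identified images (placeholder value).
    KNOWN_FINGERPRINTS = {"0" * 64}

    def should_flag(image_bytes: bytes) -> bool:
        # Only matches against the known list are flagged; an image the list
        # has never seen cannot match anything, no matter what it depicts.
        return fingerprint(image_bytes) in KNOWN_FINGERPRINTS

As I understand Apple’s published description, the real matching is also done under cryptographic blinding and only surfaces anything past a threshold of matches, but that detail doesn’t change the basic point above.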
If on the other hand you are claiming that Apple has made some other blanket ‘guarantee’ of privacy and this new feature contradicts that, I’d be curious to know what guarantee you are referring to.
It’s worth noting that once detected, CSAM must be reported by statute, and other cloud providers report tens of millions of images per year. I don’t know what the status of these reports is in the courts, or whether they have been tested.
Look, you're a huge sucker if you think that the boundaries of the tech and the stated policy today are 1) not fluid and 2) here's the bigger part -- aren't there primarily for the purpose of laying the groundwork for more intrusive spying. That's the "nerd" charge. If you follow the words they are saying and treat those as gospel and limiting, you're a nerd and a sucker.
And to take it further, if I seem paranoid or whatnot -- that's fine; it's better and smarter to be wrong in my direction than it is in the sucker direction, where you can't put the toothpaste back in the proverbial tube.
Is it? I think legal insight is relevant to what we are discussing.
> Look, you're a huge sucker if you think that the boundaries of the tech and the stated policy today are 1) not fluid
Who would think that?
> and 2) here's the bigger part -- aren't there primarily for the purpose of laying the groundwork for more intrusive spying.
It’s obvious that you think that is the agenda. Calling people who aren’t as convinced as you ‘suckers’ tells us you are sure of yourself, but not much else.
> That's the "nerd" charge.
Sure, but it’s uninformative. It’s pretty obvious there are people who think what you think, so no news there.
> If you follow the words they are saying and treat those as gospel and limiting, you're a nerd and a sucker.
Agreed, but so what? If you treat the words as gospel you are a fool, but equally if you ignore them altogether you are simply ignorant.
Those aren’t the only options.
> And to take it further, if I seem paranoid or whatnot -- that's fine; it's better and smarter to be wrong in my direction than it is in the sucker direction,
I like this line of reasoning. I agree that it’s often good to take a precautionary position.
However, in this case I just think the maximally paranoid position is weak, not just rhetorically but in practice.
As for ‘intrusive spying’ being the primary purpose: that is an open question. Nobody is denying that law enforcement, and presumably intelligence agencies, want that, and will exploit what they can, in secret if they can get away with it. But they aren’t the only actors here. Is it Apple’s primary purpose? Is it NCMEC’s primary purpose?
That’s why understanding the technology matters.
> where you can't put the toothpaste back in the proverbial tube.
Now who is being a sucker about the boundaries not being fluid? The toothpaste in this case was out before the tube was ever invented.
Privacy technology is not binary, and always exists within a social context.
The state is always going to employ paranoid actors, and the public is right to be concerned about them. Law enforcement is always going to push for more power, and the public is right to want that power checked.
The rest of us, Apple included, operate in a complex and fluid environment. Defaulting to paranoia is like being a stopped clock. You’re right twice a day but you never know what time it really is.
Even if the thing Apple is talking about right now can be distinguished from "facial recognition" on a technical level, it would be much MORE of a mistake to NOT lump them in together if we're trying to bring this debate to the general public, which, of course, we should.
(don't get me started on race..)