I've been maintaining a spare phone running LineageOS precisely in case something like this happened - I love the Apple Watch and the Apple ecosystem, but this is such a flagrant abuse of their position as Maintainers Of The Device that I have no choice but to switch.
Fortunately, my email is on a paid provider (Fastmail), my photos are on a NAS, and I've worked hard to get all of my friends on Signal. While I still use Google Maps, I've been trialing OSM alternatives for a minute.
The things they've described are, in general, reasonable and probably good in the moral sense. However, I'm not sure that I support what they are implementing for child accounts (as a queer kid, I was terrified of my parents finding out). On the surface it seems good - but I am concerned about other snooping features that this portends.
However, with iCloud Photos CSAM scanning, it is also a horrifying precedent that the device I put my life into is scanning my photos and reporting on bad behavior (even if the initial dataset is the most reprehensible behavior).
I'm saddened by Apple's decision, and I hope they recant, because it's the only way I will continue to use their platform.
> with iCloud Photos CSAM scanning, it is also a horrifying precedent
I'm not so bugged by this. Uploading data to iCloud has always been a trade of convenience at the expense of privacy. Adding a client-side filter isn't great, but it's not categorically unprecedented--Apple executes search warrants against iCloud data--and can be turned off by turning off iCloud back-ups.
The scanning of children's iMessages, on the other hand, is a subversion of trust. Apple spent the last decade telling everyone their phones were secure. Creating this side channel opens up all kinds of problems. Having trouble as a controlling spouse? No problem--designate your partner as a child. Concerned your not-a-tech-whiz kid isn't adhering to your house's sexual mores? Solved. Bonus points if your kid's phone outs them as LGBT. To say nothing of most sexual abuse of minors happening at the hands of someone they trust. Will their phone, when they attempt to share evidence, tattle on them to their abuser?
Also, can't wait for Dads' photos of their kids landing them on a national kiddie porn watch list.
That's not how it works, unless you control your partner's Apple ID and you lie about their DOB when you create their account.
I created my kids' Apple IDs when they were minors and enrolled them in Family Sharing. They are now both over 18 and I cannot just designate them as minors. Apple automatically removed my ability to control any aspects of their phones when they turned 18.
> Dads' photos of their kids landing them on a national kiddie porn watch list.
Indeed, false positives are much more worrying. The idea that my phone is spying on my pictures... like, what the hell.
I recently had a friend stay with me after being abused by their partner. The partner had paid for their phone and account and was using that control to spy on them. I wish that cyber security was taught in a more practical way because it has real world consequences. And like two comments on here and it’s now clear as day how this change could be used to perpetuate abuse. I’m not sure what the right solution is, but I wish there was a tech non profit that secured victims of abuse in their communication in an accessible way to non tech people.
Most people on this platform understand Cyber Sec and OpSec relatively well. The problem is you are concerned with the people not on a platform like this who require a good learning system and ways of making it interesting to actually retain and understand.
How do you prevent photos of your kids from ending up in such a database? Perhaps you mailed grandma a photo of a nude two year old during bath time during a Covid lockdown — you know, normal parenting stuff. Grandma posted it on Facebook (accidentally, naively, doesn't matter) or someone gained access to it, and it ended up on a seedy image board that caters to that niche. A year later it's part of the big black-box database of hashes, and ping: a flag lights up next to your name on Apple's dashboard and local law enforcement is notified.
I don't know how most people feel about this, but even a false positive would seem hazardous. Does that put you on some permanent watch list in the lowest tier? How can you even know? And besides, it's all automated.
We could of course massively shift society towards a no-photo/video policy for our kids (perhaps only kept on a non-internet connected camera and hard drive), and tell grandma to just deal with it (come back after the lockdown granny, if you survive). Some people do.
And don't think that normal family photos won't get classified as CEI. What is titillating for one is another's harmless family photo.
This is implying that all the concerns about possible future uses of this technology are unreasonable slippery-slope concerns, but we're on, like, our fourth or fifth time down this slope and we've slipped every previous time, so it's not unreasonable to be concerned.
Previous times down this slope:
* UK internet filters for child porn -> opt-out filters for regular porn (ISPs now have a list of porn viewers) + mandatory filters for copyright infringement
* Google Drive filters for illegal content -> Google Drive filters for copyrighted content
* iCloud data is totally protected so it's ok to require an apple account -> iCloud in China run by government controlled data centers without encryption
* Protection against malware is important so Windows defender is mandatory unless you have a third party program -> Windows Defender deletes DeCSS
* Need to protect users against malware, so mobile devices are set up as walled gardens -> Providers use these walled gardens to prevent business models that are bad for them
The first slippery slope for this was when people made tools to do deep packet inspection and find copyrighted content during the Napster era.
That was the first sin of the internet era.
Discussing slippery slopes does nothing.
Edit: It is frustrating to see where we are going. However - conversations on HN tend to focus on the false positives, and not too much on the actual villains who are doing unspeakable things.
Perhaps people need to hear stories from case workers or people actually dealing with the other side of the coin to better make a call on where the line should be drawn.
I don't think anyone here is trying to detract from the horrors and the crimes.
My problem is that these lists have already been used for retaliation against valid criticism. Scope creep is real, and in case of this particular list, adding an item is an explicit, global accusation of the creator and/or distributor for being a child molester.
My statement was to clarify incorrect statements of the issue. Someone was worried that incorrect DoBs entered by jilted lovers would get people flagged.
I just outlined what the actual process is. I feel that discussing the actual problem leads to better solutions and discussions.
Since this topic attracts strong viewpoints, I was as brief as possible to reduce any potential target area, and even left a line supporting the slippery slope argument.
If this was not conveyed, please let me know.
Matter of fact, your response pointing out the false positive issues is a win in my book! It's better than what the parent discussion was about.
But what I am truly perplexed by is when you talk about "first they came..." and "we're not biting".
Who is we, and why wouldn't YOU agree with a position supporting a slippery slope argument?
You seem to disagree with the actions being telegraphed by Apple.
This isn't a question about condoning child abuse. It's a question of doing probabilistic detection of someone possessing "objectionable content". Not sharing, not storing - possessing. This system, once deployed, will be used for other purposes. Just look at the history of every other technology supposedly built to combat CP. They all have expanded in scope.
Trying to frame the question along the usual slippery slope arguments implicitly sets up anyone criticising the mechanism as a supporter of fundamentally objectionable content.
Sure, and I have no objection to what you are saying.
This thread however was where I was making a separate point that helps this discussion by removing confusion or assumptions on how Apple’s proposal works.
Sorry about the really long delay with the answer, the week got the better of me.
Your original post posited a reasonable question, but I felt the details were somewhat muddled. The reason I reacted and answered was that I have seen this style of questioning elsewhere before. The way you finished off was actually a little alarming: it'd be really easy to drop in with a followup that in turn would look like the other person was trying to defend the indefensible.
With my original reply I attempted to defuse that potential. The issue is incendiary enough without people willingly misunderstanding each other.
If you already control their apple account, then you already have access to this information. Your threat model can’t be “the user is already pwned” because then everything is vulnerable, always
The real problem here is that the user can't un-pwn the device, because it's the corporation that has root instead of the user.
To do a factory reset or otherwise get it back into a state where the spyware the abuser installed is not present, the manufacturer requires the authorization of the abuser. If you can't root your own device then you can't e.g. spoof the spyware and have it report what you want it to report instead of what you're actually doing.
I wish I could downvote this a million times. If someone has to seize physical control of your phone to see sexts, that's one thing; this informs the abuser whenever the sext is sent/received. This feature will lead to violent beatings of victims who share a residence with their abuser. Consider the scenario of Sally sexting Jim while Tom sits in another room of the same home waiting for the text to set him off. In other circumstances, Sally would be able to delete her texts; now violent Tom will know immediately. Apple has just removed the protection of deleting texts from victims of same-residence abusers.
Apple should be ashamed. I see this as Apple paying the tax of doing business in many of the world's most lucrative markets. Apple has developed this feature to gain access to markets that require this level of surveillance of their citizens.
> Consider the scenario of Sally sexting Jim while Tom sits in another room
Consider Sally sending a picture of a bee that Apple’s algo determines with 100% confidence is a breast while Tom sits in another room. One could iterate ad infinitum.
Parents policing the phones of 16 and 17 year olds? That's some horrifying over-parenting, Britney Spears conservatorship-level madness. Those kids have no hope in the real world.
Well, as a parent, I can tell you that some 16/17 year olds are responsible and worthy of the trust that comes with full independence. Others have more social/mental maturing to do yet and need some extra guidance. That's just how it goes.
When you write that out, the idea of getting Apple IDs for your kids doesn't sound that great.
Register your kids with a corporate behemoth! Why not!? Get them hooked on Apple right from childhood, get their entire life in iCloud, and see if they'll ever break out of the walled garden.
This is an argument for me to not start using iCloud keychain. If Apple flags my account, I don't want to lose access to literally all my other accounts.
The “child” would be alerted and given a chance to not send the objectionable content prior to alerting anyone else. Did you read how it works?
Also, a father would only land in a national registry if their child's photos are known CSAM. Simply taking a photo of your child wouldn't trigger it.
> That's not how it works, unless you control your partner's Apple ID and you lie about their DOB when you create their account.
The most annoying thing about Apple Family sharing is that in order to create accounts for people you must specify that they are under 13 (source: https://www.apple.com/lae/family-sharing) - otherwise the only other option is for your "family member" to link their account to the Apple Family which is under your purview, which understandably many people might be hesitant to do because of privacy concerns (as opposed to logging into the child account on a Windows computer exclusively to listen to Apple Music - which doesn't tie the entire machine to that Apple ID as long as it's not a mac).
And so in my case, I have zero actual family members in my Apple Family (they're more interested in my Netflix family account). It begs the question, why does Apple insist on having people be family members in order to share Apple Music? We have five slots to share, and they get our money either way. They also don't let you remove family members - which may be the original intent for insisting on such a ridiculous thing - as if they're trying to take the moral high ground and guilt trip us for disowning a family member when in fact it simply benefits them when a fallout occurs between non-family members, because there's a good chance that the person in question will stop using the service due to privacy concerns, and that's less traffic for Apple.
It's actually kind of humorous to think that I still have my ex-ex-ex-girlfriend in my Apple Family account, and according to Apple she's 11 now (in reality, she's in her 30s). I can't remove her until another 7 years pass (and even then it’s questionable if they’ll allow it, because they might insist that I can’t divorce my “children”). And honestly, at this point I wouldn’t even remove her if I could, she has a newborn baby and a partner now, and I’m happy to provide that account, and I still have two unused slots to give away. I’ve never been the type of person who has a lot of friends, I have a few friends, and one girlfriend at a time. But the thing is she’s never been a music person and I assume that she isn’t even using it - and so even if I made a new best friend or two and reached out to her to let her know that I wanted to add them, Apple currently wouldn’t let me remove her to make room for those theoretical friends. While I'm a big fan of Apple hardware, it really bothers me that a group of sleazy people sat around a table trying to figure out how to maximize income and minimize network traffic, and this is what they came up with.
Did you ever stop to consider whether licensing has anything to do with this? You also lied about someone's age when creating their Apple account, and you continue to provide access to someone outside your family. Call them and ask to remove her, and they'll likely ban you for violating the ToS.
Curious, how would licensing affect this? Would the assumption be that everyone resides under the same roof? Because that's not a requirement for being in a family.
No. Generally, to license content you pay a fee per screen as well as a fee per viewer. In the case of families this is calculated by the number of people in the account. They don't charge you per person; they charge a flat rate based on the maximum number of people you can add to your account. Doing it this way ensures you're not circumventing the per-viewer fee they are charged, which is what you're effectively trying to get them to pay for you for free.
> They don't charge you per person; they charge a flat rate based on the maximum number of people you can add to your account. Doing it this way ensures you're not circumventing the per-viewer fee they are charged, which is what you're effectively trying to get them to pay for you for free.
I'm confused, how am I trying to get them to provide anything for free? I pay for the service, and that service has a limited number of account slots, and the people using those slots have their own devices. What am I missing?
Are you under the assumption that child accounts don't occupy a slot, and are free-riding? If so, that's not the case. Child accounts occupy a slot all the same, the only difference is that by providing child accounts to my adult friends, they aren't required to link their existing Apple accounts to the service that's under my control.
Moving the scanning to the client side is clearly an attempt to move towards scanning content which is about to be posted on encrypted services, otherwise they could do it on the server-side, which is "not categorically unprecedented".
Even open source operating systems have closed source components, and unless you're in charge of the entire distribution chain you can't be sure the source used to compile it was the same as what was shared with you. On top of that, most devices have proprietary systems inside their hardware that the OS can't control.
So it would be better to say "this has been a risk by using modern technology".
"Things are currently bad, therefore give up" isn't satisfactory.
Even if everything is imperfect, some things are more imperfect than others. If each component that makes you vulnerable has a given percent chance of being used against you in practice, you're better off with one than six, even if you're better off with none than one.
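Back-of-the-envelope, with made-up numbers: if each component independently had a 5% chance of being used against you, one component leaves you at 5%, while six leave you at 1 - 0.95^6, roughly 26%. The exact figures are invented; the compounding is the point.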
Just to add to this: trying to compile an open source Android distro is a tricky proposition that requires trusting several binaries and a huge source tree.
Moreover, having a personal route to digital autonomy is nearly worthless. To protect democracy and freedom, practically all users need to be able to compute securely.
There's a classic Ken Thompson talk about Trust where he shows how a compiler could essentially propagate a bug forward even after the source code for that compiler was cleaned up.
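A toy sketch of the idea, in Python rather than Thompson's actual binary C compiler, with made-up trigger strings and a placeholder where the self-propagating payload would go:

    # Toy illustration of the "trusting trust" attack (illustrative only).
    BACKDOOR = '    if password == "letmein": return True  # injected\n'

    def toy_compile(source: str) -> str:
        """Pretend compiler: 'compiling' just means returning the source text."""
        out = source
        # 1. Compiling the target program? Slip a master password into it.
        if "def check_password(" in source:
            lines = out.splitlines(keepends=True)
            idx = next(i for i, line in enumerate(lines) if "def check_password(" in line)
            lines.insert(idx + 1, BACKDOOR)
            out = "".join(lines)
        # 2. Compiling a clean copy of the compiler itself? Append a marker that
        #    stands in for re-inserting this whole injection logic, which is how
        #    the infection would survive after the published source is scrubbed.
        if "def toy_compile(" in source and "BACKDOOR" not in source:
            out += "\n# ...self-propagating payload would be re-inserted here...\n"
        return out

Step 2 is the hard, quine-like part: the payload has to recognize the compiler's own source and patch it without breaking anything.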
It's a concept that relies on a great deal of magic to function properly. The binary-only compiler we have must insert code to propagate the infection. To do so it must know it is compiling a compiler, and understand precisely how to affect code generation, without breaking anything. That... feels like an undecidable problem to me.
Sure, it's undecidable (you can reduce a decider for the word problem to that pretty easily), but in practice you probably only have to recognize a few common compilers to do a good enough job.
But the viral property that makes this attack so insidious is, in the end, not sufficiently viral: the compiler will eventually drift away (as a consequence of it being developed further, of course) from the patterns it recognizes. The attack is not as dangerous as it sounds.
I'm not really arguing that too much, since it's ultimately a pretty elaborate and finicky thing, although I'd wager that you could find patterns to recognize and modify clang/gcc/other relatively stable compilers for a long time to come (assuming no active mitigations against it).
On most hardware, the closed source components are optional.
For example the only closed source component that I use is the NVIDIA driver, but I could use Nouveau, with lower performance.
The real problem is caused by the hardware backdoors that cannot be controlled by the operating systems and which prevent the full ownership of the devices, e.g. the System Management Mode of Intel/AMD, the Intel ME and the AMD PSP.
On most hardware, the closed source components come pre-installed. Just walk into any store and find a phone with a verifiable open source distro.
The real problem is that most people don't have the knowledge to compile a distro and load it onto a phone. Most people don't even know that's a possibility or that the distro on their phone isn't open source.
Photosync can automatically move photos from iDevices into consolidated NAS, SFTP, cloud or iXpand USB storage, https://photosync-app.com
GoodReader has optional app-level file encryption with a password that is not stored in the iOS keychain. In theory, those encrypted files should be opaque to device backups or local filesystem scanning, unless iOS or malware harvests the key from runtime memory, https://goodreader.com/
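A minimal sketch of that general approach (a password-derived key that only ever lives in memory), using Python's cryptography package purely for illustration; this is not GoodReader's actual code and the file layout is invented:

    # Encrypt a file with a key derived from a password at runtime.
    # The password and key are never written to disk; backups and filesystem
    # scanners only see salt + ciphertext. Memory scraping remains the
    # residual risk, as noted above.
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(password: str, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    def encrypt_file(path: str, password: str) -> None:
        salt = os.urandom(16)             # stored alongside the ciphertext
        key = derive_key(password, salt)  # exists only in memory
        with open(path, "rb") as f:
            ciphertext = Fernet(key).encrypt(f.read())
        with open(path + ".enc", "wb") as f:
            f.write(salt + ciphertext)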
If The Verge's article is accurate about how and when the CSAM scanning occurs, then I don't have a problem with that; it sounds like they're moving the scanning from server to client side. The concerns about false positives seem valid to me, but I'm not sure the chance of one occurring has increased over the existing iCloud scanning. Scope creep into other content scanning is definitely a possibility though, so I hope people keep an eye on that.
I'm not a parent, but the other child protection features seem like they could definitely be abused by some parents to exert control and pry into their kids' private lives. It's a shame that systems have to be designed to prevent abuse by bad people, but at Apple's scale it seems like they should have better answers for the concerns being raised.
The CSAM scanning is still troubling because it implies your own device is running software against your own self-interest. If Apple wanted to get out of legal trouble by not hosting illegal content but still make sure iOS is working in the best legal interest of the phone's user, they'd prevent the upload of the tagged pictures and notify that they refuse to host these particular files. Right now, it seems like the phone will actively be snitching on its owner. I somehow don't have the same problem with them running the scan on their servers since it's machines they own but having the owner's own property work against them sets a bad precedent.
It would be easy to extend this to scan for 'wrongthink'.
Next logical steps would be to scan for: confidential government documents, piracy, sensitive items, porn in some countries, LGBT content in countries where it's illegal, etc... (and not just on icloud backed up files, everything)
This could come either via Apple selling this as a product or forced by governments...
I'd guess more like 6 months, but I agree that it will be trivial for the CCP to make them fall in line by threatening to kick them out of the market. Although... maybe they already have this ability in China.
> It's a shame that systems have to be designed to prevent abuse by bad people but at Apple's scale it seems like they should have better answers for the concerns being raised.
Well the obvious response is that these systems don't have to be designed. Child abuse is a convenient red herring to expand surveillance capabilities. Anyone opposing the capability is branded a child molester. This is the oldest trick in the book.
I mean the capability to spy on your kid can easily be used to abuse them. Apple could very well end up making children's lives worse.
Luca isn't an Apple app is it? And I thought the system Apple developed with Google had much better privacy guarantees? Although I don't think it was ever actually deployed.
> it sounds like they're moving the scanning from server to client side. The concerns about false positives seem valid to me, but I'm not sure the chance of one occurring has increased over the existing iCloud scanning.
That's the irony in this: This move arguably improves privacy by removing the requirement that images be decrypted on the server to run a check against the NCMEC database. While iCloud Photo Library is of course not E2E, in theory images should no longer have to be decrypted anywhere other than on the client under normal circumstances.
And yet – by moving the check to the client, something that was once a clear distinction has been blurred. I entirely understand (and share) the discomfort around what is essentially a surveillance technology now running on hardware I own rather than on a server I connect to, even if it's the exact same software doing the exact same thing.
Objectively, I see the advantage to Apple's client-side approach. Subjectively, I'm not so sure.
If Apple has the ability to decrypt my photos on their servers, why do I care whether or not they actually do so today? Either way, the government could hand them a FISA warrant for them all tomorrow.
That's an interesting way to look at it. Funny how this news can be interpreted as both signaling Apple's interest in E2EE iCloud Photos and weakening their overall privacy stance.
My issue with my own statement is that we have yet to see plans for E2EE Photos with this in place - if Apple had laid this out as their intention on apple.com/child-safety/ it would have been clear-cut.
Yeah, but add whatever document/meme/etc. represents the group you hate, and boom, you have a way to identify operatives of a political party.
E.g., anyone with "Feel the Bern" marketing material -> arrest them under suspicion of CSAM, search their device, and mark them as dissidents.
The reality is that actual child abusers know who they are. They realize that society is after them. They are already paranoid, secretive, people. They are not going to be uploading pictures to the cloud of their child abuse.
And let's not forget the minor detail that this is now public knowledge. It's like telling your teenage son you're going to be searching his closet for marijuana in the future.
This is way too much work to gain hardly anything. It's just as easy to just log into another device with their iCloud password and literally read everything they send. Less work, more result.
First, machine learning to detect potentially inappropriate pictures for children to view. This seems to require parental controls to be on. Optionally it can send a message to the parent when a child purposefully views the image. The image itself is not shared with Apple so this is notification to parents only.
The second part is a list of hashes. So the Photos app will hash images and compare to the list in the database. If it matches then presumably they do something about that. The database is only a list of KNOWN child abuse images circulating.
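Mechanically, that second part is essentially a set-membership check. A rough sketch (names invented; Apple's announced system uses a perceptual "NeuralHash" with a match threshold, not a plain cryptographic digest like this):

    # Rough sketch of client-side matching against a list of known hashes.
    import hashlib
    from pathlib import Path

    # In the real system this is an opaque, on-device database; these
    # placeholder values are obviously fake.
    KNOWN_HASHES = {"placeholder_hash_1", "placeholder_hash_2"}

    def image_hash(path: Path) -> str:
        # Stand-in for a perceptual hash. An exact digest like SHA-256 would be
        # defeated by any resize or re-encode, which is why perceptual hashing
        # is used instead - and why collisions/false positives are even possible.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def flagged(photo_dir: Path) -> list[Path]:
        return [p for p in photo_dir.glob("*.jpg") if image_hash(p) in KNOWN_HASHES]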
Now, not to say I like the second part but the first one seems fine. The second is sketchy in that what happens if there’s a hash collision. But either way it seems easy enough to clear that one up.
No father is going to be added to some list for their children’s photos. Stop with that hyperbole.
This is Apple installing code on their users' devices with the express intent to harm their customers. That's it! This is inarguable! If this system works as intended, Apple is knowingly selling devices that will harm their customers. We can have the argument as to whether the harm is justified, whether the users deserved it. Sure, this only impacts child molesters. That makes it ok?
"But it only impacts iCloud Photos". Valid! So why not run the scanner in iCloud and not on MY PHONE that I paid OVER A THOUSAND DOLLARS for? Because of end-to-end encryption. Apple wants to have their cake and eat it too. They can say they have E2EE, but also give users no way to opt-out of code, running on 100% of the "end" devices in that "end-to-end encryption" system, which subverts the E2EE. A beautiful little system they've created. "E2EE" means different things on Apple devices, for sure!
And you're ignoring (or didn't read) the central, valid point of the EFF article: Maybe you can justify this in the US. Most countries are far, far worse than the US when it comes to privacy and human rights. The technology exists. The policy has been drafted and enacted; Apple is now alright with subverting E2EE. We start with hashes of images of child exploitation. What's next? Tank Man in China? Photos of naked adult women, in conservative parts of the world? A meme criticizing your country's leader? I want to believe that Apple will, AT LEAST, stop at child exploitation, but Apple has already destroyed the faith I held in them, only yesterday, in their fight for privacy as a right.
This isn't an issue you can hold a middleground position on. Encryption doesn't only kinda-sorta work in a half-ass implementation; it doesn't work at all.
> So the Photos app will hash images and compare to the list in the database.
I am wondering what hashes are in this database now and what will be added later. Or combine it with a Pegasus exploit: put a few bad images on a journalist's or politician's iPhone, clean up the tracks, and wait for Apple and the FBI to destroy the person.
I kept the Lineage phone in my back pocket, confident that it would be a good 4-5 years before they shipped something that violated their claims. I figured that by then the alternatives would be stable and widespread.
> with the express intent to harm their customers.
This of course gets into 'what even is harm?' since that's a very subjective way of classifying something, especially when you try to do it on behalf of others.
For CSAM you could probably assume that "everyone this code takes action against would consider doing so harmful", but _consequences in general are harmful_ and thus you could make this same argument about anything that tries to prevent crime or catch criminals instead of simply waiting for people to turn themselves in. You harm a burglar when you call for emergency services to apprehend them.
> This isn't an issue you can hold a middleground position on. Encryption doesn't only kinda-sorta work in a half-ass implementation; it doesn't work at all.
This is the exact trap that the U.S. has become entrenched in - thinking that you can't disagree with one thing someone says or does while agreeing with other things they say or do. You can support Apple deciding to combat CSAM. You can not support Apple for trying to do this client-side instead of server-side. You can also support Apple for taking steps towards bringing E2EE to iCloud Photos. You can also not support them bowing to the CCP and giving up Chinese citizens' iCloud data encryption keys to the CCP. This is a middle ground - and just because you financially support Apple by buying an iPhone or in-app purchases doesn't mean you suddenly agree with everything they do. This isn't a new phenomenon - before the internet, we just didn't have the capacity to know, in an instant, the bad parts of the people or companies we interfaced with.
You do harm a burglar when you call for emergency services; but the burglar doesn't pay for your security system. And more accurately: an innocent man pays for his neighbor's security system, which has a "one in a trillion" chance of accusing the innocent man of breaking in, totally randomly and without any evidence. Of course, the chances are slim, and he would never be charged with breaking in if it did happen, but would you still take that deal?
I've seen the "right against unreasonable search and seizure" Americans hold quoted a bit during this discussion. Valid, though to be clear, the Constitution doesn't apply to private company products. But more interestingly: what about the right against self-incrimination? That's what Apple is pushing here; that by owning an iPhone, you may incriminate yourself, and it may end up happening whether you're actually guilty or not.
Regarding your second paragraph on the legality: Apple doesn't incriminate you even if they send the image off and an image reviewer deems something CSAM. If Apple does file a police report on this evidence or otherwise gives evidence to the police, the police will still have to prove that (A) the images do indeed depict sexually suggestive content involving a minor, and (B) you did not take an affirmative defense under 18 USC 2252A (d) [0], aka they would have to prove that you had 3 or more actual illegal images and didn't take reasonable steps to destroy the images or immediately report them to law enforcement and give such law enforcement access to said photos.
The biggest issue with this is, of course, that Apple's accusation is most certainly going to be enough evidence to get a search warrant, meaning a search and seizure of all of your hard drives they can find.
Based off of your A and B there, I think we’re about to see a new form of swatting. How many people regularly go through all of their photos? Now if someone pisses someone else off and has physical access to their phone they just need to add 3 pictures to the device with older timestamps and just wait for the inevitable results.
The OP gets away with that argument because many people who have such images are, hopefully, also minors themselves.
However, this is NOT the use case being applied here. Holding those images, which are not part of known CP, will not be an issue; bringing it up is a red herring. The issue most people have fruitfully started discussing is the scanning of content on your own phone.
Secondly - the correlation between holding known CP and child molestation IS, sadly, high.
I think Apple has always installed software on their users' devices with explicit intent to harm their customers.
This instance just makes it a little bit more obvious what the harm is, but not enough to harm Apple's bottom line.
Eventually Apple will do something that will be obvious to everyone but by then it will probably be too late for most people to leave the walled garden (prison).
There is no E2E encryption of iCloud photos or backups, and they never claimed to have that (except for Keychain) - the FBI stepped in and prevented them from doing so years ago.
"We are so sorry that we raided your house and blew the whole in your 2 years old baby, but the hash from one of the pictures from your 38,000 photos library, matched our CP entry. Upon further inspection, an honest mistake of an operator was discovered, where our operator instead of uploading real CP, mistakenly uploaded the picture of a clear blue sky".
PS. On a personal note, Apple is done for me. Stick a fork in it. I was ready to upgrade after September especially since I heard touch-ID is coming back and I love my iPhone 8. But sure as hell this sad news means i8 is my last Apple device.
> the Photos app will hash images and compare to the list in the database. If it matches then presumably they do something about that. The database is only a list of KNOWN child abuse images circulating.
This seems fine as it's (a) being done on iCloud-uploaded photos and (b) replacing a server-side function with a client-side one. If Apple were doing this to locally-stored photos on iCloud-disconnected devices, it would be nuts. Once the tool is built, expanding the database to include any number of other hashes is a much shorter leap than compelling Apple to build the tool.
> it seems easy enough to clear that one up
Would it be? One would be starting from the point of a documented suspicion of possession of child pornography.
> Would it be? One would be starting from the point of a documented suspicion of possession of child pornography.
I've actually witnessed someone go through this after someone else got caught with these types of images and attempted to bring others down with him. It's not easy. It took him over a year of his life, constantly calling and asking when the charges would be dropped. They even image your devices on the spot, yet still take them and stuff them in an evidence locker until everything is cleared up. You're essentially an outcast from society while this is pending, as most people assume that if you have police interest related to child pornography you must be guilty.
Okay. Keep going with the scare tactics. Clearly you missed the real point of you being incredibly hyperbolic
I’d be happier if Apple wasn’t doing this at all. I’m not defending them necessarily but I am calling bullshit on your scare tactics. It’s not necessary.
> as a queer kid, I was terrified of my parents finding out
I think many queer people have a completely different idea of the concept of "why do you want to hide if you're not doing anything wrong" and the desire to stay private. Especially since anything sexual and related to queerness is way more aggressively policed than hetero-normative counterparts.
Anything "think of the children" always has a second-order effect of damaging queer people, because lots of people still think of queerness as dangerous to children.
It is beyond likely that lots of this monitoring will catch legal/safe queer content - especially the parental-controls focused monitoring (as opposed to the gov'ment db of illegal content)
> Anything "think of the children" always has a second-order effect of damaging queer people, because lots of people still think of queerness as dangerous to children.
For example, YouTube does this with some LGBT content. YouTube has demonitized LGBT content and placed it in restricted mode, which screens for "potentially mature" content[1][2].
YouTube also shadowbans the content[1], preventing it from showing up in search results at all.
From here[1]:
> Filmmaker Sal Bardo started noticing something strange: the views for his short film Sam, which tells the story of a transgender child, had started dipping. Confused, he looked at the other videos on his channel. All but one of them had been placed in restricted mode — an optional mode that screens “potentially mature” content — without YouTube informing him. In July of that year, most of them were also demonetized. One of the videos that had been restricted was a trailer for one of his short films; another was an It Gets Better video aimed at LGBTQ youth. Sam had been shadow-banned, meaning that users couldn’t search for it on YouTube. None of the videos were sexually explicit or profane.
How, how is it even morally good?? Will they start taking pictures of your house to see if you store drugs under your couch? Or cook meth in your kitchen??
What is moral is for society to be in charge of laws and law enforcement. This vigilante behavior by private companies who answer to no one is unjust, tyrannical and just plain crazy.
Unfortunately with SafetyNet, I feel like an investment into Android is also a losing proposition...I can only anticipate being slowly cut off from the Android app ecosystem as more apps onboard with attestation.
We've collectively handed control of our personal computing devices over to Apple and Google. I fear the long-term consequences of that will not be positive...
1) Google doesn't release devices without unlockable bootloaders. They have always been transparent in allowing people to unlock their Nexus and Pixels. Nexus was for developers, Pixels are geared towards the end user. Nothing changed with regards to the bootloaders.
2) Google uses Coreboot for their ChromeOS devices. Again, you couldn't get more open than that if you wanted to buy a Chromebook and install something else on it.
3) To this day, app sideloading on Android remains an option. They've even made it easier for third party app stores to automatically update apps as of Android 12.
4) AOSP. Sure, it doesn't have all the bells and whistles of the latest and greatest packaged-up skin and OS release, but all of the features that matter within Android, especially if you're going to de-Google yourself, are still there.
Take any one of those points, or better, consider all four, and I have trouble understanding why people think "REEEEEEEE Google."
So you can't play with one ball in the garden (SafetyNet), you've still got the rest of the toys. That's a compromise I'm willing to accept in order to be able to do what I want to and how I want to do it. (Eg, Rooting or third party roms.)
If you don't like what they do on their mobile OS, there's nothing that Google is doing to lock you into a Walled Garden to where the only option you have is to completely give up what you're used to...
...Unlike Apple. Not one iOS device has been granted an unlockable bootloader. Ever.
> Google doesn't release devices without unlockable bootloaders. They have always been transparent in allowing people to unlock their Nexus and Pixels.
True but misleading. If you unlock your bootloader, you can no longer use a lot of apps, including Snapchat, Netflix, Pokemon Go, Super Mario Run, Android Pay, and most banking apps. And before you say this isn't Google's fault, know that they provide the SafetyNet API, which has no legitimate, ethical use cases, and is what allows all of the aforementioned apps to detect whether the device has been modified, even if the owner doesn't want that.
This really depends on the apps. I have used over 10 banking apps on an Android phone with an unlocked bootloader without ever encountering any issues. On a device rooted using Magisk, the MagiskHide masking feature successfully bypasses the apps' root checks in my experience.
You're right that more advanced forms of hardware attestation would defeat the masking if Google eventually implements them.
I'm hoping that Microsoft's support for Android apps and integration with Amazon Appstore in Windows 11 will hedge against Google's SafetyNet enforcement by making an alternative Android ecosystem (with fewer Google dependencies) more viable. Apps that require SafetyNet would most likely not work on Windows 11.
Obviously anecdotal, but literally none of those examples are things I care to use on my phone anyway. Over time, my phone has just become a glorified camera with some messaging features.
I've used banking apps and Google pay on my rooted unlocked phone for several years now. True, I'm still on Android 9, so perhaps it will be worse when I upgrade.
Using Magisk and Magisk Hide.
Though oddly enough, none of my banking/credit card apps make an issue of being rooted, so they're not even in the Magisk Hide list.
That is likely to change in the near future. Hardware attestation of bootloader state is increasingly available. This is currently bypassed by pretending to be an older device that doesn't possess that capability. As long as device bootloaders continue to differentiate between stock and custom OS signing keys it won't be possible to bypass SafetyNet.
Yeah, it seems you are right. I haven't been actively tracking the custom ROM market, but it seems Google is trying really hard to achieve widespread hardware attestation. Or they could just be waiting until all the old devices are off the market, so all of the "Hardware attestation: Unsupported" response cases can be marked as UnlockedBootloader with great confidence.
SafetyNet also exists to prevent people from running Android apps on platforms other than Android. You can't use SafetyNet-enabled apps on Anbox, which is what SailfishOS uses as their Android compatibility layer, nor on emulators.
If you wanted to do a WSL but for Android, SafetyNet guarantees many apps won't work.
It also puts alternative Linux-based mobile operating systems, like SailfishOS or postmarketOS, at a disadvantage because they won't be able to run certain Android apps for no real technical reason other than the protection of Google's money firehose.
For instance: The McDonald's app uses SafetyNet and won't run on an unlocked device.[1] Google doesn't place any restrictions on which types of apps can use SafetyNet. Banking apps tend to use it, but so do an increasing number of apps that clearly shouldn't need it.
(For the record, I don't think SafetyNet should exist at all, but if Google is pretending it's for the user's security and not just to allow developers to make it harder to reverse engineer their fast food apps, they should at least set some boundaries.)
It's frustrating that Google has fostered an ecosystem where not all "Android apps" work on vanilla Android.
I think a system to verify the integrity of the operating system and make the user aware of any changes is a Good Thing. Of course, the user should be in control of what signing keys are trusted and who else gets access to that information.
Instead, what Google has done is allowed app developers to check that the user isn't doing anything surprising - especially unprofitable things like blocking ads or spoofing tracking data. Since Google profits from ads and tracking, I must assume a significant part of their motivation is to make unprofitable behavior inconvenient enough most people won't do it.
"1) Google doesn't release devices without unlockable bootloaders. They have always been transparent in allowing people to unlock their Nexus and Pixels. Nexus was for developers, Pixels are geared towards the end user. Nothing changed with regards to the bootloaders."
This is not accurate. Pixels that come from Verizon have bootloaders that cannot be fully unlocked.
That's because Verizon doesn't want you using a discounted phone with another carrier. If they let you unlock your phone, you could flash a stock radio and ditch Verizon for Google Fi or AT&T. Different issue at play.
As long as you buy a Pixel directly from Google or one of a few authorized resellers, it is unlockable. (I recommend B&H; they help you legally avoid the sales tax.) You can also use a Pixel you buy from Google with Verizon.
Not to nitpick here, but there is no way any device you buy from Verizon is discounted, regardless of what they advertise. Everyone pays _full_ price for any device they get on contract or payment plan.
Back when contract pricing was a more regular thing, I ended up doing the math on the plan rate after I requested for the device contract subsidy to be removed as I didn't want to upgrade the device. I had a Droid DNA at the time.
The monthly rate dropped by $25 just to keep the same device. (Nevermind that I had to ASK for them to not continue to charge me the extra $25/mo after 2 years)
$25 a month for 24 months is $600.
The device on contract was $199.
Full retail price if you didn't opt in for a 2 year contract when getting it? $699.
So I ended up paying an extra $100 for the device than if I had just bought it outright.
Even if the offerings/terms are different now... Verizon, regardless of how they market anything, absolutely makes you pay full price (and then some) for the device you get at 'discount.'
It's funny now that we're seeing people being able to BYOD to Verizon these days and AT&T is the one engaging in aggressive whitelisting.
Other carriers will provide a bootloader unlock code to you on request once the device is paid off. As far as I know, Verizon refuses to do so under any circumstances for any device.
I didn't check HN for a while so chances are no one will ever see this response. Nonetheless! I am well aware that bootloader and network locks are different things.
In many cases you have to get an authorization code from the carrier that sold the device in order to unlock the bootloader. That may or may not involve retrieving a code from your device, and it may or may not also involve interacting with the OEM. It depends on the details negotiated between the carrier and the OEM.
For example, T-Mobile sells devices that are both bootloader and network locked but (for some devices) provides a process to unlock both of those once certain criteria have been met (length of device ownership, account standing, etc). To be perfectly clear, for devices sold by T-Mobile they generally have to authorize you somehow before the OEM will send you a bootloader unlock code.
> Except, uh, GPS. Even for third party navigation apps.
AOSP does support GPS without needing any additional software, but does not have built-in support for Wi-Fi and cell tower triangulation. As you mentioned, UnifiedNlp (bundled with microG) optionally supports triangulation using a variety of location providers (including offline and online choices) for a faster location lock.
Agreed, it's shitty of Google to have moved so much functionality into its proprietary Play Services. The Push Notifications API being in it bothers me even more. Unfortunately, until Linux mobile operating systems catch up in functionality, I'm going to stick with GrapheneOS.
Windows, for all the shit they do to antagonize users, does let you choose what programs you install on your PC without forcing you to use an app store.
The only real way to avoid this is to sandbox all Windows and macOS systems and only run them from Linux hosts, but you're still taking a performance hit when you do this and sometimes usability just isn't up to par with the other two.
Correct me if I'm wrong, but I believe Microsoft enforces certificate checks, meaning you need to buy certificates regularly and be in good standing with the company so that the apps you signed with the certificates will run on Windows without issues. I believe Defender can act similarly to Apple's Gatekeeper.
I think on Windows it's not only based on a missing signature. I sometimes get the "this file may damage your computer" message. There's also an "ignore" button hidden below a "more" button, but in the end it lets you run the file. But it doesn't always happen. [0]
It's not very user friendly, but it might be a bit more intuitive than Apple's special dance of right-click -> Open to bypass said controls.
---
[0] For example, the Prometheus exporter for Windows x64 is not signed and doesn't trigger the alert. I can download it (no alert), click open (no alert), and it runs. The 32-bit version does have a "this may damage your computer" alert in the browser (Edge).
I don't think it's implausible that I carry around a phone that has mail, contacts, calendars, photos, and private chat on it. And then, have a second, older phone that has like Instagram and mobile games. It's tragic.
Unfortunately a big bulk of the data they profit off of is simply the ads and on-platform communication and behavior. Doesn't really matter if you use a different device if you still use the platform. Sure, it's slightly better, but it really isn't a silver bullet if you're still using it. And this is coming from someone who does this already.
I don't really mind if they make a profit off of the free things I use.
What I mind is when my personal life, the stuff that _actually_ matters, is being monitored or has a backdoor that allows ANY third party easy access to monitor it.
Yes, my history was Linux 95-04, Mac 04-15, and now back to Linux from 2015 onwards.
It's been clear Tim Cook was going to slowly harm the brand. He was a wonderful COO under a visionary CEO-type, but he holds no particular "Tech Originalist" vision. He's happy to be part of the BigTech aristocracy, and probably feels really at home in the powers it affords him.
Anyone who believes this is "just about the children" is naive. His Chinese partners will use this to crack down on "Winnie the Pooh" cartoons and the like... before long, questioning any Big Pharma product will result in being flagged. Give it 5 years at max.
I don’t think anyone is arguing that making it harder to abuse children is a bad thing. It’s what is required to do so that is the bad thing. It’d be like if someone installed microphones all over every house to report on when you admit that you’re guilty to bullying. No one wants bullying, but I doubt you want a microphone recording everything and looking for certain trigger words. Unless you have an Alexa or something, then I guess you probably wouldn’t mind that example.
Alexa and iPhones with Siri enabled, and Android phones, are all continuously listening with their microphones for their wake word, unless you've specifically turned the feature off.
The difference is that the Alexa connects to your wifi, so if you wanted to, you could trivially tell if it's communicating when it shouldn't be. When I worked at Amazon, I was given the impression that the system that handles detecting the wake word was implemented in hardware, and the software system that does the real speech recognition doesn't "wake up" or gain access to the audio channel unless the wake word is detected by that hardware system -- and it's very obvious when you've woken it up (the colored ring lights up, it speaks, etc.)
Echo devices also sit in one room. If you're like most people you take your phone everywhere, which means that if it's spying on you, it could literally have a transcript of every word you spoke the entire day, as well as any people you've been around. To make matters worse, it would be difficult to tell if that was happening. Unless you're an uber-hacker who knows how to root an iPhone, or a radio geek who knows enough to monitor their device's cellular transmissions, good luck figuring out whether Siri is listening to and passing on audio that it shouldn't. The problem is that phones have so many apps and responsibilities -- given that they are essentially full computers -- these days that nonstop data transfer on a wifi network from my phone wouldn't be alarming: it might be backing up pictures to a cloud, or syncing the latest version of apps, etc.
I think the dedicated devices like Echo/Alexa are what you should buy if you're the most privacy-sensitive, since they have zero reason to be uploading to the Internet unless you're actively talking to them, and they have zero reason to be downloading unless they're receiving a software patch, which should be very rare. And because they're on your wifi (not cell) you can monitor their network traffic very easily.
It's not unreasonable to expect the speech recognition models to be run locally.
As to the wake word point, I agree. I don't think alexa/siri/etc are currently bad or disrespecting privacy. I actually have a smart home with a voice assistant.
However, my smart home is all local mesh network (zwave and zigbee) based through a FOSS bridge that doesn't talk to the internet. All lights are through smart switches, not bulbs. The end result is such that if the voice assistant service ever pisses me off, I can simply disconnect from it.
If you read my comments in this article, I think I come off as a tin foil hat wearing lunatic, to some degree at least.
But actually, I'm not a super privacy paranoid person. Telemetry, voice recognition datasets, etc... I think those are a reasonable price to pay for free stuff! I just want to have my thumb on the scale and a go-bag packed for when/if these services become "evil" ;)
No, I'm being serious. technology like this could be very beneficial.
There's a good chance that if we continue to improve surveillance, law enforcement agencies and justice departments could begin to focus on rehabilitation and growing a kinder world.
Right, they are like firemen trying to put out fires after the building has already been ruined.
If it doesn't work, we're doomed anyway, so what's the problem?
This can happen only because whenever any slippery-slope action was taken previously, there is an army of apologists and "explainers" who rush to "correct" your instinctive aversion to these changes. It's always the same - the initial comment is seemingly kind, yet with an underlying menace, and if you continue to express opposition, they change tack to being extremely aggressive and rude.
See the comment threads around this topic, and look back to other related events (notably the tech giants censoring people "for the betterment of society" in the past 12 months).
Boiling a frog may happen slowly, but the water continues to heat up even if we pretend it doesn't. Very disappointed with this action by Apple.
This is typical obedient behavior. Some abused spouses go to great lengths to come up with excuses for their partners. Since I don't own an iOS device, I don't really care about this specific instance.
But I don't want these people normalizing deep surveillance and fear that I have to get rid of my OSX devices when this trend continues.
Yeah, but it's also useful for getting my friends on board. I think it's likely that I eventually start hosting matrix or some alternative, but my goal is to be practical here, yet still have a privacy protecting posture.
My friends are significantly more technical (and paranoid) than the average user. We've already discussed it.
But... yeah. Yeah. Which is why I got as many people on Signal as I could. Baby steps. The goal here, right now, is reasonable privacy, not perfection.
> Signal is still a centralised data silo where by default you trust a CA to verify your contacts' identity.
You can verify the security number out-of-band, and the process is straightforward enough that even nontechnical users can do it.
That's as much as can possibly be done, short of an app that literally prevents you from communicating with anyone without manually providing their security number.
I said, 'by default'. I know that it is possible to do a manual verification, but I have yet to have a chat with a person who would do that.
Also, Signal does not give any warning or indication about whether a chat partner's identity has been manually verified. Users are supposed to trust Signal and not ask difficult questions.
> I said, 'by default'. I know that it is possible to do a manual verification, but I have yet to have a chat with a person who would do that.
I'm not sure what else you'd expect. The alternative would be for Signal not to handle key exchange at all, and only to permit communication after the user manually provides a security key that was obtained out-of-band. That would be an absolutely disastrous user experience.
> Also, Signal does not give any warning or indication about whether a chat partner's identity has been manually verified
That's not true. When you verify a contact, it adds a checkmark next to their name with the word "verified" underneath it. If you use the QR code to verify, this happens automatically. Otherwise, if you've verified it manually (visual inspection) you can manually mark the contact as verified and it adds the checkmark.
Ahem. I'd expect something that most XMPP clients could do 10+ years ago with OTR: after establishing an encrypted session, the user is given a warning that the chat partner's identity is not verified, and is given options for how to perform this verification.
With a CA, you can show a mild warning that the identity is only verified by Signal, and give options to dismiss the warning or to perform out-of-band verification.
Not too disastrous, no?
> That's not true. When you verify a contact, it adds a checkmark next to their name with the word "verified"
It has zero effect if the user is given no indication that the word "verified" should appear there in the first place.
What you say is not true. This [1] is what a new user sees in Signal - absolutely zero indication. To verify a contact, the user must go to "Conversation settings" and then "View safety number". I'm not surprised nobody has ever established a verified session with me.
I did this with all my friends who are on Signal, and explained the purpose.
And it does warn about the contact being unverified directly in the chat window, until you go and click "Verify". The problem is that people blindly do that without understanding what it's for.
Hm, you're right. What I was thinking of is the safety number change notification. But if you start with a fresh new contact, it's unverified, but there's no notification to that effect - you have to know what to do to enable it.
That's the point; see my other comment [1]. The user has to know about it to activate manual verification, and by default they just have to trust Signal's CA that their contact is, indeed, the one they are talking to.
I agree Signal's default security is a whole lot better than iMessage, which trusts Apple for key exchange and makes it impossible to verify the parties, or even the number of parties, your messages are being encrypted to. Default security is super important for communication apps, because peers are less likely to tweak settings and know about verification screens.
If my parents had had a feature that alerted them about porn on their kid's device while I was a teen, they would have sent me to a conversion camp, and that is not an exaggeration.
Apple thinks the appropriate time for queer kids to find themselves is after they turn 18.
If you're just downloading and looking at porn, no problem. It only becomes an issue if you're sharing porn via Messages or storing it in iCloud. And to be fair, I don't think they're alerted to the nature of the pornography, so you might be able to avoid being outed even if you're sharing porn (or having porn shared with you).
Edit: I'm wrong in one respect: if the kid under 13 chooses to send a message with an explicit image despite being warned via notification, the image will be saved to a parental controls section. This won't happen for children >= 13.
I'm impressed; it actually has smooth scrolling, unlike OsmAnd, which is very slow at loading tiles.
Critical points I'd make about Organic Maps: I'd want a lower inertia setting so it scrolls faster, and a different color palette... they are using muddy tones of green and brown.
Does it let you select from multiple routes? I've been using Pocketmaps, but it only gives you a single option for routing, which can lead to issues in certain contexts.
You can still use Google Maps without an account and "incognito". I wish they'd allow app store usage without an account though- similar to how any Linux package manager works.
That's not really the issue. The issue is that for google maps to work properly, it requires that the Play services are installed. Play services are a massive semi-monolithic blob that requires tight integration with Google's backend, and deep, system-level permissions to operate correctly.
People need to remember that most of Android got moved into Play Services. It was the only way to keep a system relatively up to date when the OEMs won't update the OS itself.
Yeah, it's a dependency... as much as the Google Maps APK needing to run on Android itself.
I use maps on my phone on a regular basis - I would vastly prefer to have something less featured and stable versus hacking the crap out of my phone. But that's good to know.
One workaround is to use the mobile web app, which is surprisingly pretty decent for a web app. And because it's a web app, you can even disable things like sharing your location if you want to
I've just tested Google Maps on an Android device without Google Play Services or microG. The app works fine, although every time you cold-start it, the app shows an alert (which can be dismissed but not disabled) and a notification (which can be specifically disabled) that Google Play Services is unavailable. On an Android device with microG, Google Maps works without showing the alert or the notification about Google Play Services.
In addition to F-Droid, you can get Aurora Store (which is on F-Droid) which lets you use an anonymous login to get at the Play Store. I use it for a couple free software apps that aren't on F-Droid for some reason.
I also recommend Aurora Store as a complete replacement for the Play store. The one thing is that I've never tried using apps that I paid for on it but it works very well for any free apps. There is an option to use a Google account with Aurora but I've only ever used the anonymous account.
The only slight downside is that I haven't figured out how to auto-update apps, so your apps will get out of date without you being notified and you have to update them manually. This problem might literally be solved by a simple setting that I haven't bothered to look for, IDK.
On the plus side, it includes all the official Play Store apps, alongside some that aren't allowed on the Play Store.
For example, NewPipe, the superior replacement YouTube app that isn't allowed on the Play Store because it subverts advertisements and adds a few features that are useful for downloading certain things.
It's not the device that's less secure or private in this context, it's the services. There's no reason you couldn't just continue using your NAS for photo backup and Signal for encrypted-communications completely unaffected by this.
Apple seems to not have interest in users' devices, which makes sense -- they're not liable for them. They _do_ seem interested in protecting the data that they house, which makes sense, because they're liable for it and have a responsibility to remove/report CSAM that they're hosting.
So they should do that scanning server-side, at their boundary, instead of pushing software to run on phones with the potential to extend its scope later if there's no pushback.
That's not the issue. The issue is that they have shipped spyware to my device. That's a massive breach of trust.
I suspect that this time next year, I'll still be on iOS, despite my posturing. I'm certainly going to address iCloud in the next few weeks - specifically, by no longer using it. However, I would be surprised if I'm still on iOS a year or two after that.
What Apple has done here isn't horrible in the absolute sense. Instead, it's a massive betrayal of trust with minimal immediate intrusiveness, and a giant klaxon that their platform's dominance in terms of privacy is coming to an end.
> with icloud photos csam, it is also a horrifying precedent
That precedent was set many years ago.
>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect’s Gmail account.
Microsoft’s “PhotoDNA” technology is all about making it so that these specific types of illegal images can be automatically identified by computer programs, not people.
PhotoDNA converts an image into a common black-and-white format and resizes it to a uniform size, Microsoft explained last year while announcing its increased efforts at collaborating with Google to combat online child abuse.
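For what it's worth, the general idea behind this kind of perceptual hashing is simple enough to sketch. The toy "average hash" below (Python, assuming the Pillow package is installed) is nowhere near as robust as Microsoft's actual PhotoDNA, but it shows why this kind of matching survives re-encoding and resizing where an exact byte-level hash would not.

```python
from PIL import Image  # assumes the Pillow package is installed

def average_hash(path: str, size: int = 8) -> int:
    # Crude perceptual hash: greyscale, shrink to size x size, then set one
    # bit per pixel depending on whether it is brighter than the mean.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# Two images "match" when their hashes differ in only a few bits, which holds
# up under mild edits; an exact hash like SHA-256 changes on any re-encode.
# e.g.: hamming_distance(average_hash("a.jpg"), average_hash("b.jpg")) <= 5
```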
No, you're not a dinosaur. It is entirely reasonable for a hosting provider not to want certain content on their servers. And it is also quite reasonable to want to automate the process of scanning for it.
My physical device, on the other hand, is (supposed to be) mine and mine alone.
I think no matter what devices you use, you've nailed down the most important part, which is using apps and services that are flexible and can easily be used on another platform.
What I am reminded of is all of the now seemingly prophetic writing and storytelling in a lot of cyberpunk-dystopian anime about the future of the corporate state, and how megacorps rule EVERYTHING.
What I always thought was interesting was that the Police Security Services in Singapore were called "CISCO" -- and you used to see these SWAT-APV-type vans driving around and armed men with CISCO emblazoned on their gear/equipment/vehicles...
I always was reminded of Cyberpunk Anime around that.
Interesting! But actually this is not the only thing with an "interesting" name in Singapore - well, at least as long as you speak Czech. ;-)
You see, mass transit in Singapore is handled by the Singapore Mass Rapid Transit company, abbreviated SMRT. There is also the SMRT Corporation (https://en.wikipedia.org/wiki/SMRT_Corporation) and SMRT buses; the SMRT abbreviation is heavily used on trains, in stations, basically everywhere.
Well, in Czech "smrt" literally means death. So let's say that for Czech speakers, riding public transport in Singapore can be a bit unnerving - you stand on a station platform and then a train with "DEATH" written on it in big letters pulls into the station. ;-)
I’ve been thinking about switching my main email to Fastmail from Apple, for portability in case the anti-power-user trend crosses my personal pain threshold.
But if your worry is governments reading your mail, is an email company any safer? I’m sure FM doesn’t want to scan your mail for the NSA or its Australian proxy, but do they have a choice? And if they were compelled, would they not be prevented from telling you?
“We respect your privacy” is exactly what Apple has been saying.
I think self-hosting email has too many downsides (spam filtering, for example) to be worth it; I’m more concerned about losing my messages (easily solved with POP or mbox exports while still using a cloud account) than government data sharing. Email is unencrypted in transit anyway, and it’s “industry standard” to store it in clear text at each end.
It's complicated. As long as they require a reasonable warrant (ha!), I'm fine. Email is an inherently insecure protocol and ecosystem anyway.
I haven't used email for communication that I consider to be private for a while - I've moved most, if not all, casual conversation to Signal and iMessage. Soon, I hope to add something like Matrix or Mattermost into the mix.
My goal was never to be perfect. My goal is to be able to easily remove myself from an invasive spyware ecosystem, and bring my friends along, with minimal impact.
I have been self-hosting email for 7 years successfully. But it required a physical server in a reputable datacenter, and setting up Dovecot, Exim, SpamAssassin, reverse DNS, SPF, and DKIM. It took a bit of time to gain IP reputation, but it has worked flawlessly since. Occasionally some legit mail is flagged as spam or vice versa, but it is no worse than with any other mail provider. So it can be done! But my first attempts to do that on a VPS failed, as the IP blocks of VPS providers are often hopelessly blacklisted by the major email providers.
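To put a little flesh on that, here's a rough sketch of how you might sanity-check the DNS side of such a setup from Python, assuming the dnspython package is installed; the domain, IP, and DKIM selector below are placeholders, not my real setup.

```python
# Quick DNS sanity checks for a self-hosted mail setup (requires dnspython).
import dns.resolver
import dns.reversename

DOMAIN = "example.org"        # placeholder domain
MAIL_IP = "203.0.113.10"      # placeholder mail server IP
DKIM_SELECTOR = "mail"        # whatever selector your DKIM signer uses

def txt_records(name: str) -> list:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except Exception:
        return []

# SPF lives in a TXT record on the domain itself.
print("SPF: ", [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")])

# DKIM public keys live under <selector>._domainkey.<domain>.
print("DKIM:", txt_records(f"{DKIM_SELECTOR}._domainkey.{DOMAIN}"))

# Reverse DNS: the PTR record for the mail server's IP should resolve to a
# hostname in your domain, or the big providers will treat you as suspect.
ptr = dns.resolver.resolve(dns.reversename.from_address(MAIL_IP), "PTR")
print("PTR: ", [r.to_text() for r in ptr])
```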
Have you found any decent google maps alternatives? I'd love to find something but nothing comes close as far as I've found. Directions that take into account traffic is the big thing that I feel like nobody (other than Apple, MS, etc.) will be able to replicate.
Have you tried using the website? I've had some luck with that on postmarketOS, and it means you don't need to install Play services to use it.
I use Citymapper simply because I find it better (for the city-based journeys that are my usual call for a map app) - but it not being a Google ~data collection device~ service is no disadvantage.
At least, depending why you dislike having everything locked up with Google or whoever I suppose. Personally it's more having everything somewhere that troubles me, I'm reasonably happy with spreading things about. I like self-hosting things too, just needs a value-add I suppose, that's not a reason in itself for me.
> While I still use google maps, I've been trialing out OSM alternatives for a minute.
Is there a way to set up Android to handle shared locations without Google Maps?
Every time someone shares location with me (in Telegram) it displays as a tiny picture and once I click it it says I have to install Google Maps (I use an alternative for actual maps and don't have Google Maps installed). So I end up zooming the picture and then finding the location on the map manually.
> it is also a horrifying precedent that the device I put my life into is scanning my photos and reporting on bad behavior
Apple's new customers are the various autocratic regimes that populate the earth. Apple's customers used to be human beings. There exist many profiteers in Mountain View, Cupertino, Menlo Park, and Atherton in the service of making our monopolies more capable of subjugating humanity.
I also use Fastmail, but I'm fully aware that Australia, where it's hosted, is part of the Five Eyes spy network, and is also one of the countries acting extremely oppressively towards its citizens when it comes to covid restrictions.
So I don't actually expect my mail to be private. But at least it's not Google.
I'm just trying to buy time until open source and secure alternatives have addressed these problems. Apple doing this has moved my timeframes up by a few years (unexpectedly).
Take a look at https://internxt.com. Been using them for a couple weeks and am incredibly impressed. Great team, great product, just great everything. It was exactly what I was looking for
Yeah, sorry, I mixed them up in my head. I'm currently running Lineage on a PH-1, not postmarketOS. I would not consider what I have set up to be "production ready", but I'm going to spend some time this weekend looking into what modern hardware can run Lineage or other open mobile OSes.
Sorry, wasn't ripping on Lineage. It's more the entire ecosystem. I mentioned it in prior comments, but I think that in a few years we'll have a practical, open source, third party in the mobile phone OS wars - one with reasonable app coverage.
I don't care if I use google or apple services, btw, I just want the data flow to be on my terms.
> I'm not sure that I support what they are implementing for child accounts (as a queer kid, I was terrified of my parents finding out)
If you don't want your parents to look at your phone, you shouldn't be using a phone owned by your parent's account. The new feature doesn't change this calculus.
As a queer kid, would you enjoy being blackmailed by someone who tricked you into not telling your parents?
I remember an Apple conference where Tim Cook personally assured us that Apple is fully committed to privacy, that everything is so secure because the iPhone is so powerful that all necessary calculations can happen on the device itself, and that we are "not the product". I think the Apple CEO said some of this in the specific context of speech processing, yet it seemed a specific case of a general principle upheld by Apple.
I bought an iPhone because the CEO seemed to be sincere in his commitment to privacy.
What Apple has announced here seems to be a complete reversal from what I understood the CEO saying at the conference only a few years ago.
- It's run in Messages in cases where a child is potentially viewing sexually explicit material.
- It's run _before upload to iCloud Photos_ - where it would've already been scanned anyway, as they've done for years (and as all other major companies do).
To me this really doesn't seem that bad. Feels like a way to actually reach encrypted data all around while still meeting the expectations of lawmakers/regulators. Expansion of the tech would be something I'd be more concerned about, but considering the transparency of it I feel like there's some safety.
True. But, first, it also means anyone, anywhere, as long as they use iOS, is vulnerable to what the US considers to be proper. Which, I will agree, likely won't be an issue in the case of child pornography. But there's no way to predict how that will evolve (see Facebook's ever-expanding imposition of American cultural norms and puritanism).
Next, it also means they can do it. And if it can be done for child pornography, why not terrorism? And if it can be done for the US’ definition of terrorism, why not China's, Russia's or Saudi Arabia's? And if terrorism and child pornography, why not drugs consumption? Tax evasion? Social security fraud? Unknowingly talking with the wrong person?
Third, there apparently is transparency on it today. But who is to say it's possible expansion won't be forcibly silenced in the same way Prism's requests were?
Fourth, but that's only because I am slightly a maniac: how can anyone unilaterally decide to waste the computing power, battery life, and data plan of a device I paid for, without my say-so? (Probably one of my main gripes with ads.)
All in all, it means I am incorporating into my everyday life a device that can and will actively snoop on me and potentially snitch on me. Now, while I am not worried today, it definitely paves the way for many other things. And I don't see why I should trust anyone involved to stop here or let me know when they don’t.
> is vulnerable to what the US considers to be proper
This stirs up all sorts of questions about location and the prevailing standards in the jurisdiction you're in. Does the set of hashes used to scan change if you cross an international border? Is the set locked to whichever country you activate the phone in? This could be a travel nightmare.
As this isn't a list of things the U.S. finds prudish, but actual images of children involved in being/becoming a victim of abuse, it doesn't look like there are borders, at least according to the official Apple explanation[0].
If the situation OP suggests happens in the form of FBI/other orgs submitting arguably non-CSAM content, then Apple wouldn't be complicit or any wiser to such an occurrence unless it was after-the-fact. If it happens in a way where Apple decides to do this on their own dime without affecting other ESPs, I imagine they wouldn't upset CCP by applying US guidance to Chinese citizens' phones.
It's still a US database, and given its goal it would fit the US definition of child abuse.
I am no expert on what that means in the US, but I'd assume there can be many definitions of what "child", "abuse" and "material" mean, depending on beliefs.
I think your points are mostly accurate, and that's why I led with the bit about the EFF calling attention to it. Something like this shouldn't happen without scrutiny.
The only thing I'm going to respond to otherwise is this:
>Fourth, but that's only because I am slightly a maniac: how can anyone unilaterally decide to waste the computing power, battery life, and data plan of a device I paid for, without my say-so? (Probably one of my main gripes with ads.)
This is how iOS and apps in general work - you don't really control the amount of data you're using, and you never did. Downloading a changeset of a hash database is not a big deal; I'd wager you get more push notifications with data payloads in a day than this would be.
Battery life... I've never found Apple's on-device approaches to be the culprit of battery issues for my devices.
I think I'd add to your list of points: what happens when Google inevitably copies this in six months? There really is no competing platform that comes close.
> what happens when Google inevitably copies this in six months? There really is no competing platform that comes close.
Then you have to make a decision about what matters more. Convenience and features, or privacy and security.
I've made that decision myself. I'll spend a bit more time working with less-than-perfect OSS software and hardware to maintain my privacy and security.
> This is how iOS and apps in general work - you don't really control the amount of data you're using, and you never did. Downloading a changeset of a hash database is not a big deal; I'd wager you get more push notifications with data payloads in a day than this would be.
Oh, definitely. But I am given the ability to remove those apps, or to disable these notifications, and I consider the ones I leave to be of some value to me? This? On my phone? It’s literal spyware.
But, as I said, it's only because I am a maniac regarding how tools should behave.
The point you add about Google, however, is a real issue. I've seen some people mention LineageOS and postmarketOS, but that isn't really a solution for most people.
The problem with the “there’s no way to predict how this will evolve” argument is that it would apply equally as well years before this was announced, and to literally anything that Apple could theoretically do with software on iPhones.
Well it does - people have been pointing out downfalls of the walled garden, locked boot loaders and proprietary everything on "iDevices" for years, pointing to scenarios similar to the one unfolding right now.
That’s the problem with closed-source software in general, and one that can be remotely updated in particular.
And I am writing that as a (now formerly?) happy iPhone user. It’s just that I don’t trust it or Apple as much anymore.
And although there is no way of predicting with certainty how it will evolve, most past successful similar processes and systems usually went down the anti-terrorism & copyright enforcement roads.
The US is very openly, publicly moving down the road called The War on Domestic Terrorism, which is where the US military begins targeting, focusing in on the domestic population. The politicians in control right now are very openly stating what their plans are. It's particularly obvious what's about to happen, although it was obvious at least as far back as the Patriot Act. The War on Drugs is coming to an end, so they're inventing a new fake war to replace it, to further their power. The new fake war will result in vast persecution just as the last one did.
You can be certain what Apple's scanning is going to be used for is going to widen over time. That's one of the few obvious certainties with this. These things are a Nixonian wet dream. The next Trump type might not be so politically ineffectual; more likely that person will be part of the system and understand how to abuse & leverage it to their advantage by complying with it rather than threatening its power as an outsider. Trump had that opportunity to give the system what it wanted; he was too obtuse and rigid to understand he had to adapt or the machine would grind him up (once he started removing the military apparatus that was surrounding him, like Kelly and Mattis, it was obvious he would never be allowed to win a second term; you can't keep that office while being set against all of the military industrial complex including the intelligence community, it'll trip you up on purpose at every step).
The US keeps getting more authoritarian over time. As the government gets larger and more invasive, reaching ever deeper into our lives, that trend will continue. One of the great, foolish mistakes that people make about the US is thinking it can be soft and cuddly like Finland. Nations and their governments are a product of their culture. So that's not what you're going to get if you make the government in the US omnipotent. You're going to get either violent Latin American Socialism (left becomes dominant) or violent European Fascism (right becomes dominant). There's some kind of absurd thinking that Trump was right-wing, as in anti-government or libertarian; Trump is a proponent of big government, just as Bush was, that's why they had no qualms about spending like crazy (look at the vast expansion of the government under Bush); what they are is the forerunners to fascism (which is part of what their corporatism is), they're right-wingers that love big government, a super dangerous cocktail. It facilitates a chain of enabling over decades; they open up Pandora's boxes and hand power to the next authoritarian. Keep doing that and eventually you're going to get a really bad outcome (Erdogan, Chavez, Putin, etc.) and that new leadership will have extraordinary tools of suppression.
Supposed political extremists are more likely to be the real target of what Apple is doing. Just as is the case with social media targeting & censoring those people. The entrenched power base has zero interest in change, you can see that in their reaction to both Trump and Sanders. Their interest is in maintaining their power, what they've built up in the post-WW2 era. Trump and Sanders, in their own ways, both threatened what they constructed. Trump's chaos threatened their built-up system, so the globalists in DC are fighting back, they're going to target what they perceive as domestic threats to their system, via their new War on Domestic Terrorism (which will actually be a domestic war on anyone that threatens their agenda). Their goal is to put systems in place to ensure another outsider, anyone outside of their system, can never win the Presidency (they don't care about left/right, that's a delusion for the voting class to concern themselves about; the people that run DC across decades only care if the left/right winner complies with their agenda; that's why the Obamas and Clintons are able to be so friendly with the Bushes (what Bush did during his Presidency, such as Iraq, is dramatically worse than anything Trump did, and yet Bush wasn't impeached, wasn't pursued like Trump was, the people in power - on both sides - widely supported his move on Iraq), they're all part of the same system so they recognize that in each other, and reject a Trump or Sanders outsider like an immune system rejecting a foreign object).
The persistent operators in DC - those that continue to exist and push agenda regardless of administration hand-offs - don't care about the floated reason for what Apple is doing. They care about their power and nothing else. That's why they always go to the Do It For The Kids reasoning, they're always lying. They use whatever is most likely to get their agenda through. The goal is to always be expanding the amount of power they have (and that includes domestically and globally, it's about them, not the well-being of nations).
We're entering the era where all of these tools of surveillance they've spent the past few decades putting into place will start to be put into action against domestic targets en masse, where surveillance tilts over to being used for aggressive suppression. That's what Big Tech has been giddily assisting with the past few years, the beginning of that switch-over process. The domestic population doesn't want the forever war machine (a big reason Trump & Sanders are so popular is that both ran on platforms opposed to the endless foreign wars); the people that run DC want the forever war machine, it's their machine, they built it. Something is going to give, and it's obvious what that's going to be (human liberty at home - so the forever wars and foreign adventurism can continue unopposed).
Systems of power always act to defend and further that power. Historically (history of politics, war, governmental systems) or psychologically (the pathology of power lusting) there isn't anything surprising about any of it, other than perhaps that so many are naive about it. I suspect most of that supposed naivety is actually fear of confrontation though (you see the same thing in the security/privacy conflicts), playing dumb is a common form of self-defense against confrontation. To recognize the growing authoritarianism, requires a potent act of confrontation mentally (and then you either have to put yourself back to sleep (which requires far more effort), or deal with the consequences of that stark reality laid bare).
The one on which a social network used by nearly 3 billion people worldwide (Facebook) bans pictures of centuries old world famous paintings containing naked women, as if it were pornography.
The one on which a video hosting platform used by over 2 billion people (YouTube) rates content as 18+ as soon as it, even briefly, shows a pair of breasts.
So your argument is, if you've done nothing wrong, you have nothing to worry about. Really? Will you feel the same when Apple later decides to include dozens more crimes that they will screen for, surreptitiously? All of which are searches without warrants or legal oversight?
Let me introduce you to someone you should know better. His name is Edward Snowden. Or Louis Brandeis, who is spinning in his grave right about now.
The US Fourth Amendment exists for a damned good reason.
You do realize you could get this message across without the needlessly arrogant tone, yeah? All it does is make me roll my eyes.
Anyway, that wasn't my stated position. I simply pointed out that this is done for a subset of users (where there's already existing reasons to do so, sub-13 and all) and that on syncing to iCloud this _already happens anyway_.
I would gladly take this if it removes a barrier to making iCloud E2E encrypted; they are likely bound to do this type of detection, but doing it client-side before syncing feels like a sane way to do it.
Actually, I don't think it will remove a barrier for iCloud E2E encryption at all. On the contrary. All it will remove is the barrier for what we find acceptable for companies like Apple to implement. I think Apple made a very intrusive move, one that we will come to accept over time. After that, a next move follows... and so on. That's the barrier being moved. A point will be reached when E2E encryption is nothing more than a hoax, a non-feature with no added value. A mirage of what it is supposed to be.
All of these things are implemented under the Child Protection flag. Sure, we need child protection, we need it badly, but the collateral is huge, and quite handy too for most three-letter agencies. I don't have the solution.
The other day my 3 year old son had a rash, I took pictures of it over the course of a few days. A nude little boy, pictures from multiple angles. I showed my dermatologist. What will happen in the future? Will my iPhone "flag" me as a potential child predator? Can I tell it I'm a worried dad? Do I even have to be thinking about these things?
> I would gladly take this if it removes a barrier to making iCloud E2E encrypted; they are likely bound to do this type of detection, but doing it client-side before syncing feels like a sane way to do it.
But there is an issue there. Now there is a process on your phone capable of processing unencrypted data on your phone and communicating with the outside world. That is spyware which will almost certainly be abused in some way.
Hmm, it seems to me that since most smart criminals understand not to leave a digital footprint, what Apple will catch are those who are idiots and make an honest mistake, and those who are dumb and make the mistake of putting their illegal activity online.
So I would ask US lawmakers: why can't the phone companies make the same commitments? The reasoning seems to be that we have bad people committing crime using digital communication devices.
Last time I checked, the digital pipeline, i.e. the phone lines, is still under FCC rules, is it not?
If they answer that it's too hard, tech-wise, then why can't Apple make the same argument to lawmakers?
Teens are also children. Apple has no business checking if they send or receive nude pics. Let alone tell their parents. This is very creepy behavior from Apple.
The fact that teens are children means that if, say, a 16-yo sends a nude selfie to their s.o., they've just committed a felony (distributing child pornography) that can have lifelong consequences (thanks to hysterical laws about sex offender registries, both kids could end up having to register as sex offenders for the rest of their lives and will be identified as having committed a crime that involved a minor. Few if any of the registries would say more than this, and anyone who looks in the registry will be led to believe that they molested a child, not that they shared a selfie or had one shared with them). The laws may not be just or correct, but they are the current state of the world. Parents need to talk to their kids about this sort of thing, and this seems one of the less intrusive ways for them to discover that there's an issue. If it were automatically shared with law enforcement? That would be a big problem (and a guarantee that my kids don't get access to a device until they're 18), but I'm not ready¹ to be up in arms about this yet.
1. I reserve the right to change my mind as things are revealed/developed.
> they've just committed a felony (distributing child pornography)
In the US, maybe (not sure if this is even true in all states), but not in most other countries in the world, where a 16-year-old is not a child, nudity is not a problem, and "sex offender registries" don't exist.
The US is entitled to make its own (crazy, ridiculous, stupid) laws, but we shouldn't let them impose those on the rest of us.
Yet for the most part this is where we are ending up. Just look at Facebook and Twitter deciding what is right and wrong. I think that's wrong in a lot of ways, but apparently there is very little the EU and others can do about it.
I’d argue that the problem of minors declared sex offenders for nude pictures has reached a critical mass that scares me. At this point, sex offenders of truly vile things can hide by saying that they are on a sex offender registry because of underage selfies. And I think most people will believe them.
As I understand it, one of the primary purposes of sex offender registries is to get information out about who is on them. I believe some people are forced to knock on doors in their neighbourhood and disclose it. In other situations they would just be getting ahead of the story.
They wouldn't. But if they were sex offenders, they could claim that their offense was simply sending a nude when they were 16. While their real offense may have been rape.
I don't actually know what the law is in regard to sex offense. I'm simply explaining what I understood from the previous comment.
> if, say a 16-yo sends a nude selfie to their s.o., they've just committed a felony ... The laws may not be just or correct, but they are the current state of the world.
Hence, strong E2E encryption designed to prevent unjust government oppression, without backdoors.
Parents should talk to their teenagers about sex regardless of whether they get a notification on their phone telling them they missed the boat.
I get your points, but the end result is that the client in an E2EE system can no longer be fully trusted to act on the client's behalf. That seems alarming to me.
> Apple has no business checking if they send or receive nude pics. Let alone tell their parents.
Some people might disagree with you.
There are people out there who are revolted by the "obviously okay" case of 2 fully-consenting teenagers sending each other nude pics, without any coercion, social pressure, etc.
Not to mention all the gray areas and "obviously not okay" combinations of ages, circumstances, number of people involved, etc.
So this gives kids a heads-up that they shouldn't send it, and that if they do, their parent will be notified. And that parent is me, in case you are not reading this clearly.
Now yes, if someone is SENDING my child porn from a non-child account, I as the parent will be notified. Great.
If this is terrifying - that's a bit scary! HN is going a bit off the rails these days.
Allow me to make a prediction - users are going to like this. It will INCREASE their trust in Apple as a company trying to keep them and their family safe.
I just looked it up - Apple is supposedly literally the #1 brand globally in 2021. So they are doing the right thing in customers' minds so far.
"it will be the other kids’ parent who will decide for your kid. Apple will give your children’s picture to the other kid’s parents."
This is an absolute falsehood. Literally a total flat lie. Have people not read the paper?
Apple does not give Android phone users the child porn. They block it on-device. So no, it will not be the other children's parents who decide; it will be me, if my child is sending it.
And if someone sends my child a photo of child porn etc., then Apple will block it and will notify me, and that is also fine - if you are sending my child child porn, then I have a right to know, and I will pursue you aggressively, as almost any parent would, I think.
The idea that apple is doing something wrong here is ridiculous. The lying about what apple is doing is also ridiculous.
Presumably the sender won't get a notification that the picture they are about to send will be flagged, but it will be flagged by the recipient's device automatically.
Neither participant will have the opportunity to be warned in advance and avoid getting flagged.
The age at which Apple considers an account to belong to a child differs per legal region; for the US it is set to 13 years or younger. Also, your parents need to have added your account to the iCloud Family for the feature to work.
Wanting to know as a parent, and the way Apple is going about this are two different issues.
The government also wants to know about potential terrorist attacks. Why not scan all our phones for all kinds of data to protect innocent people from being killed by mass shootings?
That's nonsense. I'm saying that and I'm deeply locked in Apple's ecosystem. Which is pissing me off.
> Parents can’t inform their kids if they aren’t aware
Why not inform your children of the potential consequences when giving them a phone? Why do you need Apple to notify you of inappropriate behavior before having that conversation? That's like waiting until you find a pregnancy test in the garbage before talking to your children about sex.
> Apple has no business checking if they send or receive nude pics.
Furthermore, if Apple is deliberately, by design resending them to additional people beyond the addressee, as a feature it markets to the people that will be receiving them, that seems to...raise problematic issues.
> In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. If the under-13 child still chooses to send the content, they have to accept that the “parent” will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later. For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.
This specifically says that it will not notify the parents of teens, contrary to what GGP claims. So GP is right that Apple isn't doing what GGP claimed. However, I still think you might be right that GP didn't read the full article and just got lucky. Lol.
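As a minimal sketch of the rules the quoted passage describes (hypothetical names, not Apple's actual code), the age split looks roughly like this:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    warn_child: bool        # child sees an on-device warning before sending
    notify_parent: bool     # parent on the family account gets notified
    save_for_parent: bool   # image kept in the parental-controls section

def explicit_send_attempt(child_age: int, child_proceeds: bool) -> Outcome:
    # Sketch of the flow described in the quote above -- illustrative only.
    if child_age < 13:
        # Under 13: warned first; only if they still send does the parent get
        # notified and the image get saved for later review.
        return Outcome(True, child_proceeds, child_proceeds)
    if child_age <= 17:
        # 13-17: warning only, no parental notification.
        return Outcome(True, False, False)
    # 18+: the feature does not apply.
    return Outcome(False, False, False)
```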
You're conflating the CSAM detection of photos uploaded to iCloud with the explicit detection for child devices. The latter is loosely described here: https://www.apple.com/child-safety/.
> Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.
That’s only the first part of what was announced and addressed in the article.
The other part is on-device scanning for nude pics a child is intending to send using machine learning and securely notifying the child, and then parents within the family account. The alert that the kids get by itself will probably be enough to stop a lot of them from sending the pic in the first place.
If you are talking about parents giving their children a device to use as a weapon against them, you are implying some really bad parenting and really bad general human behavior by some other parents.
Not that there aren't some really lousy parents out there.
I suspect/hope the exclusion parenting style (you don't get a phone because I don't trust you with it or the rest of the world with you) and the guidance parenting style (you get a phone but I'm going to make sure to talk to you about responsibility and about lousy people in the world, and I want to know if something happens so we can keep talking) both far outweigh this sort of proposed 'entrapment' parenting style.
Good lord - people have scary parenting styles - "information weapons"???
A phone is a tool. You start slowly. Do you give a 3 year old a chainsaw or machine gun right away? No.
I grew up and learned to shoot. I started with a BB gun, then a 22lr rifle, etc. You start with stuff that's less dangerous.
A phone potentially exposes children to all sorts of crap.
It's not some horrible thing to start slowly with something a bit locked down (and yes, blocking porn is part of that), then it opens up as they get mature.
This conversation has really made me look at the HN community in a different way. I've worked with kids in lots of contexts and with parents etc - the HN feelings about apple's efforts in this area are going to be somewhat extreme outliers relative to population as a whole and particularly vis a vis parents.
It seems like the "guiding parent" style is acceptable to you.
And yes, a phone - and the web and computing in general - are information weapons. Same as a knife or a hammer they have utility too which is great and also something to be lauded. But dangers must be recognised.
I won't trust Apple or Google to draw the line in the sand for me. Both the kid and myself will know what the expectations are and be talking about it the whole way through growing up.
I agree with you in principle, but I also know that kids will soon find methods of sharing that defeat any scans. Other apps and ephemeral websites can be used to escape Apple’s squeaky-clean version of the world.
But if I'm picking a phone for my kid, and my choice is this (even if imperfect) and the HN freedomFone - it's going to be Apple. We'll see what other parents decide.
@philistine is right that Apple's solution can only work for the short term. Parents want an easy, automatic, "out of sight, out of mind" solution, but the only solution that will work long-term is talking with kids and educating them, instead of outsourcing parenting to a tech company.
It's easy to design a solution that works around this and allows either criminals or kids to send nude pictures under the radar, e.g.:
1) Someone will eventually write a web app that uses WebRTC or similar for snapping a nude picture (so that no picture is stored on the device)
2) encrypting those photos on the server
3) sending a link to the nude image, which will be rendered in an HTML canvas (again, no image is stored on the device)
4) the link to the web app that renders the image will be behind a captcha so that automated bots cannot scan it
Now, do we want to go down the rabbit hole to 'protect the kids' and make Apple build camera drivers that filter the entire video stream for nudity?
> the only solution that will work long-term is talking with kid and educating them instead of out-sourcing parenting to tech company.
Sometimes otherwise good, smart kids, with good parents, can be caught up by predation. It isn't always as simple as making sure the kid is educated; they'll still be young, they'll still be immature (< 13!), and it will sometimes be possible to manipulate them. I'm not saying that what Apple is doing is a good answer, it probably isn't, but it's an answer to a genuine felt need.
Under 13 seems like an OK cutoff, but I'd be very concerned that they push it to under 18. Pretty much everyone is sharing adult content responsibly at 17.
Nude pic ID is routine online. Facebook developed this capability over 5 years ago and employs it liberally today, as do many other net service providers.
Parents do have a legal and moral responsibility to check on their children’s behaviour, and that includes teens. It’s somewhat analogous to a teacher telling parents about similar behaviour taking place at school.
I suspect a lot of how people feel about this will come down to whether they have kids or not.
You describe it as if Apple's got people in some room checking each photo. It's some code that notifies their parents in certain situations. ;P
I know several parents in just my extended circle alone that would welcome the feature, so... I just don't think I agree with this statement. These parents already resort to other methods to try and monitor their kids but it's increasingly (or already) impossible to do so.
I suppose we should also take issue with Apple letting parents watch their kids location...?
Since nobody would ever object to it, protecting against child abuse gets used as a wedge. As the article points out, the way this story ends is with this very backdoor getting used for other things besides preventing child abuse: anything the government asks Apple to give them. It's an almost inevitable consequence of creating a backdoor in the first place, which is why you have to have a zero-tolerance policy against it.
My big issue is what it opens up. As the EFF points out, it's really not a big leap for oppressive governments to ask Apple to use the same tech (as demoed by using MS's tech to scan for "terrorist" content) to remove content they don't like from their citizens' devices.
That's my concern: what happens the first time a government insists that they flag a political dissident or symbol? The entire system is opaque by necessity for its original purpose but that seems to suggest it would be easy to do things like serve a custom fingerprints to particular users without anyone being any the wiser.
My heart goes to the queer community of Russia, whose government will pounce on this technology in a heartbeat and force Apple to scan for queer content.
They’d have many other countries keeping them company, too.
One big mess: how many places would care about false positives if that gave them a pretext to arrest people? I do not want to see what would happen if this infrastructure had been available to the Bush administration after 9/11 and all of the usual ML failure modes played out in an environment where everyone was primed to assume the worst.
First, standard disclaimer on this topic that there were multiple independent technologies announced - I assume you are speaking to content hash comparisons on photo upload specifically to Apple's photo service, which they are doing on-device vs in-cloud.
How is this situation different from an oppressive government "asking" (which is a weird way we now use to describe compliance with laws/regulations) for this sort of scanning in the future?
Apple's legal liability and social concerns would remain the same. So would the concerns of people under the regime. Presumably the same level of notification and ability of people to fight this new regulation would also be present in both cases.
Also, how is this feature worse than other providers which already do this sort of scanning on the other side of the client/server divide? Presumably Apple does it this way so that the photos remain encrypted on the server, and release of data encryption keys is a controlled/auditable event.
You would think the EFF would understand that you can't use technical measures to either fully enforce or successfully defeat regulatory measures.
> It's run _before upload to iCloud Photos_ - where it would've already been scanned anyway
Right, so ask yourself, why is it on the device? Why not just scan on the server?
To me (agreeing with much of the commentary I’ve seen) the likeliest answer is that they are confining the scan to pre uploads now not for any technical reason but to make the rollout palatable to the public. Then they’re one update away from quietly changing the rules. There’s absolutely no reason to do the scan on your private device if they plan to only confine this to stuff they could scan away from your device.
> - It's run _before upload to iCloud Photos_ - where it would've already been scanned anyway, as they've done for years (and as all other major companies do).
Then why build this functionality at all? Why not wait until it's uploaded and check it on their servers and not run any client side code? This is how literally every other non-encrypted cloud service operates.
I assume (and this is my opinion, to be ultra-clear) that it's a blocker for E2E encryption. As we've seen before, they wanted to do it but backed off after government pressure. It wouldn't surprise me if this removes a blocker.
Apple has shown that they prefer pushing things to be done on-device, and in general I think they've shown it to be a better approach.
From what I remember iCloud is only encrypted at rest but not E2E. Apple can decrypt it anytime.
The password manager (Keychain) is the only fully encrypted part of iCloud; if you lose your devices or forget the main password, the manager will empty itself. This does not happen with any other part of iCloud.
That really makes little to no sense - it's not E2EE if you're going to be monitoring files that enter the encrypted storage. That's snakeoil encryption at that point.
I sincerely doubt Apple is planning to do E2EE with iCloud storage considering that really breaks a lot of account recovery situations & is generally a bad UX for non-technical users.
They're also already scanning for information on the cloud anyway.
Eh, I disagree - your definition feels like moving the goalposts.
Apple is under no obligation to host offending content. Check it before it goes in (akin to a security checkpoint in real life, I guess) and then let me move on with my life, knowing it couldn't be arbitrarily vended out to x party.
Any image that would trigger _for this hashing aspect_ would already trigger _if you uploaded it to iCloud where they currently scan it already_. Literally nothing changes for my life, and it opens up a pathway to encrypting iCloud contents.
Feel free to correct me if I'm wrong, but this is a method for decrypting _if it's matching an already known or flagged item_. It's not enabling decrypting arbitrary payloads.
From your link:
>In particular, the server learns the associated payload data for matching images, but learns nothing for non-matching images.
Past this point I'll defer to actual cryptographers (who I'm sure will dissect and write about it), but to me this feels like a decently smart way to go about this.
Your data should be encrypted on Apple's servers and unreadable by them - at least, that is what I want from Apple. They are likely bound to scan for and detect this kind of abusive content.
This handles that client-side instead of server-side, and if you don't use iCloud Photos, it doesn't even affect you. If you are syncing? Sure, decrypt it on-device and check it before uploading - it's going to their servers after all.
Don't want to even go near this? Don't use Messages or iCloud, I guess. It's very possible to use iOS/iDevices in a contained manner.
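To make the "check it before it goes in" idea concrete, a grossly simplified sketch might look like the snippet below. The real system reportedly uses a perceptual hash (NeuralHash) plus a blinded/threshold protocol, so the device can't read the hash list and the server learns nothing about non-matches; this toy version uses an exact hash and a plain local set purely for illustration, with made-up names.

```python
import hashlib

# Hypothetical stand-in for the on-device database of known fingerprints.
KNOWN_FINGERPRINTS = {
    "placeholder-fingerprint-1",  # not real values
    "placeholder-fingerprint-2",
}

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; an exact hash breaks on any re-encode,
    # which is exactly why real systems don't use one.
    return hashlib.sha256(image_bytes).hexdigest()

def prepare_upload(image_bytes: bytes) -> dict:
    # The check happens on-device, before the photo leaves for the cloud; only
    # a "voucher" accompanies the upload, and only matches are actionable.
    matched = fingerprint(image_bytes) in KNOWN_FINGERPRINTS
    return {"photo": image_bytes, "voucher_indicates_match": matched}
```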
There are always scenarios that one cannot catch. EFF highlights one such.
It sounds like it could be quite common. And it could be an absolute nightmare scenario for the kid who does not have the feature turned on.
This means that if—for instance—a minor using an iPhone without these features turned on sends a photo to another minor who does have the features enabled, they do not receive a notification that iMessage considers their image to be “explicit” or that the recipient’s parent will be notified. The recipient’s parents will be informed of the content without the sender consenting to their involvement. Additionally, once sent or received, the “sexually explicit image” cannot be deleted from the under-13 user’s device.
Now it will be "before upload". In 1-2 years it's "scan all local photos" in the name of "make the World a better place". It's such a small technical step for Apple to change this scanning behaviour in the future and scan even offline photos. All the necessary software is on all Apple i-devices already by then.
Everybody is a potential criminal with photos on your phone unless you prove otherwise by scanning. This is the future we are heading to. To do the scanning on device is actually the weakest point of their implementation IMHO.
This seems even worse. If the images are only scanned before upload to iCloud then Apple has opened a backdoor that doesn’t even give them any new capability. If I am understanding this right an iPhone can still be used to distribute CSAM as long as the user is logged out of iCloud? So it’s an overreach and ineffective?
> Expansion of the tech would be something I'd be more concerned about
Yeah, and that’s precisely what will happen. It always starts with child porn, then they move on to “extremist content”, of which the term expands to capture more things on a daily basis. Hope you didn’t save that “sad Pepe” meme on your phone.
It runs on my device and uses my CPU, battery time and my network bandwidth (to download/upload the hashes and other necessary artifacts).
I'd be fine with them scanning stuff I uploaded to them using their own computers, because I don't have any real expectation of privacy from huge corporations.
I feel like this argument really doesn't add much to the discussion.
It runs only on a subset of situations, as previously noted - and I would be _shocked_ if this used more battery than half the crap running on devices today.
Do you complain that Apple runs code to find moments in photos to present to you periodically...?
What is the point of running this on-device? The issue here is that Apple has now built and is shipping what is essentially home-phoning malware that can EASILY be required, with a court order, to do something entirely different from what it was designed to do.
They're opening themselves to being forced by 3 letter agencies around the world to do some really fucked up shit to their users.
Apple should never have designed something that allows for fingerprinting of files & users for stuff stored on their own device.
Not really; iOS didn't previously have the capability of scanning and reporting files based on a database received from the FBI or other agencies.
There is a big difference once this has been implemented and deployed to devices. Fighting questionable subpoenas and the like becomes easier when you don't have the capability.
Given that iOS is totally proprietary and runs on a heavily locked-down device, making even inspecting the binary blobs complicated, how can anyone be sure what it is doing? Not to mention that any such capability missing now is just one upgrade away from being added, with no ability for the user to inspect and reject it.
> I feel like this argument really doesn't add much to the discussion.
Oh, I guess I should have just regurgitated the Apple press release like the gp?
> It runs only on a subset of situations...
For now. But how does that fix the problem of them using my device and my network bandwidth?
> I would be _shocked_ if this used more battery than half the crap running on devices today.
You think you'll be able to see how much it uses?
> Do you complain that Apple runs code to find moments in photos to present to you periodically...?
Yes. I hate that feature, it's a waste of my resources. I'll reminisce when I choose to, I don't need some garbage bot to troll my stuff for memories. I probably already have it disabled, or at least the notifications of it.
It's not just you. It's fucking enraging at this point. I feel like I woke up one day, got a good look at Finder and the various iCloud/background-service junk, and realized that it is to me what fucking bloatware on 2010 PCs (and presumably today's) was/is.
I just want general-purpose computation equipment at reasonably modern specifications - albeit largely devoid of root-privileged advertisement stacks (included libraries etc.).
I mean, what the fuck, is that so fucking hard? This is hellworld, given the obviously plausible counterfactual where we just... don't... do this.
As many, many people have pointed out, building a mechanism to scan things client-side is something which could easily be extended to encrypted content, and perhaps, is intended to be extended at a moment's notice to encrypted content, if they see an opportunity to do so.
It's like having hundreds of nukes ready for launch, as opposed to having the first launch being a year away.
If they wanted to "do it as all major companies do", then they could have done it on the server-side, and there wouldn't have been a debate about it at all, although it is still extremely questionable, as far as privacy is concerned.
The cynical take is that Apple was never committed to privacy in and of itself, but they are committed to privacy as long as it improves their competitive advantage, whether by marketing or by making sure that only Apple can extract value from its customers' data.
Hanlon's razor does not apply to megacorporations that have enormous piles of cash and employ a large number of very smart people, who are either entirely unscrupulous or for whom scruples are worth less than their salaries. We probably aren't cynical enough.
I am not arguing that we should always assume every change is always malicious towards users. But our index of suspicion should be high.
I've always been convinced that Apple cared about privacy as a way of competitive advantage. I don't need them to be committed morally or ethically, I just need them to be serious about it because I will give them my money if they are.
As soon as Cook became CEO, he let the NSA's Prism program into Apple. Everything since then has been a fucking lie.
> Andrew Stone, who worked with Jobs for nearly 25 years, told the site Cult of Mac last week that Steve Jobs resisted letting Apple be part of PRISM, a surveillance program that gives the NSA access to records of major Internet companies. His comments come amid speculation that Jobs resisted cooperating. “Steve Jobs would’ve rather died than give into that,” Stone told the site.
> According to leaked NSA slides about PRISM, Apple was the last tech behemoth to join the secret program — in October 2012, a year after Jobs died. Apple has said that it first heard about PRISM on June 6 of this year, when asked about it by reporters.
I mean, maybe they didn't call it "PRISM" when talking about it with Cook, so it could technically be true that they didn't hear of PRISM until media stories. Everyone knows the spy agency goes around telling all of their project code names to companies they're trying to compromise. Hello, sir. We're here to talk to you about our top secret surveillance program we like to call PRISM where we intercept and store communications of everyone. Would you like to join? MS did. So did Google. Don't you want to be in our select cool club?
Tim Cook doesn't lie. I think he has convinced himself that what he says isn't lying, that Apple and he are so righteous. Which is actually worse, because that mentality filters through from top to bottom. And it shows in their marketing and PR messages. He is also following Steve Jobs's last advice to him exactly: do the right thing. Except "the right thing" is so ambiguous it may turn out to be one of the worst pieces of advice.
My biggest turning point was Tim Cook flat out lying in Apple's case against Qualcomm. Double dipping? Qualcomm's patents being worth more than double all the other six combined? And the tactics they used in court, which were vastly different from the Apple vs. Samsung case. And yes, they lost. (Or settled.)
It is the same with privacy. They simplify their PR message to tracking = evil, tracking is invading your privacy. Which is all good. But at the same time Apple is tracking you, in everything you do on Apple Music, Apple TV+, the App Store and even Apple Card. (They only promise not to sell your data to third parties; they still hold some of that data.) What that means is that only Apple is allowed to track you, but anyone else doing it is against privacy? What Apple really means by the word privacy, then, is that data should not be sold to third parties. But no, they intentionally keep it unclear and created a war on data collection while doing it themselves. And you now have people flat out claiming Apple doesn't collect any data.
Then there is the war on ads. Which was so bad the ad industry pushed back and Tim Cook had to issue a mild statement saying they are not against ads, only targeted ads. What?
Once you start questioning all of his motives, and find concrete evidence that he is lying, along with all the facts from court cases about how Apple has long-term plans to destroy other companies, they all line up and shape how you view Tim Cook's Apple. And it isn't pretty.
And that is coming from an Apple fan of more than two decades.
What I want to know is why they decided to implement this. Are Apple just trying to appear virtuous and took action independently? Or was this done at someone else's request?
For all the rhetoric about privacy coming from Apple, I feel that such an extreme measure would surely cause complaints from anyone deeply invested in privacy. And maybe they're just using words like "significant privacy benefits compared to previous techniques" to make it sound reasonable to the average user who's not that invested in privacy.
> because the CEO seemed to be sincere in his commitment to privacy.
The sincerity of a company officer, even the CEO, should not factor into your assessment. Officers change over time (and individuals can change their stance over time), after all.
There was a funny, tiny thing that happened a few years back that made me think Tim Cook is a liar.
It was back when Apple had just introduced the Force Touch feature (i.e., pressure-sensitive touch, since abandoned, because it turns out pushing hard on an unyielding surface is not very pleasant or useful).
To showcase the capability, Apple had updated many of its apps with new force-touch features. One of which was mail: if you pushed just right on the subject line of a message, you'd get a tiny, unscrollable popout preview of its contents.
It was totally useless: it took just as much time to force touch to see the preview as just normally tapping to view the message, and the results were less useful. It was also fairly fiddly: if you didn't press hard enough, you didn't get the preview; if you pressed too hard, it would open into the full email anyway.
So Tim Cook, demoing the feature, said a funny thing. He said, "It's great, I use it all the time."
Which maybe, just maybe, is true, but personally I don't believe it, not for a second.
So since then, I've had Tim down in my book as basically a big liar.
I’m still waiting on iCloud backup encryption they promised a while back. There were reports that they scrapped those plans because the FBI told them to, but nothing official announced since 2019 on this.
So you mean the company that was part of PRISM, that has unfair business practices and a bully as a founder, was not really the world savior their marketing speech said they were?
I'm in shock. Multi-billion-dollar companies usually never lie to make money! And power-grabbing entities have such a neat track record in human history.
Not to mention nobody saw this coming or warned repeatedly that one should not get locked into such a closed and proprietary ecosystem in the first place.
I mean, dang, this serial killer was such a nice guy. The dead babies in the basement were weird, but apart from that he was a stellar neighbour.
Not really. This only applies to photos uploaded to iCloud. And photos uploaded to iCloud (and Google drive etc.) are already scanned on server for CP.
Apple is moving that process from on server to on phone in a way that protects your privacy better than current standards.
In the current system, all your photos are available to Apple unencrypted. In the new system, nothing will be visible to Apple unless you upload N images with database hits. From those N tokens, Apple is then able to decrypt your content.
So when this feature lands, it improves your privacy relative to today.
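For anyone wondering how "nothing is visible until N matches" can work mechanically, the usual building block is threshold secret sharing: each match releases one share of a key, and only once enough shares exist can anything be reconstructed. Below is a minimal textbook Shamir sketch in Python; it is purely illustrative and not Apple's actual construction, which layers this on top of private set intersection and per-image safety vouchers.

    # Toy illustration of the "N matches before anything is decryptable" idea,
    # using textbook Shamir secret sharing. Not Apple's actual construction.
    import random

    P = 2**127 - 1          # a Mersenne prime; toy field, not production-grade
    THRESHOLD = 3           # N: shares needed before the secret can be rebuilt

    def make_shares(secret, n_shares, threshold=THRESHOLD):
        # Random polynomial of degree threshold-1 with f(0) = secret.
        coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % P
                    den = (den * (xi - xj)) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    # Pretend each database hit releases one share of the decryption key.
    key = random.randrange(P)
    shares = make_shares(key, n_shares=10)
    print(reconstruct(shares[:THRESHOLD]) == key)   # True: enough matches
    print(reconstruct(shares[:2]) == key)           # False: below the threshold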
nah. this is not how trust works. if Apple does stuff like this, I stop trusting Apple. Binary. Trust or No Trust.
Who is to say once they start doing this they will not extend their capabilities and monitor everything on the device? This is the direction we’re heading in.
For me this is my last iPhone. And probably my last Mac. The hardware is nice, shiny and usable, but you cannot do shit like this after you sell everyone on privacy.
What would a company that cares about privacy do? You don't scan any of my things without explaining why and getting my consent. That's privacy.
They are literally telling you what they do with full disclosure, and have engineered a system to give you _more_ privacy than you have on any existing cloud photo provider today.
If you don't want to use cloud photo services because you don't like the implications, they are very upfront; disable iCloud photos. But every major cloud photo hosting service is doing this already on your images.
What you are missing is the behind-the-scenes pressure from the DOJ, FBI and Congress. Apple is trying their best to thread the needle, providing as much privacy as possible while plausibly covering their bases so Congress won't pass onerous and privacy-stripping laws.
> Apple is trying their best to thread the needle, providing as much privacy as possible while plausibly covering their bases so Congress won't pass onerous and privacy-stripping laws.
There is the problem right there. How can you tell that Apple is doing their best to preserve privacy vs. doing their best to serve their own interests?
If Congress passes laws, those laws are in the open, and we have a whole process dedicated to passing new laws. Here is something you maybe did not consider: passing those laws would be nearly impossible without basically sacrificing whatever political capital the washed-up political class still has in today's US (have you noticed how even the smallest issue becomes a big political fight, and how Congress doesn't seem to get much done nowadays?).
The second part is that the DOJ, FBI, and NSA doing their jobs is not the problem. The problem is mass surveillance. The same mass surveillance we have been subjected to keeps getting expanded, to the point where you will no longer be able to think anything except what's approved by the people in power (if you think the thought police is a ridiculous concept, we are definitely heading that way).
Apple's shtick was that they cared about privacy. If they hadn't done that dog and pony show, maybe I would have written this off as "corporations being corporations". Now they don't get that pass.
> If you don't want to use cloud photo services because you don't like the implications, they are very upfront; disable iCloud photos.
You don't seem to get it. It's not about some fucking photos. Who cares. It's about what this opens the door to. And about how this is going to be abused and expanded by "law enforcement".
In a world where you have shit like Pegasus and mass surveillance, you are one API call away from going to jail without even knowing what triggered it and being able to defend yourself. Probable cause? Fuck that. Everyone is guilty until proven innocent.
what happens when a government legally forces them to look for politically dissident content? they have already lost this fight, it is an inevitability
This year I purchased my first iPhone since the 3G, after today I am starting to regret that decision. At this point, I can only hope Linux on mobile picks up steam.
Because, as others mentioned before, it's just a few lines of code away from scanning any pictures that are not synced to any cloud. This just lays the foundation for cheap, automatic mass surveillance.
Imagine you were hired to design a mass surveillance system. What kind of technical problems would you have to face?
1) Trillions of pictures, amounting to a huge volume of data, would have to be sent to your servers to analyze - this requires huge network bandwidth
2) You would need huge computing power to analyze them
3) It would probably be easy to detect
Apple's solution allows:
1) plausible deniability - it 'was just a bug that scanned all pictures instead of those that were supposed to be in iCloud'
2) cheap operation - using the user's CPU/GPU and a small amount of the user's bandwidth just to send hashes (see the rough numbers sketched below)
3) less suspicion than a completely unknown process/daemon running in the background, because it's rolled out as part of a 'protect the children' campaign
4) a rollout to one of the most popular mobile phones in the US, which has a locked bootloader, an OS that cannot be downgraded, etc.
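For a sense of scale on the bandwidth and compute points above, some back-of-the-envelope arithmetic; every number below is a made-up assumption for illustration, not a figure from Apple:

    # Rough, illustrative numbers only; none of these are Apple's figures.
    photos_per_user = 5_000
    avg_photo_mb = 3.0        # assumed average photo size
    hash_bytes = 100          # assumed size of one perceptual hash / voucher
    db_entries = 200_000      # assumed size of the hash database shipped to devices

    server_side_gb = photos_per_user * avg_photo_mb / 1024
    client_side_mb = photos_per_user * hash_bytes / 1e6
    db_download_mb = db_entries * hash_bytes / 1e6

    print(f"exfiltrating images for server-side scanning: ~{server_side_gb:.1f} GB per user")
    print(f"uploading only hashes/vouchers: ~{client_side_mb:.2f} MB per user")
    print(f"one-time hash database download: ~{db_download_mb:.1f} MB per device")

The compute side is the same story: the matching runs on millions of user-owned CPUs/GPUs instead of in a data center someone has to pay for and explain.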
Great explanation.
It’s Apple’s way of decentralizing image analysis with very little server cost, and it’s the most amazing idea I’ve seen for mass surveillance.
It is secure, as long as you have nothing to hide. If you have no offending photos, then the data won't be uploaded! See, it's not nefarious at all! /s
Catching child pornographers should not involve subjecting innocent people to scans and searches. Frankly, I don't care if this "CSAM" system is effective - I paid for the phone, it should operate for ME, not for the government or law enforcement. Besides, the imagery already exists by the time it's been found - the damage has been done. I'd say the authorities should prioritise tracking down the creators but I'm sure their statistics look much more impressive by cracking down on small fry.
I've had enough of the "think of the children" arguments.
The algorithms and data involved are too sensitive to be discussed publicly, and the reasoning is palatable enough to even the most knowledgeable people. They can't even be pressured to prove that the system is effective at its primary purpose.
This is the perfect way to begin opening the back doors.
I agree with the rest of your points. The problem is that we don't know if Apple implemented this algorithm correctly, or even this algorithm at all, because the source code isn't subject to review and, even if it were, the binary cannot be proven to have been built from that source code. We also don't have proof that the only images being searched for are child abuse images, as they claim.
Security by obscurity has never been particularly effective, and there are some articles which allege that detection algorithms can be defeated fairly easily.
We need to get organized first. We need a support platform where we can coordinate these types of actions. It's on my todo list, but if anyone can get this started, please do so.
There isn't any reason to believe the CSAM hash list is only images. The government now has the ability to search for anything in your iCloud account with this.
As it has to be. Because there's no defense against the possession of it, you don't want a situation where a person under 18 can take pictures of him or herself, send them to an adult unsolicited, and then call the police and not suffer any consequences.
That doesn't make any sense and it's not how it works in a number of other countries. You could (for example) make it illegal to send these pictures instead.
People own their bodies. Taking pictures of yourself, if you're a child, isn't child porn any more than touching yourself is molestation/assault.
Children don't need to be hit with "strict liability".
A person trying to frame someone else of a serious crime commits a serious offense, yes.
But that's a logically separate concept from the production or possession of child pornography, which that person must not be regarded as committing if the images are of him or herself.
The idiotic law potentially victimizes victims. A perpetrator can threaten the child into denying the existence of the perpetrator, and into falsely admitting to having taken pictures him or herself. It's exactly like taking the victims of human trafficking and charging them with prostitution, because the existence and whereabouts of the traffickers couldn't be established.
Whoever came up with this nonsense was blinded by their Bible Belt morality into not seeing the unintended consequences.
"CSAM" is an easy target because people can't see it - it would be wrong for you to audit the db because then you'd need the illicit content. So its invisible to the average law-abiders.
The work of Facebook's illicit media team has led to many, many prosecutions. They intentionally keep quiet about it because the reaction to a headline like "500-member Child Porn Ring busted on Facebook" isn't "Geez, I'm glad Facebook is keeping us safe," it's "Wow, maybe we shouldn't let our teenagers on Facebook" -- a reaction that significantly hurts their bottom line, and tips off the ChiPo folks besides.
Source: my own experiences in the criminal justice system and Chaos Monkeys, by Antonio Garcia-Martinez (a Y Combinator alum!).
>"the reaction to a headline like "500-member Child Porn Ring busted on Facebook" isn't "Geez, I'm glad Facebook is keeping us safe," it's "Wow, maybe we shouldn't let our teenagers on Facebook"
--
Exactly. Fuck facebook.
If they wanted more credibility it wouldn't be about "making the bottom line a more profitable place"
As opposed to the bullshit "making the world a better place"
I can tell that you're angry at Facebook. However, I don't really understand why. You're upset that they aren't taking more public credit? Perhaps this is a cultural difference, but I've never been exposed to a community where not taking credit violates social values. Help me understand?
There have certainly been busts in the media, including some depraved individuals who have blackmailed teenagers into sending them images, one of which set the dangerous precedent of tech companies developing exploits, and refusing to disclose them after the fact.
It isn't terribly surprising that a platform like Facebook, which has a lot of children on it, would end up attracting predators who seek to prey on them. Fortunately, Facebook has been deploying a number of tools to improve their safety over the past few years which don't rely on surveillance or even censorship.
Statistically, there have been a number of arrests which have been a product of their activities, although I don't have much info on those. Someone else may.
The real question is whether it is worth sacrificing everyone's privacy, so that a few people can be arrested.
I can imagine iCloud being a lower risk platform than Facebook. Someone can't really groom someone into uploading photos, although the existence of such images is still very condemnable.
Didn’t they [Apple] make the same points that EFF is making now, to avoid giving FBI a key to unlock an iOS device that belonged to a terrorist?
“ Compromising the security of our personal information can ultimately put our personal safety at risk. That is why encryption has become so important to all of us.”
“… We have even put that data out of our own reach, because we believe the contents of your iPhone are none of our business.”
“ The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a backdoor. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.”
I really love the EFF, but I also believe the immediate backlash is (relatively) daft. There is a potential for abuse of this system, but consider the following too:
1. PhotoDNA is already scanning content from Google Photos and a whole host of other service providers.
2. Apple is obviously under pressure to follow suit, but they developed an on-device system, recruited mathematicians to analyze it, and published the results, as well as one in-house proof and one independent proof showing the cryptographic integrity of the system.
3. Nobody, and I mean nobody, is going to successfully convince the general public that a tool designed to stop the spread of CSAM is a "bad thing" unless they can show concrete examples of the abuse.
For one and two: given the two options, would you rather that Apple implement serverside scanning, in the clear, or go with the on-device route? If we assume a law was passed to require serverside scanning (which could very well happen), what would that do to privacy?
For three: It's an extremely common trope to say that people do things to "save the children." Well, that's still true. Arguing against a CSAM scanning tool, which is technically more privacy preserving than alternatives from other cloud providers, is an extremely uphill battle. The biggest claim here is that the detection tool could be abused against people. And that very well may be possible! But the whole existence of NCMEC is predicated on stopping the active and real danger of child sex exploitation. We know with certainty this is a problem. Compared to a certainty of child sex abuse, the hypothetical risk from such a system is practically laughable to most people.
So, I think again, the backlash is daft. It's been about two days since the announcement became public (via leaks). The underlying mathematics behind the system has barely been published [0]. It looks like the EFF rushed to make a statement here, and in doing so, it doesn't look like they took the time to analyze the cryptography system, to consider the attacks against it, or to consider possible motivations and outcomes. Maybe they did, and they had advance access to the material. But it doesn't look like it, and in the court of public opinion, optics are everything.
> that a tool designed to stop the spread of CSAM is a "bad thing"
It's certainly said to be designed to do it, but have you seen concerns raised in the other thread (https://news.ycombinator.com/item?id=28068741)? There have been reports from some commenters of the NCMEC database containing unobjectionable photos because they were merely found in a context alongside some CSAM.
Who audits these databases? Where is the oversight to guarantee only appropriate content is included? They are famously opaque because the very viewing of the content is illegal. So how can we know that they contain what they are purported to contain?
> Who audits these databases? Where is the oversight to guarantee only appropriate content is included? They are famously opaque because the very viewing of the content is illegal. So how can we know that they contain what they are purported to contain?
I wholeheartedly agree: there is an audit question here too. The contents of the database are by far the most dangerous part of this equation, malicious or not, targeted or not. I don't like the privacy implications about this, nor the potential for abuse. I would love to see some kind of way to audit the database, or ensure that it's only used "for good." I just don't know what that system is, and I know that PhotoDNA is already in use on other cloud providers.
Matthew Green's ongoing analysis [0] is really worth keeping an eye on. For example, there's a good question: can you just scan against a different database for different people? These are the right questions given what we have right now.
Do the people authorized to query the database have access to view its contents?
How many people in the world can certify that it only contains CSAM? I wonder if there are images of US troops committing warcrimes, or politicians doing cocaine, or honeypot intel op blackmail images in there too. Lots of powerful people would love to have an early warning system for leaks of embarrassing non-CSAM images.
Matt should read the release before live tweeting FUD. The database is shipped in the iOS image, per the overview, so targeting users is not an issue (roughly).
Is the database frozen, or can they push out updates independently of iOS updates? If they can't, targeting individual users definitely doesn't seem possible unless you control OS signing.
The technical summary provides a lot of detail. I don’t think Apple would omit remote update functionality from it if such capability existed, especially since database poisoning is a real risk to this type of program. I’m comfortable with interpreting the lack of evidence as evidence of absence of such a mechanism. Explicit clarification would certainly help though, but my original point stands: there is positive evidence in the docs which the FUD tweets don’t engage with.
In particular, I’m referencing the figure which says that the database of CSAM hashes is “Blinded and embedded” into the client device. That does not sound like an asset the system remotely updates.
I agree database poisoning is a legitimate threat! Including the database in an iOS release (so it can’t be targeted and updated out of band) mitigates it somewhat. At the end of the day, though, more should be done to make NCMEC’s database transparent and trustworthy. And other databases too, if Apple decides to ship country-specific blacklists.
I personally don't believe this process can be made to be trustworthy enough while still serving its stated purpose. It will always remain opaque enough that it could and will be used to violate civil rights.
Has Matt redacted any of the FUD from his tweets last night which aren’t true given the published details from today? For example, his claim that the method is vulnerable to black box attacks from GANs isn’t applicable to the protocol because the attacker can’t access model outputs.
Furthermore, if “an easy to change implementation detail” in your threat model is anything which could be changed by iOS update, you should’ve stopped using iPhone about 14 years ago.
That’s a problem with NCMEC, not Apple’s proposal today. Furthermore, if it were an actual problem, it would’ve already manifested with the numerous current users of PhotoDNA which includes Facebook and Google. I don’t think the database of known CSAM content includes photos that cannot be visually recognized as child abuse.
Why do you not think that? As far as I understand, there is no procedure for reviewing the contents, it is simply a database that law enforcement vouches is full of bad images.
NCMEC, not law enforcement, produces a list of embeddings of known images of child abuse. Facebook and Google run all photos uploaded to their platforms against this list. Those which match are manually reviewed and if confirmed to depict such scenes, are reported to CyberTip. If the list had a ton of false positives, you think they wouldn’t notice that their human reviewers were spending a lot of time looking at pictures of the sky?
It's well-known that this algorithm doesn't have a perfect matching rate. It'd be easy to presume that any false positives are not erroneously tagged images, but the error rate of the underlying algorithm, if all the images were tagged correctly. Who would know?
IIRC Wired reported the algorithm "PhotoDNA" worked around 99% of the time a number of years ago, however newer algorithms may be fuzzier. This is not the same algorithm. And even "PhotoDNA" appears to change over time.
I doubt reviewers of such content are at a liberty to discuss what they see or don't with anyone here. Standard confidentiality agreements.
How do we know it hasn't? Maybe the people who have seen these situations have received national security letters that purport to prevent them from even talking to a lawyer about the non-CSAM classified images they've seen in the course of the investigation?
"Reports from commenters" = unsubstantiated speculation. Weird how no one was able to specifically state any information about these unobjectionable photos except for a theoretical mechanism for them to find their way into the database.
Yes, of course, it's "unsubstantiated speculation", but how do you suppose the speculation be substantiated if the entire process and database are not available for audit? That's exactly the problem with this.
That isn't specific information - that is more speculation.
I expect people who make positive claims (the database contains innocuous images) to be able to back up those claims. Saying that there are innocuous images in the database is completely different from saying there may be innocuous images in the database.
It would be like if I claimed that there is a burglar in your house - and how do I know? Well all a burglar would have to do is break your window and crawl in.
You presume Apple and the DoJ will implement this with human beings at each step. They won't. Both parties will automate as much of this clandestine search as possible. With time, the external visibility and oversight of this practice will fade, and with it, any motivation to confirm fair and accurate matches. Welcome to the sloppiness inherent in clandestine law enforcement intel gathering.
As with all politically-motivated initiatives that boldly violate the Constitution (consider the FISA Court, and its rubber stamp approval of 100% of the secret warrants put before it), the use and abuse of this system will go largely underground, like FISA, and its utility will slowly degrade due to lack of oversight. In time, even bad matches will log the IDs of both parties in databases that label them as potential sexual predators.
Believe it. That's how modern computer-based gov't intel works. Like most law enforcement policy recommendation systems, Apple's initial match algorithm will never be assessed for accuracy, nor be accountable for being wrong at least 10% of the time. In time it will be replaced by other third party screening software that will be even more poorly written and overseen. That's just what law enforcement does.
I've personally seen people suffer this kind of gov't abuse and neglect as a result of clueless automated law enforcement initiatives after 9/11. I don't welcome more, nor the gradual and willful tossing of everyone's basic Constitutional rights that Apple's practice portends.
The damage to personal liberty that is inherent in conducting secret searches without cause or oversight is exactly why the Fourth Amendment requires a warrant before conducting a search. NOW is the time to disabuse your sense of 'daftness'; not years from now, after the Fourth and Fifth Amendments become irreversibly passe. Or should I say, 'daft'?
> recruited mathematicians to analyze it, and published the results, as well as one in-house proof and one independent proof showing the cryptographic integrity of the system.
Apple employs cryptographers, but they are not necessarily acting in your interest. Case in point: their use of private set intersection, to preserve the privacy... of law enforcement, not users. Their less technical summary:
> Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.
> Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection..
The matching is performed on device, so the user’s privacy isn’t at stake. But, thanks to PSI and the hash preprocessing, the user doesn’t know what law enforcement is looking for.
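For anyone unfamiliar with PSI, here is a toy Diffie-Hellman-style sketch of the general idea: both sides blind hashed items under secret exponents, so matches can be found without either side seeing the other's raw list. This is a generic textbook construction, not Apple's protocol; Apple's variant adds safety vouchers and a threshold, and reverses the roles so that the server (not the user) learns about matches.

    # Toy DH-style private set intersection. Generic textbook idea only;
    # NOT Apple's actual protocol (which adds vouchers, thresholds, etc.).
    import hashlib, random

    P = 2**127 - 1                 # toy prime modulus; real PSI uses a proper group
    def h(item):                   # hash an item into the group
        return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

    server_db = {"hashA", "hashB", "hashC"}    # e.g. the blinded CSAM hash list
    client_items = sorted({"hashB", "hashX"})  # hashes of the client's photos

    ks = random.randrange(2, P - 1)            # server's secret exponent
    kc = random.randrange(2, P - 1)            # client's secret exponent

    # Server publishes its set blinded under ks: clients can't read the raw entries.
    server_blinded = {pow(h(x), ks, P) for x in server_db}

    # Client blinds its own items under kc and sends them over...
    client_blinded = [pow(h(x), kc, P) for x in client_items]
    # ...the server raises them to ks and returns them (exponentiation commutes).
    client_double = [pow(y, ks, P) for y in client_blinded]

    # Client raises the server's published set to kc and compares.
    server_double = {pow(y, kc, P) for y in server_blinded}
    matches = [x for x, y in zip(client_items, client_double) if y in server_double]
    print(matches)   # ['hashB'] -- the overlap, without exposing either full list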
Well, it’d be kind of dumb to make the mistake of building a system to stop child pornography only to have it become the biggest distributor of CP photos in history
Those images are hashed, not transmitted in original format. On top of that, PSI prevents you from learning those hashes, or how many there are. So you can’t tell if the database contains the hash of, say, tank-man.jpg.
I understand why this shielding is necessary for the system to work. My point is the crypto is being used to protect law enforcement, not the user.
And my point is that the only way to provide visibility over what is being looked for, without distributing the material, would be to implement some type of ZKP.
> There is a potential for abuse of this system, but consider the following too
> I think again, the backlash is daft.
Don't apologize for this bullshit! Don't let your love of brand trump the reality of what's going on here.
Machinery is being put in place to detect what files are on your supposedly secure device. Someone has the reins and promises not to use it for anything other than "protecting the children".
How many election cycles or generations does it take to change to an unfavorable climate where this is now a tool of great asymmetrical power to use against the public?
What happens when the powers that be see that you downloaded labor union materials, documents from Wikileaks, or other files that implicate you as a risk?
Perhaps a content hash on your phone puts you in a flagged bucket where you get pat downs at the airport, increased surveillance, etc.
The only position to take here is a full rebuke of Apple.
edit: Apple apologists are taking a downright scary position now. I suppose the company has taken a full 180 from their 1984 ad centerpiece. But that's okay, right, because Apple is a part of your identity and it's beyond reproach?
edit 2: It's nominally iCloud only (a key feature of the device/ecosystem), but that means having to turn off a lot of settings. One foot in the door...
edit 3: Please don't be complicit in allowing this to happen. Don't apologize or rationalize. This is only a first step. We warned that adtech and monitoring and abuse of open source were coming for years, and we were right. We're telling you - loudly - that this will begin a trend of further erosion of privacy and liberty.
It's not doing any sort of scanning of your photos while they're just sitting on your device. The CSAM scanning only occurs when uploading photos to iCloud, and only to the photos being uploaded.
It’s always feasible that they put a change in that does something users don’t want and invades hardware privacy. But this update is not an instance of that. It’s simply changing how we interact with their SaaS. We’re no further down some slippery slope.
I’d be interested to see what any Apple executives would respond to the concerns in interviews, but I don’t expect Apple to issue a press release on the concerns.
This is an abuse of my property rights. The device is my property, and this activity will be using my CPU, battery time and my network bandwidth. That's the abuse right there.
They should just use their own computers to do this stuff.
Then you have two choices, disable iCloud photo backups or don’t upgrade to iOS 15. There are plenty of arguments against Apple’s scheme, but this isn’t one of them.
It's the best argument. Every other argument is based on a slippery slope what-if scenario.
Also, I pay for iCloud... but they're not paying me for using my phone and bandwidth. I never agreed to that.
They can't just pull the rug out from under me after we already have an agreement. I mean, they can, because I probably "agreed" to it in some fine-print garbage EULA, but those fall apart in a court of law.
> it looks like the thin end of a wedge, it sucks and I'm not happy about it.
But you just said that you can disable it. So, you can be upset about a possible future but I'm the one being melodramatic? That's a good one! You chose to run the OS. So by your logic that gives you no right to complain either.
I do understand that I can disable iCloud for my photos and I will do that. We'll see how long that lasts until they decide to tie the next feature and the next one and the next one to something that I don't like. Because that's how this works. Every time they do something that they know people won't like, they simply make it so that you lose access to something else if you decide to stand up for your principles.
I don’t know how true this is. I don’t see any way to block Photos from viewing the files on this device and I see no reason that it can’t read files from my other apps.
(2) is important. Apple put effort into making this at least somewhat privacy-respecting, while the other players just scan everything with no limit at all. They also scan everything for any purpose including marketing, political profiling, etc.
Apple remains the most privacy respecting major vendor. The only way to do better is fully open software and open hardware.
This isn't the biggest issue at play, but one detail I can't stop thinking about:
> If an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. [...] For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.
Why is it different for children under 13, specifically? The 18-year cutoff makes sense, because turning 18 carries legal weight in the US (as decided via a democratic process), but 13?
13 is an age when many parents start granting their children more freedom, but that's very much rooted in one's individual culture—and the individual child. By giving parents fewer options for 13-year-olds, Apple—a private company—is pushing their views about parenting onto everyone else. I find that a little disturbing.
---
Note: I'm not (necessarily) arguing for greater restrictions on 13-year-olds. Privacy for children is a tricky thing, and I have mixed feelings about this whole scheme. What I know for sure, however, is that I don't feel comfortable with Apple being the one to decide "this thing we've declared an appropriate invasion of privacy for a 12-year-old is not appropriate for a 13-year-old."
13 isn't an arbitrary cut-off. It's established as law in the US under COPPA. Similar to how 18 is the cut off for the other features. Other countries may have different age ranges according to local laws.
18, to me, is different, because it's the point when your parents have no legal authority, so of course they shouldn't have any say over how you use your phone.
I know nothing about this, but I'm very surprised that any type of supervision—which is what this really is—would be legally unclear, for anyone who is a minor.
Yeah, the "your phone will check your personal files against an opaque, unauditable, government-provided database and rat you out if it gets a match" part of this is very concerning, but I don't buy the EFF's arguments against the new parental control features. End-to-end encrypted or not, if you're sending messages to a minor you should expect that their parents can read those messages.
Parents need to expressly opt in to Communication Safety when setting up a child's device with Family Sharing, and it can be disabled if a family chooses not to use it.
Parents cannot be notified when a child between the ages of 13 and 17 views a blurred photo, though children that are between those ages will still see the warning about sensitive content if Communication Safety is turned on.
I kind of agree with you there. Isn't it just another toggle? Surely if the entire feature is opt-in, the blurring could be a toggle, as well as the notification.
The article didn't say that these age differences were adjustable defaults, so I would presume not. If Apple put in these restrictions to protect the privacy of older children, making it adjustable would defeat the point.
(And by the way, I respect Apple's desire to protect children's privacy from their parents, but forcing the change at 13, for everyone, seems awfully prescriptive. It's fundamentally a parenting decision, and Apple should not be making parenting decisions.)
But if this can in fact be controlled by the parent regardless of the child's age, that does resolve the problem!
If Mallory gets a lawful citizen Bob to download a completely innocuous looking but perceptual-CSAM-hash-matching image to his phone, what happens to Bob? I imagine the following options:
- Apple sends Bob’s info to law enforcement; Bob is swatted or his life is destroyed in some other way. Worst, but most likely outcome.
- An Apple employee (or an outsourced contractor) reviews the photo, comparing it to CSAM source image sample used for the hash. Only if the image matches according to human vision, Bob is swatted. This requires there to be some sort of database of CSAM source images, which strikes me as unlikely.
- An Apple employee or a contractor reviews the image for abuse without comparing it to CSAM source, using own subjective judgement. Better, but implies Apple employees could technically SWAT Apple users.
Do we know that they are using perceptual hashing? I am curious about the details of the hash database they are comparing against, but I assumed perceptual hashing would be pretty fraught with edge cases and false positives.
e: It is definitely not a strict/cryptographic hash algorithm: "Apple says NeuralHash tries to ensure that identical and visually similar images — such as cropped or edited images — result in the same hash." They are calling it "NeuralHash" -- https://techcrunch.com/2021/08/05/apple-icloud-photos-scanni...
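For readers who haven't met perceptual hashes, here is the classic "difference hash" (dHash) in a few lines; NeuralHash is a learned model and is not this algorithm, but dHash shows why re-encodes, resizes, and small edits tend not to change the output, unlike a cryptographic hash. The file names are placeholders.

    # Classic "dHash" perceptual hash; illustrative only. NeuralHash is a
    # learned model and works differently.
    from PIL import Image

    def dhash(path, size=8):
        # Shrink to (size+1) x size grayscale, then compare horizontal neighbours.
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits                          # 64-bit fingerprint

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Near-duplicates (recompressed, resized, lightly edited) land within a small
    # Hamming distance of each other; a cryptographic hash would differ completely.
    print(hamming(dhash("original.jpg"), dhash("recompressed.jpg")))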
Presuming iCloud Photos is enabled by Bob, an unsuspecting citizen, all downloaded images are synced to iCloud either right away or next time on Wi-Fi, depending on settings.
True. On top of that, people should now start worrying about whether someone in a WhatsApp/Telegram group will share some illegal file.
I have been part of a few public hiking/travelling WhatsApp/Telegram groups. Many times I mute those groups because I don't want to be notified and distracted by every single message. All the photos someone shares in any WhatsApp group will end up in your iPhone's 'Recent' album automatically, even if you muted the group many months ago and haven't checked it at all.
Well, at least this feature could be disabled in WhatsApp settings as far as I know.
Either way, since my original post I’ve read that the way Apple does it is by reviewing the offending material in-house first before notifying any third party (like the government), meaning it wouldn’t be as easy to SWAT a random person just like that.
> Apple says that it will manually review each report to confirm there is a match. It can then take steps to disable a user's account and report to law enforcement.
Police routinely get drug sniffing dogs to give false positives so that they are allowed to search a vehicle.
How do we know Apple or the FBI don’t do this? If they want to search someone’s phone all they need to do is enter a hash of a photo they know is on the targets phone and voila, instant access.
Also, how is this not a violation of the 14th amendment? I know Apple isn’t part of the government but they are basically acting as a defacto agent of the police by scanning for crimes. Using child porn as a completely transparent excuse to start scanning all our material for anything they want makes me very angry.
Because it requires Apple and law enforcement, two separate organizations, to collude against you.
The false positive would have to be affirmed to a court and entered into evidence. If the false positive were found by the court not to match the true image, any warrant etc. would be found invalid, and the fruit of any search etc. would be invalid as well.
Apple is a private company. By agreeing to use iCloud Photos you agree to their terms, so there is no 14th Amendment violation.
> Because it requires Apple and law enforcement, two separate organizations, to collude against you.
Does it really? As I understand it, the thing is pretty one-sided. Who manages and governs the collection of 'hashes'? If it's law enforcement there's no collusion needed. Also, someone can just text you such a photo, or some 0-day exploiting malware (of which governments have a bunch) would plant one on your phone.
> The false positive would have to be affirmed to a court and entered into evidence. If the false positive we’re found to not match the true image by the court, any warrant etc. would be found invalid and the fruit of any search etc would be invalid as well.
All of this would happen after you're arrested, labeled a pedo and have your life turned upside down. All of which can be used to coerce a suspect into becoming an informant, plead guilty to some unrelated charge or whatever. This type of thing opens the door to a whole new world of abuse.
I do not have faith in our legal system. I have faith that, if Apple wanted to build a system to help frame its users, it could do so at anytime and it certainly wouldn't advertise it.
LE only gets involved when Apple makes a determination that 1) you have a high number of hash collisions for exploitative material and 2) those images correspond to actual exploitative material.
So if you are innocent, Apple would have to make the decision to screw you over and frame you. And... they would have to manufacture evidence against you since investigators need actual evidence to get warrants. Hash collisions are not evidence. They can do this today if they want. Apple can simply add illegal content to your iCloud drive and then report you to LE. But they don't seem to be doing that.
The FT article mentioned it was US only, but I'm more afraid of how other governments will try to pressure Apple to adapt said technology to their needs.
Can they trust random government to give them a database of only CSAM hashes and not insert some extra politically motivated content that they deem illegal ?
Because once you've launched this feature in the "land of the free", other countries will require their own implementation for their own needs and demand (through local legislation, which Apple will need to abide by) control of said database.
And how long until they also scan browser history for the same purpose? Why stop at pictures? This is opening a very dangerous door that many here will be uncomfortable with.
Scanning on their premises (which, as far as we know, they can do) would be a much better choice; this is anything but (as the linked "paper" tries to claim) privacy-forward.
The initial rollout is limited to the US, with no concrete plans reported yet on expansion.
“The scheme will initially roll out only in the US. […] Apple’s neuralMatch algorithm will continuously scan photos that are stored on a US user’s iPhone and have also been uploaded to its iCloud back-up system.”
Researchers interviewed for the article would agree with your analysis. “Security researchers [note: appears to be the named security professors quoted later in the article], while supportive of efforts to combat child abuse, are concerned that Apple risks enabling governments around the world to seek access to their citizens’ personal data, potentially far beyond its original intent.”
Thanks, after some fiddling I managed to finally read the full text from the article and it's definitely short on details on the rollout. Let's hope they rethink this.
I'm also fairly concerned about the neural part of the name, which I hope is just (incredibly poor) marketing around the perceptual hash thing.
- Apple: Dear User, We are going to install Spyware Engine in your device.
- User: Are you out of your f... mind?
- Apple: It's for children protection.
- User: Ah, ok, no problem, please install spyware and do later whatever you wish
and forget about any privacy, the very basis of rights, freedom and democracy.
This is by the way how Russia started to filter the web from political opponents.
All necessary controls were put in place under the same slogan: "to protect children"
Yeah, right.
Are modern people that naive and dumb that they can't think two steps forward? Is that why it's happening?
Edit:
Those people would still need to explain how living in a society without privacy, freedom, and democracy, under authoritarian practices, will make those children any 'safer' when they grow up...
It’s pretty trivial to iteratively construct an image that has the same hash as another, completely different image if you know what the hash should be.
All one needs to do, in order to flag someone or get them caught up in this system, is to gain access to this list of hashes and construct an image. This data is likely to be sought after as soon as this system is implemented, and it will only be a matter of time before a data breach exposes it.
Once that is done, the original premise and security model of the system will be completely eroded.
That said, if this does get implemented I will be getting rid of all my Apple devices. I’ve already switched to Linux on my development laptops. The older I get, the less value Apple products have to me. So it won’t be a big deal for me to cut them out completely.
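To make the claim above about iteratively constructing a matching image concrete, here is a toy second-preimage search against a deliberately simple average hash. Real perceptual hashes, NeuralHash included, are much harder targets and would need different techniques, so treat this strictly as a sketch of the principle, not a working attack on Apple's system.

    # Toy second-preimage search against a trivially simple "average hash".
    # Illustrates the principle only; real perceptual hashes are harder targets.
    import random

    SIZE = 8   # an 8x8 "image" of brightness values 0..255

    def average_hash(pixels):
        mean = sum(pixels) / len(pixels)
        return tuple(1 if p > mean else 0 for p in pixels)

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # The hash we want to collide with (pretend it came from a flagged image).
    target = average_hash([random.randrange(256) for _ in range(SIZE * SIZE)])

    # Start from an arbitrary image and greedily nudge pixels until the hash matches.
    img = [random.randrange(256) for _ in range(SIZE * SIZE)]
    for _ in range(100_000):
        dist = hamming(average_hash(img), target)
        if dist == 0:
            break
        i = random.randrange(len(img))
        old, want = img[i], target[i]
        img[i] = random.randrange(192, 256) if want else random.randrange(0, 64)
        if hamming(average_hash(img), target) > dist:
            img[i] = old   # revert changes that make things worse

    print("hash matches target:", average_hash(img) == target)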
This seems dumb. I'm sure that sophisticated bad people will just alter colors and things to defeat the hashes and meanwhile trolls will generate collisions to cause people to falsely be flagged.
The perceptual hashes are specifically designed to correlate images that are visually similar while not being exactly alike, even when the colors are altered. See:
Yes, using cryptographic hashes would prevent fuzzy matching, and the algorithms that allow for efficient fuzzy matching require comparing the bare hashes.
what is the hashing scheme? I assume it must not be a cryptographically secure hashing scheme if it's possible to find a collision. It's not something like sha256?
At this point, I think phones can be compared to a home in terms of privacy.
In your house, you might have private documents, do some things you don't want other people to have or see just like what we have on our phones nowadays.
The analogy I'm trying to make is that if the government suddenly decided to install cameras in every house, on the premise of making sure no pedophile is abusing a child, with the cameras never sending data unless the locally-run AI detects something, that is something I believe would shock everyone.
> At this point, I think phones can be compared to a home in terms of privacy.
unfortunately the law hasn't really kept up with technology. Let's hope this gets in front of a judge who's able to extrapolate some 'digital' rights from the (outdated) constitution. Unless of course they also 'think of the children'.
its a good analogy that's useful for a lot of things. if someone was standing across the road from your house with a telescope, writing down every tv show or movie you watched, i think most people would be very angry about that. but when people hear they are being profiled online in the same way they are not bothered at all.
it doesn't help that most things online are very abstract, with terms like 'the cloud' making things even harder to understand, which in reality is just someone else's computer
wow. in the middle of reading that, i realized that this is a watershed moment. why would apple go back on their painstakingly crafted image and reputation of being staunchly pro privacy? its not for the sake of the children (lol). no, something happened that has changed the equation for apple. some kind of decisive shift has occurred. maybe apple has finally caved in to the chinese market, like everyone else in the US, and is now making their devices compatible with chinese surveillance. or maybe the US government has finally managed to force apple to crack open its shell of encryption in the name of a western flavored surveillance. but either way, i think it is a watershed moment because securing privacy will from this moment onward be a fringe occupation in the west. unless a competitor rises up -- but thats impossible because there arent enough people who care about privacy to sustain a privacy company. thats the real reason why privacy has died today.
if you really want to save the children, why not build the scanning into safari? scan the whole phone! just scan it all. its really no different than what they are doing. its not like they would have to cross the rubicon to do it, not anymore anyway.
and also i think its interesting how kids will adjust to this. i think a lot of kids wont hear about this and will find themselves caught up in a child porn case.
im so proud of the responses that people seem to generally have. it makes me feel confident in the future of the world.
isnt there some device to encrypt and decrypt messages with a separate device that couples to your phone? like a device fit into a case and that has a keyboard interface built into a screen protector with indium oxide electrodes.
If I go on 4chan and an illegal image loads and caches into my phone before moderators take it down or I hit the back button, will Apple’s automated system ruin my life?
This kind of stuff absolutely petrifies me because I’m so scared of getting accidentally scooped up for something completely unintentional. And I do not trust police one bit to behave like intelligent adult humans.
Right now I feel like I need to stop doing ANYTHING that goes anywhere outside the velvet ropes of the modern commercial internet. That is, anywhere that cannot pay to moderate everything well enough that I don’t run the risk of having my entire life ruined because some #%^*ing algorithm picks up on some content I didn’t even choose to download.
> If I go on 4chan and an illegal image loads and caches into my phone before moderators take it down or I hit the back button, will Apple’s automated system ruin my life?
No, only if you save multiple CSAM images to your photo library and have iCloud Photo Library turned on.
I don’t think it’s as trivial as you suggest, but yeah they could do that – they always could have made changes to flag crime or give access to criminal’s devices but they still refuse to do so.
The barrier for being accused by someone who matters is quite high: you'd need to breach a certain threshold of material. That material is then manually reviewed by a human, and only then, if it appears to be CSAM, do they refer your details to the police. The police would then presumably also check the material is bad before arresting you and seizing all your devices.
for 4chan maybe that's true, but I'm not sure about some public WhatsApp group. I have been part of a few public hiking/travelling groups, and even though I have most of them muted (to avoid distraction), all pictures end up in my Photos 'Recent' album.
My two cents: I get the impression this is related to NSO pegasus software. So once the Israeli firms leaks were made public Appple had to respond and has patched some security holes that were exposed publicly.
NSO used exploits in iMessage to enable them to grab photos, texts among other things.
Now shortly after Apple security patches we see them pivot and now want to “work” with law enforcement. Hmmm almost like once access was closed Apple needs a way to justify “opening” access to devices.
Yes I realize this could be a stretch based on the info. Just seems like an interesting coincidence… back door exposed and closed…. now it’s back open… almost like governments demand access
I guess it doesn't matter; the smartphone is a tracking device by definition. They can track your movement with a dumb phone too, but there are many more possibilities in a device with recording capabilities and an internet connection. In Orwell's '1984' they mandated the installation of a telescreen tracking device; now there is one in every pocket, and so it goes that we traded privacy for convenience. It's a bit of an irony that Apple started with the Big Brother commercial and ended up bringing us the telescreen. https://www.youtube.com/watch?v=zIE-5hg7FoA The mere opportunity for an exploit seems to create the reality of using that exploit, as if it will then be used for its intended purpose.
> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching [...]
It's incredible that Apple arrived at the conclusion that client-side scanning that you cannot prevent is more private than cloud-scanning.
Since they claim they're only scanning iCloud content, why not scan in the cloud?
They decided the most private way is to scan iCloud content before it's uploaded to the cloud... Because if they scanned in the cloud it would be seen as a breach of privacy and would be bad optics for a privacy-focused company? But scanning on the physical device that they have described as "personal" and "intimate" has better optics? That's amazing.
This decision can only be read as Apple paving the way to scanning all content on the device, to bypass the pesky "Backup to iCloud" options being turned off.
> Since they claim they're only scanning iCloud content, why not scan in the cloud?
Because (I suspect) this is a precursor to E2EE encrypted iCloud Photos. Apple cannot plausibly claim it does not store malicious E2EE content on its servers without some kind of filter upon upload. This is that filter. Other services, including the current implementation of iCloud Photos, skate by because they do not allow E2EE photos.
I'm looking forward to this platform being expanded to facially ID against more databases such as criminals, political dissenters, or anyone with an undesirable opinion so that SWAT teams can barge into the homes of false positive identifications to murder them and their dogs.
One disappointing development from a larger perspective is that many privacy-preserving technologies (multi-party computing, homomorphic encryption, hardware enclaves, etc) are actually getting used to build tools that undermine once-airtight privacy guarantees. E2E starts to become… whatever this is.
A more recent example is how private set intersection became an easy way to get contact tracing tech everywhere while maintaining an often perfunctory notion of privacy.
I wonder where large companies will take this next. It behooves us cryptography/security people who actually care about not walking down this slippery slope to fight back with tech of our own.
This whole thing also somewhat parallels the previous uses of better symmetric encryption and enclaves technologies for DRM and copyright protection.
> As far as this is concerned, seems like if you don’t use iMessage or iCloud you’re safe for now.
Yes, this is correct. The Messages feature only applies to children under 18 who are in an iCloud Family, and the photo library feature only applies if you are using iCloud Photos.
Oh come on, you really think that's their big plan? Announcing the scanning software in public and then abusing it? If they wanted to do illegal spying, they would do it properly, and without a second Snowden you would never hear about it.
I don't think it has anything to do with age. It has everything to do with you adding the phone to your family under settings and declaring that it belongs to a child. You control the definition of child.
I could imagine an abusive partner enabling this to make sure their partner isn’t sexting other people. Given the pushback for AirTags I’m surprised people aren’t more concerned.
Anyone 13 or older can remove themselves from a family sharing group. The only exception is if screen time is enabled and enforced for their device.
Frankly, if you have an abusive partner with physical control over you and a willingness to do this, the fact that Apple supports this technology is the least of your problems.
I’m not sure I’m misunderstanding. This is another feature that allows someone with access to another person’s phone to enable stalkerware like features.
If you read the article, you'd understand that among ALL the issues, this is not one:
- Photos scanning in Messages is on-device only (no reporting to govt.) and doesn't turn on unless you're an adult who turns it on for a minor via Family Sharing controls.
- iCloud Photos scanning doesn't take effect unless you save the photo and it's already in a database of flagged photos. So in your scenario, you'd have to save the photo received from the unknown number to get flagged.
I'm confused - the article explicitly states this scenario - minus the swatting.
I.e., unless you're replying purely to the swatting part, the article seems to support this. Specifically, a prediction that governments will creep toward legally requiring Apple to push custom classifiers:
> Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.
That sentence is wrong. It simply isn't accurate of the current system. It relies on future changes to the system, not just changes to a database.
The iMessage feature is not a database comparison system; it's meant to keep kids from unexpectedly receiving (or sending) nudes – and it works by classifying those images.
I don't dispute this is a slippery slope - one could imagine that a government requires Apple to modify its classification system. However, that would presumably require a software update, since it happens on device.
That refers to the icloud scanning, the idea being that if the hash database contains propaganda, people uploading that propaganda to icloud could get reported by their own device.
Didn’t Apple also announce a feature for iOS 15 where iMessage photos are automatically collected and shown in iCloud? A way to reduce the hassle of creating shared albums. With that, I think all users of iCloud Photos are at risk here.
Y'know, I have no idea what I'd do in this situation and I really hope I'll never find out.
If a kilo of heroin just showed up in the back seat of my car, I'd throw it out the window and try not to think about it. I certainly wouldn't bring it to the police, because mere possession is a serious crime.
CP is the same way, except it comes with a nice audit trail which could sink me even if I delete it immediately. Do I risk that, or do I risk the FBI deciding I'm a Person of Interest because I reported the incident in good faith?
> Once a certain number of photos are detected, the photos in question will be sent to human reviewers within Apple, who determine that the photos are in fact part of the CSAM database. If confirmed by the human reviewer, those photos will be sent to NCMEC, and the user’s account disabled.
Chilling. Why have human reviewers, unless false positives are bound to happen (which is a certainty given the aggregate number of photos to be scanned)?
So, in effect, Apple has hired human reviewers to police your photos once an algorithm has flagged them. Whether you knowingly consent or not (through some fine print), you are being subjected to a search without probable cause.
(1) I'm a bit frustrated, as a true Apple "bitch", at the irony here. As a loyal consumer, I am (likely) never going to be privileged enough to know exactly which part of Apple's budget allowed for this implementation to occur. I can only assume that such data would speak volumes as to _why_ the decision to introduce CSAM detection this way has come to light.
(2) I'm equally intrigued by the paradox that in order for the algorithms that perform the CSAM detection to work, it must require some data set that represents these reprehensible images (which are illegal to possess).
> these notifications give the sense that Apple is watching over the user’s shoulder—and in the case of under-13s, that’s essentially what Apple has given parents the ability to do.
Well, yes? Parents are already legally responsible for their young children, who are under their supervision. The alternative would be to not even give such young children these kinds of devices to begin with - which might actually be preferable.
> this system will give parents who do not have the best interests of their children in mind one more way to monitor and control them
True. But the ability to send or receive explicit images would most likely not be the biggest issue they would be facing.
I understand the slippery slope argument the EFF is making, but they should keep to the government angle. Having the ability for governments to deploy specific machine learning classifiers is not a good thing.
I’m very concerned that a bunch of false positives will send people’s nudes to Apple for manual review. I don’t trust apple’s on device ML for something this sensitive. I also can’t imagine that Apple will now not be forced to implement government forced filtering and reporting on iMessage. And this will likely affect others like WhatsApp because now governments know that there is a way to do this on E2E.
What are some other fully encrypted photo options out there?
Lately, I've been on the fence about open source software, and I've been tempted by proprietary programs. Mainly because FOSS is much less polished than commercial closed-source software, and I care about polish. I even contemplated buying an Apple M1 at some point.
But now I'm reminded of how fucking awful and hostile Apple and other companies can be. I'm once again 100% convinced that free software is the only way to go, even if I have to endure using software with ugly UIs and bad UX. It will be worth it just not to have to use software written by these assholes.
Apple is part of the power structure of the US. That means that it has a hand in shaping the agenda for the US but with that power comes the responsibility to carry out the agenda.
This also means that it is shielded from attack by the power structure. That is the bargain that the tech industry has struck.
The agenda is always towards increasing power for the power structure. One form of power is information. That means that Apple is inexorably drawn towards increasing surveillance. Also, Apple’s massive customer base both domestic and overseas is a juicy surveillance target.
And if you don’t believe me, ask yourself who holds the keys to iCloud data for both foreign and domestic customers. Ask Apple if it has ever provided data for a foreign customer to the US government. What do you think GDPR is for?
Hint: it isn’t end to end encrypted, Apple doesn’t need your password to read the information, and you will never know
Who the frack would design a system that way and why?
The die was cast with the 2020 elections when Apple decided to get into the fray. Much of tech also got into the fray. Once they openly decided to use their power, they couldn’t get back out.
I left Apple behind years ago after using their gear for more than a decade. I recently received a new M1 laptop from work and liked it quite a bit. It's fast, it's quiet, it doesn't get hot. I liked it so much, that I was prepared to go back full Apple for a while. I was briefly reviewing a new iPhone, a M1 mini as a build server, a display, and several accessories to go along with a new M1 laptop for myself. (I don't like to mix work and personal)
Then this news broke. Apple, you just lost several thousand dollars in sales from me. I had items in cart and was pricing everything out when I found this news. I will spend my money elsewhere. This is a horrendous blunder. I will not volunteer myself up to police states by using your gear now or ever again in the future. I've even inquired about returning the work laptop in exchange for a Dell.
Unsafe at any speed. Stallman was right. etc etc etc.
Imagine if the government said they were installing a backdoor in your checking account to 'anonymously' analyze your expenses and payees, 'just to check for known illegal activity'. Every time you use your debit card or pay a bill, the government analyzes it to see if it's 'safe'.
Everyone knows CC transactions are completely shared with government, IRS, bank, credit score companies, etc. Not even close to what's being done here.
I think it's easy to say no to any solution, but harder to say "this is bad, but we should do this instead to solve the problem".
In a world with ubiquitous/distributed communication, the ideas that come up would generally avoid direct interception but need some way to identify a malicious transaction.
When saying no to ideas like this, we should at the same time attempt to also share our thoughts on what would be an acceptable alternative solution.
I think everyone is offended by the scanning being done on device instead of on their servers (which, quite frankly, I had assumed they already did; Google Photos and others already do), and by that being sold as privacy-forward.
Considering they hold the keys and the scheme already allows them to decrypt the user's photos as a last step, this is not exactly progress. It just maintains the illusion that those backups are encrypted while they (ultimately) aren't.
I've personally (and some may disagree) always assumed that anything you put in any cloud (and that includes the very convenient iCloud backups that I use) is fair game for local authorities, whether that's true in practice or not.
Putting a "snitch" on device, even if it's only for content that's going to the cloud (and in the case of an iCloud backup, doesn't that mean all your iPhone content ?) is the part that goes a step too far and will lead to laws in other countries asking for even more.
Once you've opened the door to on device scanning, why limit it to data that goes to iCloud ? Why limit it to photos ? They proved they have the "tech" and governments around the world will ask for it to be bent to their needs.
I'm sure the intent was well meaning but I'd much rather they just do this on their premises and not try to pretend they do this for privacy.
Imagine someone was hired to reduce the problem of child trafficking/exploitation, and are the head of this group at the justice dept.
Lets say they have the option to work with private orgs that may have solutions that could walk a fine line between privacy and their dept goals.
I'm interested in knowing your perspective on how one should approach achieving these goals.
Quite frankly I'm not sure this should be the role of one employee of a justice department, so I'm not sure how to answer this.
At the end of the day I think that any such effort should be legislated, in any jurisdiction, and not just rely on the goodwill that a government (or one of its employees) can garner from private orgs.
As to what legislation should look like, this is a complex issue; many countries are already making various forms of interception legal (mostly around terrorism).
Should a law clearly mandating that any cloud provider scan their users' content through some technology like Microsoft's PhotoDNA be passed by various local governments? I'd much rather see that, personally.
Again, my opposition to this is centered around the fact that Apple is doing this on device and selling this as, as you put it, walking a fine line with their privacy stance.
While it may have been their original intent, I believe they have opened a door for legislators around the world to ask this technology be extended for other purposes, and given time, they absolutely will.
You didn't ask but what I wish Apple did was what everyone else did : scan on their servers, because they can, and not do it on device to keep the illusion that they can't access your photos.
It's not that I don't want to give you a sunny solution which makes the problem go away forever, but this is an extremely difficult problem to solve, especially as someone might be located in some foreign country with ineffective law enforcement.
Facebook has been making it harder for random strangers to contact people under a certain age, so that may well help, and we'll see if it does. And we could probably teach teenagers how to remain safe on the internet, and give the support needed to not be too emotionally reliant on the internet. That might get you part of the way.
You could run TV advertisements to raise awareness about how abuse is harmful to try to dissuade people from doing it, but that might make the general public more scared of it (the chances their family specifically will be affected has to be remote), and more inclined to "regulate" their way out of the problem.
You could try to take more children away from their families on the off-chance they may have been abused, but what if you make the wrong call? That could be traumatizing to them.
You could go down the road of artificial child porn to compete with child porn, and robots which look like children, but I don't think the predators specifically are interested in those, are they? And that comes with some serious ethical issues, and is politically impossible.
We can't just profile "whoever looks suspicious" on the street, because people who are mentally ill tend to behave erratically, only have a slightly high chance of being guilty, but have a dramatically high chance of being harassed by police.
If we can get out of the covid pandemic, this may help. Child abuse is said to have risen by a factor of 4 during the lockdowns, and all those other things which were put in place to contain the virus. It's possible that stress from the pandemic, and perhaps, opportunities to commit a crime may have contributed to this. But, this is an international problem, even if the pandemic were to vanish in the U.S., it may still exist overseas.
Apple is not a dumb company; they did this fully aware of the backlash they would receive, very likely impacting their bottom line. Two scenarios come to mind:
1. They expect most people will shrug and let themselves be scanned. That is, this privacy invasion will result in minimal damage to the Apple brand, or
2. They know privacy-savvy people will from now on put them in the same league as Android, and they are prepared to take the financial loss.
Scenario 1 is the most plausible, though it hints at an impish lack of consideration for their customers.
Scenario 2 worries me most. No smart company does something financially counter-productive unless under dire pressure. What could possibly make Apple shoot itself in the foot and announce it publicly? In other words, Apple's actions, from my perspective, look like a dead canary.
Apple's iPhone revenue in China just doubled from last year -- now 17 billion. That's not a small number. The play against Huawei has done its job, apparently -- it's been quite mortally injured.
For sure the CCP would love to scan everyone's phones for files or images it finds troubling and for sure every country will eventually be allowed to have its own entries in this database or even their own custom DB.
So my cynical side says... Apple just sold out. MASSIVELY. The losers -- pretty much everyone who buys their phones.
The CCP can already scan everything server side – iCloud encryption is weaker in China and the servers are controlled by a different entity than Apple. Getting iPhones to scan for illicit content doesn’t help the CCP.
If I keep all my data sequestered on my phone -- which I'm bound to do if I am privacy conscious -- then obviously scanning the phone benefits the CCP.
I keep thinking, It's like they are trying to be the most ironic company in history...
But then I have to remind myself, the old Apple is long gone, the new Apple is a completely different beast, with a very different concept of what it is marketing.
It's the RDF. People still think of Apple as the Old Apple. The rebellious company that stood for creative freedom. The maker of tools that work for the user, not against the user.
This is going to do wonders for Apple's marketshare once the teenagers realize that Apple is going to be turning them in to the police.
Teens are not stupid. They'll eventually clue-in that big brother is watching and won't appreciate it. They'll start by using other messengers instead of imessage and then eventually leaving the ecosystem for Android or whatever else comes down the pike in the future.
Now that we know iPhones have the ability to perform frame-level, on-device PhotoDNA hashing of videos and photos, could the same infrastructure be used to identify media files which are attempting to exploit the long list of buffer overflows that Apple has patched in their image libraries, as recently as 14.7.1?
This would be super useful for iPhone security, e.g. incoming files could be scanned for attempting to use (closed) exploits, when the user can easily associate a malicious media file with the message sender or origin app/site.
On jailbroken devices (e.g. iPhone 7 and earlier with unpatchable boot ROMs), is there a Metasploit equivalent for iOS, which aggregates PoCs for public exploits?
A related question: will PhotoDNA hashing take place continuously or in batch, e.g. overnight? How will the associated battery/power usage be accounted for, e.g. attributed to generic "System" components or itemized separately? If the former, does that create class-action legal exposure for a post-sale change in device "fitness for purpose"?
Are child porn viewers actually going to use iCloud backup? That seems like even the stupidest person would know not to do that.
So I'll propose an alternative theory: Apple is doing this not to actually catch any child pornographers, but to ensure that any CP won't actually reach their servers. Less public good, more self-serving.
I wish there was a privacytools.io for hardware. I've been an iPhone user since the beginning but now I'm interested in alternatives. Last I checked, PinePhone was still being actively developed. Are there any decent phones that strike a balance between privacy and usability?
That too! It's restricted to Pixel devices though, and (I'm not 100% sure on this, but it at least doesn't include it) doesn't support things like MicroG, which is a must for getting some apps that rely on Play Services to work correctly. I really think Graphene is only good for hardcore privacy and security enthusiasts, or for situations that actually require the security. I guess it just depends on how much convenience you want to sacrifice.
Serious question- how can anyone know these operating systems are truly secure? Is there a way to test the source code? From a code perspective could Google have placed a back door in Android to access these forks?
You can compile it from the source code yourself if you want. Realistically speaking there may be a backdoor in closed-source Google Play Services, but not in the open-source AOSP project.
For years now congressmen have said stuff along the lines of "exceptional access to encrypted content for law enforcement" ie a backdoor. This is Apple pre-empting any more litigation like Australia, Germany and Ireland's recent privacy violating laws so that governments can just ask Apple to add XYZ prohibited content to their client-side scanner.
I'm not sure what the point is; in this day and age, I'm pretty sure that if your 14-year-old wants to send a nude picture and has really already reached that decision, they will do it.
The only practical barrier here is that their parents have educated them and their mental model arrives on its own at "no, this is a very bad idea" instead of "yes, I want to send this pic". Anything else, including petty prohibitions from their parents, will not be a decision factor in most cases. Have we forgotten what it was like to be a teenager?
(I mean people, both underage and criminals, will just learn to avoid apple and use other channels)
That’s not what this does. Articles aren’t communicating the details well. There’s a set of known photos of kids going around, and they are looking for those specific photos. It’s hash-based checking.
Yeah I see the detail about matching hashes with well known images from a database... but what triggered my comment is this other function that is mentioned:
> The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent
Which seems to be a feature that would allow parents to fix with prohibitions what they didn't achieve with education.
Thats the first feature, the second is, from the article: “The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.”
Wait until a corrupt government starts forcing Apple or Microsoft to scan for leaked documents exposing them and then automatically notifying them. Just one of the many ways this could go wrong in the future.
"”Apple sells iPhones without FaceTime in Saudi Arabia, because local regulation prohibits encrypted phone calls. That's just one example of many where Apple's bent to local pressure. What happens when local regulations in Saudi Arabia mandate that messages be scanned not for child sexual abuse, but for homosexuality or for offenses against the monarchy?”"
Good question. Companies have to follow laws. The naive, early 2000s notion that the internet was unstoppable and ungovernable was mistaken. Apple, Google and the other internet bottlenecks were, it turned out, the pathway to a governable internet. That fight is lost.
Now that it's governable, attention needs to be on those governing... governments, parliaments, etc.
The old version of freedom of speech and such didn't come from the divine. They were created and codified and now we have them. We need to do that again. Declare new, big, hairy freedoms that come with a cost that we have agreed to pay.
There are dichotomies here, and if we deal with them one droplet at a time, they'll be compromised away. "Keep your private messages private" and "Prevent child pornography and terrorism in private messages" are incompatible. But, no one is going to admit that they are choosing between them... not unless there's an absolut-ish principle to defer to.
Once you're scanning email for ad targeting, it's hard to justify not scanning it for child abuse.
This is exactly the event that I’ve been preparing for. I figured out long ago that it’s not a matter of if, but when, Apple fully embraces the surveillance economy. This seems to be a strong step in that direction. As dependent as I’ve been on the Apple ecosystem, I’ve been actively adopting open source solutions in place of the Apple incumbents so that when I have to fully pull the plug, I can at least soften the blow.
In place of Mail: Tutanota
In place of iMessage: Signal
And so on…
Maybe; I haven't seen any numbers. (I've seen several cases of email or cloud providers IDing specific content and tipping off law enforcement, but not aggregate stats.)
> Most of the major cases involve seizures of offline hardrives.
Are most cases major cases? Are even most of the individuals caught, caught in major cases? (I doubt it; the number of publicized major cases, the number claimed caught in each, and the total number of cases don't seem to line up with that.)
And even for the major cases, how do they get the initial leads that they work back to?
It will take a matter of days for other parties, including copyright holders, to get in on this action, if they haven't already. The infrastructure will then be compromised by human intelligence so that intelligence agencies can use it to find people hitting red-flag words like Snowden and WikiLeaks. But let's be real for a moment: anyone who thinks Apple cares about security or privacy over profits is in some way kidding themselves.
> This means that when the features are rolled out, a version of the NCMEC CSAM database will be uploaded onto every single iPhone.
Question - if most people literally don't want to have anything to do with CP, isn't uploading a hash database of that material onto their phones precisely that?
For once I think I will feel disgusted walking around with my phone in a pocket; a phone that is full of hashes of child porn. That's a terrible feeling.
Because then they wouldn't be able to hook these children in like junkies as easily.
Having a hard time buying this is about "the kids" or children in any way, shape or form. This is typical erosion of privacy under a worn out flag, just more emotional manipulation.
Have you seen what smartphones have done to people, especially children? Apple, Google, Facebook, Twitter, the whole lot of them. They are out to destroy children, not save them. If they thought they could "generate" 1 more dollar in "value" they'd be selling these abhorrent images to the highest bidder.
“The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.”
How did they calculate this? Also, I can imagine more than a trillion photos being uploaded to iCloud a year.
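Presumably the figure comes from combining an estimated per-image false-match rate with the match threshold: a single false match doesn't flag an account, so the per-account probability falls off very fast with the threshold. Here's a back-of-the-envelope sketch in Python using a Poisson approximation; the per-image rate, photo count and threshold below are made-up placeholders for illustration, not Apple's published parameters:

    from math import exp, factorial

    def p_account_flagged(p_image: float, photos_per_year: int, threshold: int,
                          tail_terms: int = 60) -> float:
        """Poisson approximation of the chance that `threshold` or more of a
        user's photos each falsely match the hash database in one year,
        assuming independent false matches at rate `p_image` per photo."""
        lam = p_image * photos_per_year   # expected number of false matches per year
        return sum(exp(-lam) * lam**k / factorial(k)
                   for k in range(threshold, threshold + tail_terms))

    # Hypothetical placeholder numbers -- not Apple's figures.
    print(p_account_flagged(p_image=1e-6, photos_per_year=5_000, threshold=30))

With numbers like these the per-account probability comes out astronomically small even though the per-image rate isn't, which is presumably how a "one in one trillion per account per year" style claim can coexist with trillions of photos being uploaded.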
If Apple broadcasts their surveillance strategy so publicly, wouldn't criminals stop using Apple products and delete their iCloud data immediately? Who will be left to "catch" at that point? The most incompetent criminals?
I'm missing how this will actually work if perpetrators knew Apple was going to analyze their data beforehand. Could someone explain?
The article spends time on the implications for kids messaging other kids. Though I think parents as a group might tend to lean more towards wanting that snooping going on.
Separate from kids, I wonder whether Apple is shooting itself in the foot with teens.
Teens should start caring about privacy around then, are very peer/fashion-sensitive, and have shown that they'll readily abandon platforms. Many parents/teachers/others still want to be treating teens as children under their power, but teens have significant OPSEC motivation and ability.
Personally, I'd love to see genuinely good privacy&security products rushing to serve the huge market of a newly clueful late-teen generation. The cluefulness seems like it would be good for society, and market forces mean the rest of us then might also be able to buy products that aren't ridiculously insecure and invasive.
I have a question, does this mean that Apple will have a way to decrypt photos in iCloud?
It seems this can then be a security risk, since Apple could be breached and they'd have the means to server side decrypt things.
If it was simply that client side end to end encryption can be turned on/off based on if the account is a child account or not (or as a configuration for parental control) that be different.
As just a config, then I mean the slippery slope always existed, Apple could always just be forced into changing the settings of what gets end to end encrypted and when.
But if this means that all photos are sent unencrypted to Apple at some point, or sent to Apple in a way they can decrypt, then it does open the door to your photos not being securely stored and attackers being able to steal them. That seems a bit of an issue.
I hate to break it to you but Apple backtracked from their plan to e2e encrypt iCloud backups. Allegedly after being pressured by FBI: https://www.bbc.com/news/technology-51207744
They have the encryption key that allows them to read their customer data.
On-device data scanning, however well-intended it may be, is an invasion of privacy. Server-side scanning is an entirely different matter, because it is an optional service which may come with any clauses its provider deems necessary.
I understand that it doesn't scan everything, but it doesn't matter. What matters is that there's an implemented technical capability to run scans against an external fingerprint database. It's a tool which may be used for many purposes.
I hope some countries will prohibit Apple doing that. Germany with its strict anti-snooping laws comes to mind. Maybe Japan. The more, the better.
Oh, and by the way, every tech-savvy sex predator now knows what they should avoid doing. As always with mass privacy invasions: criminals are the last to suffer from it.
So we have a person who is technical enough to find known CP -- the stuff that's already automatically filtered out by Google and co., because those same hashes are already checked against all the images they index. So knowledge of the dark web should be assumed, something I don't even know how to use, let alone how to find the filth on there.
Yet....dumb enough to upload it unencrypted to iCloud instead of storing it in a strongly encrypted folder on their PC?
The two circles in this diagram have a very thin overlap I think.
Dumb move by Apple; privacy is either 100% private or not private at all.
Unless somebody can enlighten me that like 23% of all investigated pedophiles that had an iPhone seized had unencrypted CP on their iCloud accounts? I am willing to be proven wrong here.
For me, this crosses a line. There should be no need to "strike a balance" with authorities wanting what are essentially unwarranted searches. The right balance is, "fuck off".
I'm looking into privacy phones for the first time and will be switching.
_Hashes_ of photos will be scanned for _known_ abusive material, client side.
So the only thing Apple can find out about you is if you have some of these known and catalogued images. They will definitely not know if you have other nude photos, including of your children.
The other, separate feature is a parental control feature. You as a parent can be told if your children send or receive nude photos. This obviously sacrifices some of their privacy, but that is what parenting is. It's no more intrusive than Screen Time, or any number of things you might do as a parent to make sure your children are safe.
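For anyone who wants to see what "matching hashes of known images" looks like mechanically, here is a minimal perceptual-hash sketch in Python. It uses a simple difference hash (dHash) and a Hamming-distance check, which is not Apple's NeuralHash and not their protocol; the database values and helper names are placeholders. The point is only that the client compares fingerprints against a fixed list of known images rather than classifying what a photo depicts:

    # Minimal perceptual-hash sketch (dHash) -- illustrative only, not NeuralHash.
    from PIL import Image

    def dhash(path: str, size: int = 8) -> int:
        """64-bit perceptual hash: compare the brightness of adjacent pixels
        in a downscaled grayscale version of the image."""
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    # Placeholder database of fingerprints of known images.
    known_hashes = {0x8F3C2A91D4E077B2}

    def matches_known(path: str, max_distance: int = 4) -> bool:
        h = dhash(path)
        return any(hamming(h, k) <= max_distance for k in known_hashes)

Near-duplicates of a known image land within a few bits of each other and still match, while an unrelated photo (of your own children or anything else) tells the system nothing beyond "no match".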
>"The (unauditable) database of processed CSAM images will be distributed in the operating system (OS), the processed images transformed so that users cannot see what the image is, and matching done on those transformed images using private set intersection where the device will not know whether a match has been found"
Am I reading this correctly in that Apple will essentially be pushing out contraband images to user's phones? Couldn't the existence of these images on a user's phone potentially have consequences and potentially be used against an unwitting iPhone user?
I think the issue is that what the tech community sees as privacy is different than what the general public thinks of as privacy.
Apple, very astutely, understands that difference and exploited the latter to differentiate its phones from its main competitor: cheap(er) android phones.
Apple didn’t want the phones to be commoditized, like personal computers before it. And “privacy” is something that you can’t commoditize. Once you own that association, it is hard to fight against it.
Apple also understands that the general public will support its anti child exploitation and the public will not see this as a violation of privacy.
Gonna get downvoted for this; I may be one of the few that supports this, and I hope they catch these child exploiters by the boatload, save 1000s of kids from traffickers, and jail their asses.
Advancement of the surveillance state is especially terrifying after this past summer of police abuse. We already know that in our country people in power abuse their authority and nothing happens (unless international protests prompt an action). This just collects more power under the disgusting guise of "won't somebody think of the children" while calling the people opposed pedophile supporters.
Does anybody have recommendations on what to do to help oppose this instead of just feeling helpless?
I think this is probably the reasonable and responsible thing for Apple to do as a company, even if it goes against their privacy ethos. Honestly, they have probably been advised by their own lawyers that this is the only way to cover themselves and protect shareholder value.
The question will be if Apple will bend to requests to leverage this for other reasons less noble than the protection of children. Apple has a lot of power to say no right now, but they might not always have that power in the future.
The privacy creep usually happens by building narratives around CSAM. Yes, agreed it was objectionable, but there was no "scientific analysis" that such measures would prevent dissemination in the first place.
Surveillance is morally discreditable, and Apple seems to have tested the waters well - by building a privacy narrative and then screwing the users in the process. Most users believe it is "good for them". Though, it remains the most restrictive system.
I honestly fail to see how the "oppressive regimes could just turn the on-device scanning into a state surveillance tool" argument is not a slippery slope argument, when on-device scanning and classification (neural networks for image processing and classification) have been going on for years on iOS devices.
It just seems very paradoxical to be using a cloud-based photo and/or unencrypted backup service and then worry about one's privacy being at risk.
I guess Apple has given up on Apple Pay and becoming a bank. Without that as motivation for security this is probably the first of many compromises to come.
I think it's becoming very apparent that, through apathy, indoctrination, and fear, freedom will be well and truly stamped out.
You just have to say it's for the greater good and you can get away with anything. Over the last year and a half so many have been desensitised to overbearing collectivism that at this stage I think governments and their Big Corp lackeys could get away with just about anything now.
I fully support this. History has shown us that humanity and especially their governments are very well equipped to deal with near godlike power of surveillance. There are basically no examples of this power being abused through all of history. Maybe a couple of bad apples. We should really look into how this can be expanded. Imagine if crime could be stopped before it starts.
Nobody is talking about the performance implications to the photos and messages app. All these image hashes and private set intersection operations are going to eat CPU and battery life.
This is the downside to upgrading your iOS version. Once you update, it's not like you can go back, either. You're stuck with a slower, more power-hungry phone for the life of the phone.
Who ordered Apple to do this, "or else?" What was the "or else?" How easy will it be to expand this capability by Apple or anyone outside of Apple?
I expect that any time you take a photo, the scan will be performed right away, and the results file will be waiting to be sent the next time you enable voice and data.
This capability crushes the trustworthiness of the devices.
Does Apple really think those bastards share their disgusting content via iCloud or message each other via iMessage? Even if some idiots did, they'll have stopped by now. So even if Apple has purely good intentions, it'll be pretty useless, and Apple didn't even need to start with these kinds of questionable practices.
All of my hardware is outdated, so I was about to make the jump to Apple across the board. Now I’m probably going to dive into the deep end and go FOSS full throttle. I’m going to investigate Linux OEM vendors tonight. The only one that I know of is System76. Are there any Linux-based iPad competitors?
Thinkpads also run Linux very well. I've got an X1 Carbon 7th gen running Pop!_OS, and everything on the machine works, including the fancy microphones on top (haven't tried the fingerprint reader though).
When you upload any build to the App Store, before you can have it in TestFlight or submit it for release, you have to fill out this questionnaire asking "does your app use encryption?" If you say yes, you're basically fucked; good luck releasing it. You have to say no, as far as I'm aware.
Bad Apple. Today it is something socially unacceptable - child exploitation. The reason it is used as the justification is plainly obvious. What will be the next socially unacceptable target? Guess it depends on who the ruling class is. Very disappointed in this company’s decision.
Can the legions of Apple Apologists on this forum at least agree that all the talk about how well the iPhone supports individual privacy is just a bunch of bald-faced lies?
I mean they use the privacy argument to avoid side-loading apps, lol. But scanning your photos is OK.
Technically speaking, if Apple plans to perform PSI on device (as opposed to what Microsoft does), how come that "the device will not know whether a match has been found"?
Is there anyone who's familiar with the technology so they can explain how it works?
But the claim is that Apple does that "on device". To the best of my understanding, this would mean that both parties in the PSI protocol are "on the same device". Do they perhaps use some kind of TEE (Trusted Execution Environment) to evaluate the "other side" of the PSI protocol?
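The underlying primitive is easier to see in a toy example. Below is a minimal Diffie-Hellman-style PSI sketch in Python: each side blinds hashed items with its own secret key, and only doubly-blinded values are compared, so neither side sees the other's raw set. This is just the textbook idea; as I understand Apple's technical summary, their actual design is a threshold variant where the device attaches an encrypted "safety voucher" to each upload and only the server, past the match threshold, learns anything, so the device itself never sees the comparison result. The modulus and item names below are placeholder assumptions:

    # Toy Diffie-Hellman-style private set intersection -- illustrative only.
    # Real PSI uses vetted elliptic-curve groups and constant-time code.
    import hashlib
    import secrets

    P = 2**255 - 19  # placeholder prime modulus, not a vetted DH group

    def h2g(item: bytes) -> int:
        """Hash an item into the multiplicative group (simplified hash-to-group)."""
        return int.from_bytes(hashlib.sha256(item).digest(), "big") % P

    class Party:
        def __init__(self, items):
            self.key = secrets.randbelow(P - 2) + 1
            self.items = items

        def blind_own(self):
            return [pow(h2g(x), self.key, P) for x in self.items]

        def blind_other(self, blinded):
            return [pow(v, self.key, P) for v in blinded]

    device = Party([b"photo-fingerprint-1", b"photo-fingerprint-2"])
    server = Party([b"photo-fingerprint-2", b"photo-fingerprint-3"])

    # Each item ends up as H(x)^(ab); equal items give equal doubly-blinded values.
    device_double = server.blind_other(device.blind_own())
    server_double = device.blind_other(server.blind_own())
    print(len(set(device_double) & set(server_double)), "item(s) in the intersection")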
Apple scanning for law enforcement in one country gives a proof of concept for another country to ask for the same under its own laws. And a big enough market can easily arm-twist Apple into complying, as $$ means more than all the privacy they talk about.
I get the concern, but "Corporation X can be compromised by the State, which is evil" is not a problem with the corporation. It's a problem with your civilization.
If you don't trust the rule of law, Apple can't fix that for you.
The FBI doesn't even have the resources to review all the reports they do get (we learned that in 2019), and yet they want to intrude on everyone's rights to get even more to investigate (which they won't).
What’s to stop a malicious person from sending a prohibited image to an unsuspecting person, and causing the target to get into legal trouble for which there is no legal defense ("strict liability" for possession)?
i’m ashamed of every single apple employee who worked to make this happen. their work will be used to subjugate the most vulnerable among us. i hope you all hate yourselves forever for your cowardice and immorality.
Luckily I only use phone to make phone calls, offline GPS and to control some gizmos like drones. Do not even have data plan. Not an Apple customer either so I guess my exposure to things mentioned is more limited.
I'd be surprised if this goes through as is since you can't just save this stuff indefinitely. Suppose a 14 year old sexts a 12 year old. That is technically child porn and so retention is often illegal.
Once this tech is implemented, courts will direct it to be used in situations Apple did not intend. Apple will have created a capability and the courts will interpret refusal to expand its use as contempt.
“When Apple releases these “client-side scanning” functionalities, users of iCloud Photos, child users of iMessage, and anyone who talks to a minor through iMessage will have to carefully consider their privacy and security priorities in light of the changes, and possibly be unable to safely use what until this development is one of the preeminent encrypted messengers.”
People sending messages to minors that trigger a hash match have more fundamental things to consider, as they are sending known photos of child exploitation to a minor.
The EFF writer knows this, as they describe the feature in the article. They should be ashamed of publishing this crap.
You’ve got it mixed up. The messages are scanned for any explicit material (which in many but not all cases is illegal), not specific hash matches. That’s only for uploads to iCloud Photos.
Additionally, you are not “obliged” to report such photos to the police. Uninvolved service providers do have to submit some sort of report iirc, but to require regular users to do so would raise Fifth Amendment concerns.
No, I’m not. You’re confusing the issues. If you have a child’s account, joined to your family group, you will get alerted about explicit images — if you decide to use it.
The photos that you are obliged to report are child pornography that match a hash in a database used everywhere. If you don’t report, you’re in a place where you may have criminal liability.
> they are sending known photos of child exploitation to a minor
How do you know it's a known photo of child exploitation? The original image was hashed and then deleted. Two completely different images can have the same hash.
WhatsApp automatically saves images to photos. What if you receive a bad image and are reported due to someone else sending the image to you?
So if your Apple ID/iCloud gets compromised, and somebody saves an album of CP to your iCloud Photos, is it then only a question of time until the police come knocking?
Why don’t they just run their trained classifier on the phone itself to do this stuff? There should not be any need to do this on the server, no matter what they say.
“It is used on Microsoft's own services including Bing and OneDrive,[4] as well as by Google's Gmail, Twitter,[5] Facebook,[6] Adobe Systems,[7] Reddit,[8] Discord[9] and the NCMEC,[10] to whom Microsoft donated the technology.”
Does it matter if the project goes live? Once the company's attitude towards user privacy and customer concerns has been revealed, what's there left to hang onto?
You confused the 2 new features. The child pornography detector compares perceptual hashes. The iMessage filter tries to classify sexually explicit images.
Could you explain please - can these hash comparisons be extended to other areas such as contextual analysis of photos or texts?
For example would it be easy now to get to the hypothetical scenario where texts containing certain phrases will be flagged if some partner / regulator demands that?
Or doing face recognition on images, etc.? Or is this still completely different from that?
The question that should be asked is whether you think it's OK for the U.S. gov't to look at every picture you take, have taken, store, and will store. The U.S. gov't will access, store, and track that information on you for your whole life. Past pictures. Present pictures. Future pictures.
I don't use apple products, but if I found out google was scanning my photos on photos.google.com on behalf of the government I would drop them. I'm not saying it wouldn't hurt, because it definitely would, but in a capitalistic country this is the only way to fight back.
Your smartphone or desktop computer is your agent. You can't accomplish many necessary tasks without it, you're nearly required by law to use one. It handles your most private data, and yet you have no real visibility into its actions. You just have to trust it.
As such, it should NEVER do anything that isn't in your best interest-- to the greatest extent possible under the law. Your relationship with your personal computer is closer and more trusted than your relationship with your doctor or lawyer-- in fact, you often communicate with these parties via your computer.
We respect the confidentiality you enjoy with your professional agents but that confidentiality cannot functionally exist if your computing devices are not equally duty bound to act in their users best interest!
This snitching 'feature' is a fairly general purpose tracing/tracking mechanism-- We are to assume that the perceptual hashes are exclusively of unlawful images (though I can't actually find a firm, binding assertion of that!)-- but there is nothing assuring that to us except for blind trust.
Even if the list today exclusively has unlawful images there is no guarantee that tomorrow it won't have something different-- no guarantee that some hysterical political expediency won't put images associated with your (non-)religion or ethnicity into it, no guarantee that the facility serving these lists won't be hacked or abused by insiders. Considering that possession of child porn is a strict liability crime, Apple themselves has presumably not validated the content of the list themselves and certainly you won't be allowed to check it. Moreover, even if there were some independent vetting of the list content there is nothing that would prevent targeted parties from being given a different unvetted list without their knowledge.
The pervasive scanning can also be expected to dramatically increase the effectiveness of framing. It's kind of cliche that the guilty person often claims "I was framed"-- but part of the reason that framing is rare is because the false evidence has to intersect a credibly motivated investigation, and the two seldom do except where there are other indicators of guilt. With automated scanning it would be much more reliable to cause someone a world of trouble by slipping some indicated material onto their device, and so framing would have a much better cost/benefit trade-off.
Any one of the above flaws is sufficiently fatal on its own-- but add to it the potential for inadvertent false positives both in the hash matching and in the construction of the lists. Worse, it'll probably be argued that the detailed operation of the system must be kept secret from the very users whose systems it runs on, specifically because knowledge of the operation would greatly simplify the malicious construction of intentional false positives which could be used for harassment by causing spurious investigations.
In my view Apple's actions here aren't just inappropriate, they're unambiguously unethical and in a more thoughtful world they'd be a violation of the law.
First they built a walled garden beautiful on the inside and excoriated competitors [1] for their lack of privacy. Now that the frogs have walked into the walled garden, they have started to boil the pot [2]. I don’t think the frogs will ever find out when to get off the pot.
> iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy. CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.
Apple's battle is against Surveillance Capitalism, not against state-level surveillance. In fact, there is no publicly traded company that is against state-level surveillance. It's important not to confuse the two.
Think of it this way: If you want to hide from companies, choose Apple. If you want to hide from the US Government, choose open source.
But if your threat model really does include the US government or some other similarly capable adversary, you are well and truly fucked already. The state-level apparatus for spying on folks through metadata and traffic interception is now more than a decade old.
> Think of it this way: If you want to hide from companies, choose Apple. If you want to hide from the US Government, choose open source.
It's not just the US government: they've been cooperating with the PRC government as well (e.g. iCloud in China runs on servers owned by a state-owned company, and apparently China rejected the HSM Apple was using elsewhere, so they designed one specifically for China). Apple has some deniability there, but I personally wouldn't be surprised if China could get any data from them that it wanted.
Both the US government and the Chinese government can get whatever they want from both iCloud and iMessage. Best not to use them for anything that could make you a target of theirs.
The problem is that as governments gain access to new technological capabilities and exploit crises to acquire more emergency powers, increasingly large numbers of peoples’ threat models begin to include government.
The best hopes against a population-wide Chinese-style social credit system being implemented in the US remain constitutional and cultural, but the more architectural help we get from technology the better. “Code is law” is still a valid observation.
The US has a rather weak central government. The Chinese-style social credit system isn't necessary, because private corporations already do it. Scanning your ID when you return things, advertising profiles, etc.
Yeah, sure. I’m happy to be downvoted to hell, but I know people who would have benefited greatly from this (perhaps had entirely different lives) if it had been implemented 10 years ago.
Convince me that a strong step to ending CSA at the expense of a little privacy is a bad thing.
Moral panics are nothing new, and have now graduated into the digital age. The last big one I remember was the passage of the DMCA in 1998; it was just absolutely guaranteed to kill the Internet! And as per usual, the Chicken Littles of the world were proven wrong. The sky will not fall in this case, either. Unfortunately civilization has produced such abundance and free time that outrage viruses like this one will always circulate. Humans need something to spend their energy on.
> No user receives any CSAM photo, not even in encrypted form. Users receive a data structure of blinded fingerprints of photos in the CSAM database. Users cannot recover these fingerprints and therefore cannot use them to identify which photos are in the CSAM database.
I'm sorry but this is the most ridiculous thing I've read today. Hashes have never been and probably never will be used to "smear" someone the US doesn't like. We can speculate about them planting evidence, but trying to prosecute based on hashes baked into the OS used by millions? That's absurd.
I'm pretty sure this is a non-tech way of saying "a machine learning model" or other parameters which is not a particularly useful form of this database.
There is a minimum number of hash matches required, then images are made available to Apple who then manually checks that they are CSAM material and not just collisions. That's what the 9to5Mac story about this says: https://9to5mac.com/2021/08/05/apple-announces-new-protectio...
With a broader rollout to all accounts and simply scanning in iMessage rather than photos there's one possible scenario if you could generate images which were plausibly real photos: spam them to someone before an election, let friendly law enforcement talk about the investigation, and let them discover how hard it is to prove that you didn't delete the original image which was used to generate the fingerprint. Variations abound: target that teacher who gave you a bad grade, etc. The idea would be credibility laundering: “Apple flagged their phone” sounds more like there's something there than, say, a leak to the tabloids or a police investigation run by a political rival.
This is technically possible now but requires you to actually have access to seriously illegal material. A feasible collision process would make it a lot easier for someone to avoid having something which could directly result in a jail sentence.
So you can upload the colliding images to iCloud and get yourself reported for having child porn. Then after the law comes down on you, you can prove that you didn't ever have child porn. And you can sue Apple for libel, falsely reporting a crime, whatever else they did. It would be a clever bit of tech activism.
This technology uses secret sharing to ensure a threshold of images are met before photos are flagged. In this case, it's even more private than CCTV.
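The "threshold" part is ordinary threshold secret sharing. Here is a minimal Shamir-style sketch in Python (the field size, key value and share counts are placeholders): a secret is split into shares such that any t of them reconstruct it, while fewer than t reveal essentially nothing. Roughly speaking, per Apple's public description, the server can only recover usable shares for matching photos, so the per-account key only becomes readable once the threshold is crossed:

    # Minimal Shamir-style threshold secret sharing -- illustrative only.
    import secrets

    PRIME = 2**127 - 1  # Mersenne prime used as the field modulus

    def split(secret: int, t: int, n: int):
        """Create n shares of `secret`; any t of them reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over the prime field."""
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = (num * -xm) % PRIME
                    den = (den * (xj - xm)) % PRIME
            secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = 123456789                        # stand-in for a per-account key
    shares = split(key, t=3, n=10)
    print(reconstruct(shares[:3]) == key)  # threshold met: key recovered
    print(reconstruct(shares[:2]) == key)  # below threshold: almost surely False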
Totalitarian regimes do not need some magic bit of technology to abuse citizens; that's been clear since the dawn of time. Those who are concerned about abuse would do well to direct their efforts towards maintenance of democratic systems: upholding societal, political, regulatory and legal checks and balances.
Criminals are becoming better criminals by taking advantage of advancements in technology right now, and, for better or worse, it's an arms race and society will simply not accept criminals gaining the upper hand.
If not proven necessary, society is capable of reverting to prior standards (Habeas Corpus resumed after the Civil War, and parts of the Patriot Act have expired, for example.).
You link to an article that says "Overall, the evidence suggests that CCTV can reduce crime," and then goes on to mention that specific context matters: vehicle crime ... oh well, I wonder if we could combat that without surveillance, like better locks or remote disabling of the engine ... There, as here with the phones, society has to evaluate the price of the loss of privacy and of abuse by totalitarian systems, which will happen - we just can't say when. This is why some - like me - resist backdoors at all, even at the price of "more crime".
https://news.ycombinator.com/item?id=28068741
https://news.ycombinator.com/item?id=28075021
https://news.ycombinator.com/item?id=28078115