Let's not let the perfect be the enemy of the good.
This is a huge step forward for iPhone users. Look, I get it. From the typical HN perspective, this potentially looks like a lot of hype. But many of you aren't looking at it from a high level.
In the world we now live in, with everything happening in the United States right now, the ability for the average person to protect themselves from well-funded, determined attackers couldn't come at a better time.
There's a huge gap between Fortune 500 executives, government officials, etc. and regular people in terms of the resources available to them to prevent state-sponsored attackers. It doesn't take much these days to go from a nobody to being on somebody's radar.
If you're a woman seeking an abortion in a state where it's illegal or severely restricted, you could be the target of malware from your local or state government or law enforcement. In Texas, you can collect $10,000 by suing anyone who aids and abets a woman attempting to get an abortion, which is enough money to motivate someone to trick a target into installing malware on her phone.
No, it's not China or Russia coming for you but it doesn't take much to ruin someone's life.
I don't think this is virtue signaling or marketing hype by Apple; if anything, this is right in alignment with the stance they've had on privacy for years. Even for a company the size of Apple, putting up $10 million to fund organizations that investigate, expose, and prevent highly targeted cyberattacks isn't pocket change.
At the end of the day, this is all good news for user privacy and security going forward. I also suspect that if I lock down my iPhone, my other compatible devices using the same Apple ID will also lock down. No IT department required.
> In Texas, you can collect $10,000 by suing anyone who aids and abets a woman attempting to get an abortion, which is enough money to motivate someone to trick a target into installing malware on her phone.
Anecdata for people who think this is unlikely: my wife had an issue getting unclaimed property back from the state of Texas and hired someone who advertised the ability to help. She turned out to be a bulldog with a ton of knowledge of the necessary bureaucracy. She put hours per week into it on our behalf for months, through many rounds of filing paperwork and then hounding bureaucrats on the phone by telling them exactly how and why we could sue if they ignored it. She did all that for a cut that was a fraction of the $10k abortion bounty. The $10k might seem like a symbolic gesture, but it will spawn a cottage industry of bounty hunters. No doubt most of them will be ideologically excited wannabes who quickly give it up, but some will be dogged and effective and will cultivate an expanding repertoire of skills. It's a terrifying prospect.
There will be many, many people who never previously entertained the idea of getting involved in serious criminality who now need protection from the prying eyes of the state and their fellow citizens. To look at it from a cold and opportunistic viewpoint, this could change the public perception of digital privacy from being just for dangerous creepy people to something that everybody should value.
To add to this: the whole point of the private right to action is so that anti-abortion groups can target individuals in order to create precedent-setting cases. This is a mechanism that is designed to be used by well-funded groups. The threat model here isn’t some rando deciding they want to sue you, it’s a team of determined lawyers that absolutely will take your case as far as they possibly can.
> the whole point of the private right to action is so that anti-abortion groups can target individuals in order to create precedent-setting cases.
Fairly sure this is wrong. The point was to create a mechanism to sue various people "in orbit" around an abortion without involving state officials. This was supposed to "immunize" the process from any Roe v. Wade-related block.
With Roe v. Wade now struck down, Texas can basically do whatever "it" wants w.r.t abortion, and the federal government cannot intervene. SB8 at this point is possibly (just possibly) a way to reduce state spending on abortion legal cases, but not much more beyond that.
It's directly (and I believe explicitly) modelled on the Americans with Disabilities Act. The ADA creates a model in which private citizens can and do bring lawsuits against all types of organisations for any type of harm they can define.
This has spun out a cottage industry of disabled people whose full-time occupation is visiting everything from websites to restaurants, being harmed, and bringing lawsuits. While that may sound like a bad thing, it is in fact a very cost-effective way of enforcing the law without bureaucratic bloat. Strangely, it's been quite successful. The history of why this decision was made is very interesting.
For all you devs, this is why large American companies care so much about accessibility on their websites: it creates almost unlimited liability on their end if you do it badly. Companies now scan websites for accessibility as soon as they're launched; others buy the lists of sites that 'fail', then visit those sites in order to be harmed. It's an interesting little cottage industry which keeps legitimate disability rights enforced quite nicely without too big a government.
The purpose of the private right to action was to get around Roe/Casey prior to the Supreme Court overruling both cases. The law was specifically designed to evade judicial review.
As a private plaintiff, you can typically sue a state official that is charged with enforcing a law in federal court on constitutional grounds. SB8 is written in such a way that state officials are barred from enforcing the law. Thus, it is effectively impossible to challenge in federal court because there is no state official that enforces the law, only private citizens, and thus there is no proper defendant.
My impression is that it fits the pattern of trying to disrupt society and government and create a vigilante citizenry, similar to encouraging people to arm themselves and use their firearms to prevent crimes.
No idea what you're even trying to reference in your second sentence, but the first sentence "community law enforcement" is a red flag in my book. The law creates a fiscal incentive for people to report their neighbors for actions that were federally protected at the time this law was passed. Neighbor vs. neighbor. Citizen vs. citizen. We spend more on policing than any country in the world and yet still need to deputize citizens in a heavily armed state? It's not my neighbor's damn business to know if someone in my household seeks an abortion.
If fiscally incentivizing vigilantism isn't dystopian I don't know what is.
Deputization and vigilantism are antonyms; your framing is incoherent.
An elected legislature sanctioning civil action is "dystopian", but rioting and arson? Intimidating judges at their homes? Laundering a decade of domestic terrorism into universities and district attorneys' offices? Never heard of that stuff!
The discussion was about abortion in the context of digital privacy. You are the one who brought up all of these other things, which have nothing to do with the topic at hand. It's whataboutism and not worth engaging with.
Vigilantism without oversight, checks and balances quickly devolves into posses terrorizing and brutalizing people they simply don't like. The US has a long history of this.
Vigilantism is not the same thing as community-led policing from members of said communities.
That's nice, but none of this has anything to do with "vigilantism." The other guy only used that word because he thought it sounded scary. If either of you knew what it meant you would understand that government-sanctioned action, for example a lawsuit with standing provided by statute, is the complete opposite of vigilantism in its entire definition.
I've never heard that term. Usually, the state has a monopoly on violence and justice - that's a definition of a sovereign state. Law enforcement is performed by police. Not sure where the other stuff you mention comes from.
I've been warned by dang before not to spew anti-American propaganda on this site (that was pre-Jan 6, I think), but I never did such a thing. When I studied in SF in 1999, I freaking loved it. But I've seen some things since that are deeply troubling. It seems more people are catching up now to what I observed: if you still think that the US is a modern western democracy with reasonable values, wake up. I mean, people hunting other people who need an abortion for $10K? How can you read that and not have a cold chill running down your spine?
> if you still think that the US is a modern western democracy with reasonable values, wake up
One of the quirks, and ongoing debates, of the US is the strong deference to states’ rights. Don’t confuse US law with Texas law. The majority of the population of the United States actually lives in states with abortion laws that are more liberal than what you’d find in the EU, for example.
The state versus federal distinction can be very confusing to people who view US politics through the lens of the worst news stories that come out of every state. The entire US has a land mass and population on the same order as that of the entire EU, and many states have populations similar to that of individual EU countries. We have a single state (California) that has an economy larger than all of the UK combined and almost as large as India.
The United States is big and diverse. We’re going through a phase where federal power is being reduced due to politics and some of the states are doing weird stuff. If you only view the US through news stories and imagine the US as a conglomeration of all of the worst and weirdest news stories from individual states, you’re going to have a very negative view of the US in general.
This kind of reasoning is exactly the problem that the US faces. "It's not really that bad, it's just a few silly states, overall we do know better". First, Texas is a pretty big state, too. You cannot just discount it as not mattering to the overall picture. Second, you are ONE country, you have ONE president. And what the majority of Americans think doesn't seem to matter when it comes to the law, or to elections. Keep telling yourself that it's not that bad because it's so diverse, and soon it will be much less diverse than you can imagine right now.
> Second, you are ONE country, you have ONE president.
We also have fifty governors, 100 senators, 435 house reps, nine supreme court justices, and countless state legislators. We do not live in a dictatorship. Yet.
Of these, it's the court that has changed most wildly over the past 8 years.
> soon it will be much less diverse than you can imagine right now.
I think it's possible to say "overturning Roe v. Wade didn't make abortion illegal in California, as your worst case presumption might assume" and still believe that GOP gerrymandering, Supreme Court appointments, and attempted coups are an existential threat to majority rule.
Of course nuance is important when thinking about solutions to the problem. But if a substantial portion of the country (I don't know, is it 30%?) is basically not democratic anymore, you had better be quick about coming up with and implementing a solution. And what exactly would a solution look like without a civil war?
As a point of clarity, those state laws are all passed by democratically elected representatives. Gerrymandering may impact outcomes, but Ds are every bit as good at it as Rs. For example, Oregon's legislative districts are comically gerrymandered by Democrats.
Well said. Definitely agree that it's ridiculous to say "well those other states aren't that big of a deal."
For the 4.5 million of us in Louisiana, the current laws are a pretty huge deal. But according to him we apparently don’t matter when having a national dialogue.
> For the 4.5 million of us in Louisiana, the current laws are a pretty huge deal.
Yes, but the idea that those laws are being imposed on an unwilling population by an extremist minority is wrong. Half of Louisiana residents believe abortion should be illegal in most or all cases (https://www.theadvocate.com/baton_rouge/news/article_4973b4e...), and many in the other half likely support restrictions that weren't allowed under Roe/Casey. This is what democracy looks like, and an example of how democracy isn't always a good thing.
The margin of error is 5.8%, i.e. the majority could also be in favor of abortion access. And even if we concede most want it gone, a slim majority does not in any way mean it should be denied to other people. We also need to define "most"; that's a bad phrasing of the question IMO. For instance, our ban includes cases of rape or incest. I'm sure plenty of people who are otherwise against it make a provision for that, but the question makes no distinction about some of those more divisive situations.
The GOP controls this state in a wildly disproportionate way. They passed it because of that, not because of a possible slim majority. We’d have legalized weed if that’s all it took.
Unlimited tolerance must not be extended to the aggressively intolerant, because this will destroy the unlimited tolerance. That's the paradox of tolerance, a central philosophical principle of a free society, per Karl Popper.
Incorrect. There is no such act as "not allowed to exist". Instead, rather than attempt to bury what is spontaneously manifest, we ought to lift these bolstered options up and publicly demonstrate to all how they are torn asunder.
The people in those states are your fellow citizens. If you don't care about their well-being, then you might as well split now. What exactly makes you United besides a cultish devotion to your origin story and a self-image of Freedom Loving that hasn't matched reality since your independence?
The freedom of individual states' rights you espouse has almost always been used for civil rights violations.
If you don't do something about that, you are complicit. Now you might say there is nothing you CAN do about it.
But that is the problem. That is WHY the rest of the free world looks at you and says "You are not a democracy."
Because you couldn't change this even if you wanted to.
The last time you tried you came close. You had to fight a war over it but you almost got there. Then you fucked it up during the Reconstruction.
That's clunky grammar then. It's not trivial to context-switch and even use the same word 'large' for it: "We have a single state (California) that has an economy larger than all of the UK combined and almost as large as India."
I think it's ambiguous at best.
I’ve spent four years in IL and consider them among the happiest of my life.
The US right now is a fucking shitshow. It’s one bad election away from being yet another gunslinging theocracy hating women and gay people. They’d probably switch sides and bomb Ukraine, without necessarily looking at a map.
People talk about the happy and glory years in the US in contrast to what is happening today as if both aren't borne out of the same root cause.
When you don't have regulations, strong federal oversight, high taxes, or invest in social programs, then you can have a fucking excellent party nearly all of the time.
Until the economy tanks or the American Taliban decides their party involves telling you what you can do with your body.
It's a very immature/libertarian way to run a society. Wonderful when things are good. Horrific when things are bad.
The stakes are higher than individual hedonism now, though. American Prosperity is boiling our atmosphere and by the end of the century, excess American contributions to carbon emissions will have killed more people in the 3rd world than Hitler and Stalin put together. This is not an exaggeration. Hundreds of millions will die in Bangladesh and India from rising seas and heatwaves because Americans wanted the freedom of the house, the picket fence, the 2 F150s, and the 2 hour commute, and Next Day Shipping From Amazon for All The Things.
Well if Trump or his followers get the top seat again, Ukraine, and with it half of Europe is fucked. He was pretty clear about that.
With this, a steep decline in US power projection is inevitable; I mean, you can't afford to lose half a billion rich Western people almost 100% aligned with your values.
You say that but where are they going to align themselves to? China? The US benefits by having a lack of good competition. Things would have to get extremely bad for the rest of the west to dump the US. Word on the street is that Trump is going to announce a run for 2024 as a way to get ahead of his rivals. He has a reasonable shot at winning barring unforeseen circumstances. You can't dismiss the odds given how incredibly poorly the Democrats have messed up their two years since taking office.
I feel that if given another Trump win, the rest of the west will be forced to remain in another holding pattern for four years and suffer whatever consequences occur hoping that four years later things improve.
In the short term that sounds about right. Still, I would guess that if EU and US relations went from a cultural friendship to a strictly transactional nature, that would have big consequences. The EU would try to be more self-sufficient and, for one, import less from the US. The EU would probably try to find closer relations to countries with semi-big militaries (totally guessing here: India, South Korea, Japan, Australia, New Zealand, Turkey). NATO would of course start to look more shaky and the idea of an EU army more of a possibility. Maybe even a new NATO would form without the US?
I could see the EU being more self sufficient. But at the same time it will be difficult given that they have a serious population decline. You need that population to grow the GDP. Furthermore some essential industries seem to be completely abandoned by the EU. (Looking at competitors to the FAANG companies).
>The EU would probably try to find closer relations to countries with semi big military, totally guessing here India, South Korea, Japan, Australia, New Zealand, Turkey.
This assumes they have the capability to project forces anywhere in the world, which, given the countries you listed, feels unlikely now or in the foreseeable future.
I hadn't thought about this, but you are right. Hell, the bounty hunters don't even necessarily have to use immediately targeted attacks. Perform attacks en masse to read people's personal messages/e-mails, filter for messages discussing abortions, and then parallel-construct an innocent-sounding story to use in court. At $10k per success, you really don't need that many hits to start making big money.
Also, I personally know many old people who use a device just for managing their finances as they are inexperienced with security and fear their main device might get hacked.
This functionality makes a lot of sense in such a case.
Yeah except putting malware on someone's phone is actually illegal, so seems like a pretty bad tradeoff since, ya know, you'd have to mention how you got the data when you sue someone in court.
Police use this sort of tactic (parallel construction) all the time, though: they collect evidence in ways not admissible in court, but use knowledge of that evidence to find new lines of investigation and new evidence that can be admissible in court.
Presumably someone could use malware on someone's phone to know who to target with an abortion-related lawsuit, and then use legal forms of investigation to find evidence to prove that they got an abortion.
The trick of course is that the malware can't be traced back to the police. Otherwise, the parallel construction narrative vanishes, as well as potentially a bunch of previous convictions that were constructed using the same technique -- At least until the conservative supreme court neuters the 4th amendment.
This needs to be the case of course, unless you support law enforcement agencies doing unlawful actions to get convictions.
Isn't the parallel construction narrative that it doesn't matter how you got the information as long as after you get it, you can show a way that you could have gotten it?
Even if the method used was illegal, and found to be illegal in court, the evidence is still admissible iirc?
Ever read Neal Stephenson's Cryptonomicon? The WW2 part shows a team going through elaborate measures to create a plausible way that the allies can find out what the Germans are up to without revealing that they can read all of their messages. They would tell a submarine to surface at a particular location at a particular time and report what they see, for instance, and the sub crew would have no idea why, to produce a plausible explanation of why some German action was discovered.
Parallel construction often means they hide how they got the original information from the court and from the defense.
No, that’s not how it works at all. You use illegally gained information to find other avenues to get evidence that on the surface look ok.
For example, you use illegally gained access to messages to find out about a meeting at a particular time. Then when the meeting to exchange contraband is happening, “a concerned anonymous citizen” calls in a tip of suspicious behavior and a patrol cop stumbled onto a bust.
The evidence is not admissible. It is considered 'fruit of the poisoned tree'.
Parallel construction only works if you can hide the illegal investigation from the court.
The Court has never ruled on parallel construction. I think it's probably illegal. There was Herring v. United States, but that was a case where someone was accidentally flagged as having an outstanding warrant. Intentionally passing illegally acquired tips is probably illegal; the trick is it's impossible to prove, and there's no penalty other than getting evidence derived from the tip stricken from the record.
It doesn’t happen “all the time”. The term also applies, and is mainly used, to disguise lawful sources, such as undercover agents.
While there is a problem with US police acting unlawfully, it mostly happens in specific situations. At the federal level, they are much better behaved. And the incentive structure just doesn’t make it worthwhile to break the law
Getting information through an illegal trawl, is an amazingly effective way of working out how to get related information "legally".
Find out from the phone, that they have an appointment at a particular time and place? It's easy to just be there and photograph them, "as part of occasional surveilance" or whatever.
its trivial for well-funded organizations to get around such legal issues when they use something called “parallel construction”
this is when evidence is collected in nefarious and often illegal ways. it is then given to the organization which will weaponize the information. this organization then launders how they acquired the evidence, obscuring the shady way it was originally obtained.
there is no shortage of instances where different groups (including local police) have laundered how evidence was obtained to get around legality requirements for obtaining evidence.. [various links below]
as the above commenter highlights, it’s about to get even more terrifying as incredibly well funded, incredibly authoritarian groups jump into the fray using religion as their excuse.
There are LITERALLY abortion bounty hunters in Texas, who earn money by hounding women seeking abortions and turning them in for profit. I cannot believe the state of this country.
Lockdown Mode basically cripples the phone, feature-wise. It's not quite to the point where I'd (even hyperbolically) say "why don't you just get an old dumb phone instead", but still...
The right thing to do would be to redesign the system from the bottom up to actually be secure in the face of vulnerabilities in any of these features that get disabled because they can be dangerous for people. (And maybe Apple is working on this behind the scenes, which will take them years to complete.)
But, agreed: let's not let perfect be the enemy of the good. It's better to have this option than to not have it, even though it likely creates a super restricted user experience that probably isn't particularly pleasant to use.
> Lockdown Mode basically cripples the phone, feature-wise. It's not quite to the point where I'd (even hyperbolically) say "why don't you just get an old dumb phone instead", but still...
The problem is that phones (of the "dumb"/"feature" variety) are running OSes that don't have nearly the security attention or hardware features related to them as iOS devices.
I carry a KaiOS feature phone as my personal phone (when I remember it). Apple pissed me off enough with the CSAM stuff that I wanted to experiment with alternatives, and I've done so. However, I don't pretend KaiOS is particularly "hard" against attackers - it's almost certainly not. But neither does it have much of an attack surface. It doesn't even try to render emoji, they're just black rectangles. And neither does it try to, say, render weird old Xerox image formats.
I would trust an iOS device with "most of the complex attack surfaces turned off" far more than I'd trust a KaiOS or stripped Android device. You get all the hardware protections, regular OS updates, a bug bounty program focused on this mode, and the smaller attack surface of Lockdown.
I'm incredibly excited by it, because it turns off all the stuff I don't want in a phone anyway.
Unfortunately, "crickets on CSAM" is a problem too. If they say they're not going to ship that ill conceived feature, I might move back to iOS. If not, well... I'll probably play with Lockdown mode for a week or two and then go back to the Flip.
If you opt out of/disable iCloud Photo Library, then CSAM scanning isn't active, right? It applies to iMessage only because iMessage integrates with the Photo Library.
Again, the CSAM "scandal" was actually an improvement of what the other online photo services do (constantly scan your entire library of photos with no controls in place). Just the improvement involved on-device scanning that folks seem allergic to. But you can opt-out, so still better than KaiOS.
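To make the on-device matching mechanism concrete, here's a toy sketch. This is a simple "average hash", not Apple's NeuralHash, and the blocklist, threshold, and pixel data are all made up for illustration; the point is just that the device compares a fuzzy fingerprint of a photo against a list of known-bad fingerprints, without the photo itself leaving the device at match time:

```python
# Toy perceptual-hash matching sketch (NOT Apple's NeuralHash).
# Hypothetical names throughout: average_hash, blocklist, matches_blocklist.

def average_hash(pixels):
    """Hash an 8x8 grayscale image (list of 64 ints, 0-255) to a 64-bit int.
    Each bit is 1 if that pixel is brighter than the image's mean, so small
    amounts of noise or recompression don't change the hash much."""
    mean = sum(pixels) / len(pixels)
    h = 0
    for p in pixels:
        h = (h << 1) | (1 if p > mean else 0)
    return h

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Pretend this is the vendor-supplied list of known-bad image hashes.
blocklist = {average_hash([10] * 32 + [200] * 32)}

def matches_blocklist(pixels, threshold=4):
    """On-device check: does this image's hash land near any bad hash?"""
    h = average_hash(pixels)
    return any(hamming(h, bad) <= threshold for bad in blocklist)

# A slightly noisy copy of the "bad" image still matches:
print(matches_blocklist([12] * 32 + [198] * 32))   # True
# An unrelated image does not:
print(matches_blocklist([200] * 32 + [10] * 32))   # False
```

Real systems use far more robust hashes (and, in Apple's design, threshold secret sharing before any human review), but the matching shape is roughly this.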
The claim is that if you opt out, it's disabled, yes. However, I object, fundamentally, to the entire concept of using my device to check my content for your legal requirements.
If I store content on your server, yes, absolutely, you can use your resources to check the stuff I've stored for what you define as badness.
But Apple's system is using my device to scan for their definition of badness. If they'd then said, "And this allows us to do iCloud E2EE," well, OK, this is a discussion to have. Except they didn't and haven't. It is, as designed, "I use my device to scan stuff for you, and then you can still scan it."
And as a direct result, the EU is now pushing for "badness scanning" in all sorts of E2EE channels, to include searching for "grooming" in text chats. "But Apple said they could do it! Why can't you do the same thing?" is a valid argument from a politician's point of view.
KaiOS doesn't have anything in the way of photo uploading in the first place.
But the scanning is only applied to photos being stored in the cloud. What difference does it make which piece of metal is doing the actual scanning if the practical result is the same?
Well, putting aside that CSAM isn't active at all at the moment, you're correct it didn't apply to iMessage (sending an image in iMessage couldn't trigger it unless the user saved the image), and that iCloud Photo Library needed to be on.
I agree, this mode seems like something most people could manage without difficulty. Amusingly, my (non-Apple) system is much more locked down than this and isn't exactly unusable IMO, although some things might be harder to manage on a phone.
Microsoft did some tests for Edgium's "Super Duper Secure Mode" and found that disabling JIT improves real world performance more often than it makes it worse (and usually makes no difference):
Disabling JIT makes it possible to enable some additional exploit mitigation methods. A follow-up article mentioned that few people who tried it noticed a difference:
"It is worth mentioning that when we originally had this idea, we doubted our Microsoft Edge peers would even consider it. We quietly made changes to our browser without explicitly telling them the specifics and then asked them weeks later to see if they noticed the change. They would always say no, and only then would we inform them that we disabled the JIT. After surprising multiple developers in Microsoft Edge, we got the support needed to try this experiment. One can’t help but wonder what other well established assumptions about users and the web we should reconsider."
Yeah, I'm a developer, but when you put it like that, I would consider getting a phone for debugging and enabling Lockdown on my personal phone. I'm not Jeff Bezos, nor do I intend to be, but at least I would like to support this and see what the experience looks like.
Yup, there is little downside to supporting this concept as it should inevitably move others to adopt similar functionality. It isn’t the perfect solution or likely even the best idea in the room but the biggest player just made a big move in the consumer’s favor. It does coincide with a big privacy push that will keep their market share up so not really benevolence.
Same, I'm planning on running this on a spare iPhone, looking into whether I'd be OK running it on my daily carry phone.
I consider myself "recreationally paranoid", I enjoy locking my stuff down for fun, not because I ever think anyone's gonna burn an NSO zero day to get into my stuff.
Note that this doesn’t disable MDM entirely: it disables adding new MDM profiles after it’s enabled (I’d hope there’s a “do you trust the existing one with your life?” prompt…), which seems like a reasonable compromise for preventing spear phishing to install new profiles.
> The right thing to do would be to redesign the system from the bottom up to actually be secure in the face of vulnerabilities
i understand the impulse to immediately question if this might solve security, but it just won’t. there are some classes of known vulnerabilities which it may mitigate, but at best it would be a temporary security solution.
security is hard.
we also need to remember that we would, with almost 100% certainty, reintroduce bugs fixed by long-forgotten mitigations that someone quietly added years ago without making a big deal of it, or even mitigations which were made a big deal of at the time but, being a decade old, are long forgotten now.
we have a tendency to think those who built complex systems before us were unenlightened, or lazy, or primitive. this often really isn’t the case.
anyone who has worked on large projects will inevitably learn the hard way that scale adds incredible fractal depths of complexities that we can’t dream of until it slaps us in the face. so we put out that fire, do not-nearly-enough-documenting on why or what caused it so future people might avoid the same mistake, and then we continue running up the hill.
security is hard.
and of course sometimes a from-scratch-rebuild might make sense but we’d be looking at years and years of relearning mistakes which were previously learned and corrected for.
I'm not so sure. They _could_ start from scratch, building the same feature set while paying extra attention to security in every possible aspect of design, but would you then have a system guaranteed to be free of malware? Never. We are talking nation-state actors here. Doing that would also ignore the really great (if merely implied) admission that Apple is making here: reducing the attack surface is the only way to mitigate the unknown unknowns of software security.
Another reason "from scratch" is just crazy talk: if they start from scratch it means literally redoing all the libraries. The thought of Apple redoing parsers for PDF, JPEG, BMP, PNG, XML, etc. doesn't exactly inspire confidence. They'd probably create more bugs than the working libraries have. Unless they did it in Rust, but even then some bugs would inevitably remain.
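To make the point concrete, here's a minimal sketch (a hypothetical toy format, not any real decoder) of the bounds check that hand-rolled parsers in memory-unsafe languages have historically gotten wrong:

```python
def parse_record(buf: bytes) -> bytes:
    """Parse a toy [2-byte big-endian length][payload] record.

    The explicit length check is exactly the step that classic
    decoder bugs (overreads in image/PDF parsers) tend to omit.
    """
    if len(buf) < 2:
        raise ValueError("truncated header")
    declared = int.from_bytes(buf[:2], "big")
    payload = buf[2:2 + declared]
    if len(payload) != declared:  # attacker-controlled length vs. real size
        raise ValueError("declared length exceeds buffer")
    return payload

assert parse_record(b"\x00\x03abc") == b"abc"
try:
    parse_record(b"\x00\xff--")   # header lies about its length
except ValueError as err:
    print("rejected:", err)       # an unchecked C parser might overread here
```

In Python the slice silently truncates, so the check is cheap; in C the equivalent mistake reads past the buffer, which is the bug class behind many zero-click exploits.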
If the state is after you, even low-level state actors, all it takes is a court order or subpoena to compel any of the parties involved with your phone or data to hand over your data or start collecting it.
If your threat model includes any level of the US government, and that includes women seeking abortions in states where it is illegal, you cannot rely on a US-based company's tech to protect you from the law.
There are state actors other than the US Government, along with plenty of non-state actors who are willing to use illegal techniques on occasion, and this does increase people's protection against those actors.
If you're in a developing country and you engage in activism against some questionable project by the state owned mining company, you're probably not going to get the full force of the NSA directed against you. But your country's domestic intelligence agency may be interested, and they probably only have off the shelf spyware to work with.
Pretty sure most things are stored encrypted/delivered encrypted only to be decrypted and rendered on your phone. Meaning Apple/your provider have nothing to give up for the hypothetical US government demand.
To add to the other comment, Apple installed on-device scanning to iOS as far back as version 14.3 (https://pocketnow.com/neuralhash-code-found-in-ios-14-3-appl...). They claim they won't activate it without a court or government order, but these are becoming easier and easier to obtain. Under the Patriot Act, virtually anyone's electronic devices may be searched for any reason. In effect this means that Apple has access to all information on all iOS devices, and the government may access any of these at will.
“iCloud Data Recovery Service
If you forget your password or device passcode, iCloud Data Recovery Service can help you decrypt your data so you can regain access to your photos, notes, documents, device backups, and more. Data types that are protected by end-to-end encryption—such as your Keychain, Messages, Screen Time, and Health data—are not accessible via iCloud Data Recovery Service. Your device passcodes, which only you know, are required to decrypt and access them. Only you can access this information, and only on devices where you're signed in to iCloud.”
> There's a huge gap between Fortune 500 executives, government officials, etc. and regular people in terms of the resources available to them to prevent state-sponsored attackers. It doesn't take much these days to go from a nobody to being on somebody's radar.
It's also a question of whether you want that. Anyone can take anti-phishing training; it just takes a lot of time. Want to download a mod for a game? You'd better have a separate gaming machine with no important data on it and, to be safe, on a separate network. Want to buy a phone? Better drive to a random store; ordering is too dangerous.
Sure, it's easy to get on the radar, but avoiding a state-sponsored hack is also a lot of effort. Fortune 500 executives need to put that effort in and they do have the money to make it happen, but for most people, the problem is not the cost.
I wonder, why doesn't Apple (and MS, Google, ...) throw all their weight into the ring and lobby for making selling exploits commercially a crime? It should be up there with counterfeiting money or selling nuclear secrets. NSO Group should be on sanctions lists. Politicians should be ranting about how dangerous it is that foreign companies and countries can spy on US citizens (instead of what they usually rant about).
You could wake up one morning, and every billboard in Washington, every newspaper will have ads for this issue. Every representative would be followed around by lobbyists. And Apple could pay it from their coffee money.
Now, I get why we don't crack down harder on selling exploits. First, intelligence agencies love NOBUS (No one but us) exploits and believe such a thing exists. Second, it is convenient because foreign intelligence agencies are sometimes used to spy where domestic agencies are not allowed to. And third, the US could probably do little (officially) against companies based in, say, Israel.
But this is totally the kind of issue that you could escalate into a bipartisan national-security thing. And it would be an incredible marketing and security win if Apple could push stricter legislation in that direction.
"I wonder, why doesn't Apple (and MS, Google, ...) throw all their weight into the ring and lobby for making selling exploits commerically a crime?"
I would be strongly, strongly opposed to this.
It is a clear-cut free speech / first amendment issue.
If you don't believe me, just imagine yourself describing the pseudocode of an exploit to someone over the phone - or sketching out the details of a vulnerability in a short note.
I believe we won't get to this place because we have the first amendment but I would really love to not waste ten years fighting about it ...
I would actually say morally it is clear cut in the opposite direction. Imagine you hack into a company or a government computer and steal secrets. That is clearly illegal.
Now imagine you figure out how to do the hack, do all the preparation, and sell it ready to use to somebody. And they are open about the fact that they are selling it to foreign powers. This should definitely be illegal, too. In the physical world, you also probably shouldn't be able to go around selling instructions for breaking into cars or houses.
> If you don't believe me, just imagine yourself describing the pseudocode of an exploit to someone over the phone - or sketching out the details of a vulnerability in a short note.
I don't see how any of this would be affected. You could still do hacking and security research; you could collect bug bounties, report bugs to the vendor or the government, or even disclose them to the public. You just shouldn't be allowed to sell that kind of information to a third party.
There are many laws like that right now. In the case of insider trading, you are not allowed to share certain nonpublic information with others in exchange for some benefit.
One of my favorite youtube channels is Lockpicking Lawyer. He shows how to break into all kinds of things by defeating physical security. The videos are "free" but of course he's making money off ad views, sponsors etc like any youtuber.
> Now imagine you figure out how to do the hack, do all the preparation, and sell it ready to use to somebody.
Now imagine you notify the vendor that they have a grave security flaw in their product. They could simply turn you in to the police, and the PoC would be sufficient to consider you guilty. You wouldn't be able to prove your innocence without a long, expensive, life-destroying legal battle.
It would have a massive chilling effect on everything else instead of what you originally intended.
Lol, this is a whole lotta faith based on nothing. Sorry bud, Aussie laws are gonna get you here. Your Apple device can be backdoored courtesy of Australian law, and Apple's not allowed to inform you it's happened. If you think Lockdown Mode is gonna prevent this, you're 100% dreaming. Much lulz. Y'all should just put less data on your phone if you're concerned about others knowing that data.
Absolutely, the Australian government has put in some questionable (bipartisan) security laws in the last few years, and Apple _may_ comply with a request under them (even though it has famously refused many times).
However the Australian government attacking you specifically isn't the only problem this solves.
> Even for a company the size of Apple, putting up $10 million to fund organizations that investigate, expose, and prevent highly targeted cyberattacks isn't pocket change.
is kind of funny, as it's about 1/20000 of their total cash reserves. With $20,000 in my savings account, it'd be equivalent to giving 1 dollar to charity. In other words, pocket change :)
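The proportion is easy to check; the ~$200B reserve figure is the commenter's implied number, not an audited one:

```python
# Figures are the commenter's implied ones, not audited numbers.
apple_cash = 200_000_000_000    # ~$200B cash reserves (assumption)
pledge = 10_000_000             # the $10M commitment
fraction = pledge / apple_cash  # 1/20000 of reserves
savings = 20_000                # a $20k savings account
equivalent = savings * fraction
print(f"1/{round(1 / fraction)} of reserves -> ${equivalent:.2f}")
# 1/20000 of reserves -> $1.00
```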
You could easily get more by selling a zero-day against this than by reporting it to Apple. A zero-day combined with a way to tell whether Lockdown Mode is turned on (whether that's reported back to Apple or remotely detectable) would be a goldmine; I cover this and other issues in my comments on the topic:
I like money, but something tells me targets of such attacks might end up dead, so it's more about ethical considerations than about who pays better. The bounty won't sway everyone, but $2M would sway more people than $1M, which would sway more than $10K.
> If you're a woman seeking an abortion in a state where it's illegal or severely restricted, you could be the target of malware from your local or state government or law enforcement.
Let's not get in above our heads, here: if the US government wants to know what's on your iPhone, they still have the faculties to retrieve that information. Setting your iPhone in a lockdown mode isn't going to let you escape the purview of government surveillance, and if it did then Apple wouldn't be announcing it today. We're all targets of government malware, and the way they ensure we all keep it installed is simple: they just make Apple and Google write it for them. This pervasive idea that Apple is somehow escaping the jurisdiction of PRISM is pretty hysterical, and it makes me excited for the first Senators to get caught paying for prostitution services with Apple Pay inside Lockdown Mode. The only enemy of "good" in a threat model is the unknown, and Apple makes sure there's plenty of unknown factors in your iPhone.
Edit: For all HN loves to rant about the Halloween Documents, you lot seem awfully unfamiliar with the Snowden leaks...
I kind of want to turn it on and leave it on. I'm assuming since it's a "mode" that I can turn it off when I need to, do what I know is legit, then turn back on again.
The thing with Lockdown Mode is that it shifts the trade-off between functionality and security significantly away from functionality. This is an acceptable side-effect of intentionally disabling attack surface that isn't strictly required to have a useful phone. On the other hand, it also makes most social time wasting stuff not work, which is what the masses mostly use their phone for anyway.
This really is a mode designed for those who really desperately need it, and it really is implemented in a strong enough way to be useful (hardware root of trust, no drive-by changes since toggling it requires a reboot with a wiped keybag cache, so you must reauthenticate in order to change it). And all of that at consumer-attainable pricing. It doesn't have to be perfect, and I'm sure in due time there will be jailbreak-esque attacks. But until then, this is effectively a very high barrier for an attacker that lacks the resources of a nation state (or a smart but bored teenager in a basement these days).
> On the other hand, it also makes most social time wasting stuff not work, which is what the masses mostly use their phone for anyway.
Got any info/links explaining that? Having only read Apple's webpage, it sounds to me like the major problem is slowed-down JavaScript execution. I certainly didn't get the impression it's going to shut down all social media apps/websites.
> It doesn't have to be perfect and I'm sure in due time there will be jailbreak-esque attacks
No protection is perfect, and these kinds of things are always another layer in a defence-in-depth approach. Just like car locks, the idea is that it becomes enough of a hurdle that someone on a fishing expedition will go look elsewhere. Of course it won't be enough for a determined state actor.
I would assume that disabling Lockdown Mode means wiping the phone to factory condition. Otherwise Lockdown Mode is only as secure as whatever PIN or password you use to disable it, which isn't particularly secure at all.
Yes, but if an attacker has physical access and unlimited time, you've probably lost anyway.
What this seems to be focused on are the "remote zero-click/one-click" vulnerabilities we've seen, in which either a message is delivered that never shows up but installs a backdoor hook, or a website can deliver a malware package to a particular user and install the backdoor hook without notifications.
It sounds like it does improve some of the physical security features, which should help reduce attack surface, but I wouldn't trust any bit of consumer electronics against a sustained physical attack by a sufficiently motivated adversary.
Sounds to me like it's targeting all the zero- and one-click exploits we've heard about over the last few years. Not having SMS/iMessage download and "parse" random files/formats, and tightening up the JavaScript attack surface to exclude JIT optimisations, would probably have helped Jamal Khashoggi and his friends/contacts.
Even with this, there's not very much you can do against a state-level actor who has physical control of your device and you, and a $5 wrench. Even without having you and being prepared to use violence, a sufficiently motivated state actor will probably get into your device anyway. Apple didn't cave to a judge when the FBI wanted them to break every iPhone user's security to get into the San Bernardino shooter's phone, but they didn't get to set a precedent there, because someone else broke into that phone for the FBI and the case was dropped...
If you're in the habit of worrying about persistent malware on your device, "regular restarts" are one of the best things you can do.
Much of the low-interaction malware persists only in memory, so a reboot will clear it until they get their claws back into you. Depending on the attack path, that may take some while, and using those attacks is still somewhat risky. "Having to re-pwn a phone every 6 hours" is a lot riskier for an attacker than "someone who never reboots their phone and never updates it."
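As a rough illustration of the hygiene rule, here's a sketch that flags an overdue reboot from Linux's /proc/uptime (an analogy only: iOS doesn't expose this file, and the 6-hour window just echoes the figure above):

```python
def hours_since_boot(proc_uptime_text: str) -> float:
    """The first field of /proc/uptime is seconds since boot (Linux)."""
    return float(proc_uptime_text.split()[0]) / 3600.0

def overdue_for_reboot(proc_uptime_text: str, max_hours: float = 6.0) -> bool:
    """Flag a device that hasn't rebooted within the hygiene window."""
    return hours_since_boot(proc_uptime_text) > max_hours

# e.g. text read from open("/proc/uptime") on a Linux box:
assert overdue_for_reboot("180000.25 350000.10") is True   # ~50 hours up
assert overdue_for_reboot("3600.00 7200.00") is False      # 1 hour up
```

The point of the sketch is just that "time since boot" is the whole signal: anything resident only in memory is at most that old.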
Yes. Also, regularly doing a factory reset is another good hygiene habit to have, this will clear the more rare but persistent forms of malware, often brought on board by legitimate software you installed a long time ago but no longer use.
Are these US companies not legally obligated, through some clandestine Patriot Act-style laws, to enable backdoors and to deny the existence of these at any cost?
> At the end of the day, this is all good news for user privacy and security going forward.
What can I say? Good luck, then, with your "privacy and security going forward". And remember later, when they knock at your door, that it was for yours and (mostly) their security.
At the point it puts users at more risk than not, I don’t see this as a step forward; not informing users of the risk of having iCloud enabled is one example.
Then it’s $200K after taxes. Though now we are discounting the many things Apple can write off, and that people worth 8 figures and up can write off and get away with, compared to a $300K-income person at $200K after taxes. It wouldn’t be the same net income regardless.
Edit: this is all moot since the amount would be $80 or $120. If someone happens to think that’s too much of a stretch to call pocket change for a $300K income, then the rest of my comment still applies.
At a $30,000 income, where someone else is making the money to pay for expenses, it’s like $10. That’s pocket change.
Or take the profit someone has after paying for rent (isn’t this equivalent to Apple paying rent for their own stores and office buildings?) and the other things companies write off before they calculate profit or net income: a $300K income might have an equivalent profit of $90K. That’s under $50. Change it to someone making a $75K income and you’re at $1.
Neither of my comparisons is properly analogous, but neither is yours. Comparing a company doing billions and billions in profit with the income of an average upper-middle-class person is as flawed as comparing what counts as negligible money at a $300K income versus full-time minimum wage. The latter person likely has no money left over, and maybe goes into a bit of debt each year.
Let us not gloss over the fact that in China Apple willingly handed over their HSMs to the CCP granting them full control of Apple devices there, even if it means aiding in Uyghur genocide.
When it comes down to money, or protecting the freedom or privacy of users, they will choose money. In this case the money is in good PR to help them secure more government contracts. They are playing all sides.
I do not feel anyone that needs high freedom, security, and privacy is well served by proprietary walled gardens. Particularly those that only grant holes in the walls to corrupt state actors.
> If you're a woman seeking an abortion in a state where it's illegal or severely restricted, you could be the target of malware from your local or state government or law enforcement. In Texas, you can sue anyone who aids and abets a woman who attempts to get an abortion for $10,000, which is enough to get someone to trick someone into installing malware on a phone.
We're ranting about states doing explicitly illegal things for no reason. These things will never be done. They'd be thrown out by the courts the instant they were attempted.
The disconnect here is that Apple already monopolizes the devices, the service, and the application distribution platform. Now, they're expecting you to be satisfied with them monopolizing the security controls and monitoring on your phone.
We expect so little of our phones with respect to our desktops when we know full well there's no legitimate reason to do so. Particularly now, if you're imagining that one needs security against state level actors.. then the notion that a single vendor is required to simplify the ecosystem and broaden adoption is directly in conflict with this future you have declared we are now in. It's literally the weakest possible model of defense available.
This isn't the perfect being the enemy of the good.. this is Apple monopolizing yet another aspect of the platform for themselves at the cost of true innovation.
> Now, they're expecting you to be satisfied with them monopolizing the security controls and monitoring on your phone.
What is the alternative, though? That each user figures out for themselves what their security risks are, cobbles together various security-focused apps, stays up to date with new developments, etc.?
Think about how that’s worked out in the desktop security or VPN markets: there’s a long history of outright scams, a bunch of companies which made their software worse (crammed with ads, etc.) or left their users less secure over time, and the remaining products are for most people completely interchangeable.
The average person has no meaningful way to distinguish between any of those. They all claim to be great, auditing is expensive and difficult, and most people are going to get recommendations from people they incorrectly think are experts (shoutout to the websites I had to migrate/secure after someone’s “tech guy” picked GoDaddy for the bikini pictures). Even enterprise security software tends to be long on snake oil, despite theoretically more knowledgeable buyers and budgets for auditing.
I think there is a solid argument that this space is not a naturally well-functioning market and is probably better with a few regulated players, similar to how we decided that the patent medicine market wasn’t good (and, yes, the regulatory failures are an important cautionary point!). People are literally staking their lives on something which has to be better than some SEO-d rathole.
And yet, we do no such thing when it comes to home and property security, financial security or medical records security. So, why when it comes to a phone which clearly has less overall value than these items, is it suddenly necessary to throw in the towel and allow an unnatural monopoly to form?
You're describing an unregulated market where the FTC and DOJ didn't seem particularly interested in policing. I would suggest that's a bigger reason for the state of the market than thinking it's a natural phenomenon endemic to this particular case.
And finally.. the giant disconnect here is that "you should worry about state level actors" but "you're too unsophisticated to do anything other than beg Apple for help." Mostly, I was trying to point out the absurdity of this position while at the same time taking a dig at Apple for their "cute friendly monopoly" tactics.
Does your phone company let you configure their spam filter? Do your medical providers let you secure their EMR systems? It sure looks like there is precedent for regulating companies to require them to provide secure services.
> Mostly, I was trying to point out the absurdity of this position while at the same time taking a dig at Apple for their "cute friendly monopoly" tactics.
Yes, and you let the desire for a quick jibe lead to oversimplification. The level of access which is needed to implement things like this also allows very powerful attacks. It’s not unsophisticated but realistic to recognize that allowing that level of access would have some benefits but would also reliably produce a large number of victims who trusted the wrong vendor. Reducing the number of parties who have to get it right to keep you secure has a significant benefit, especially if you’re familiar with the long history of companies which were acting in bad faith or compromised.
> And yet, we do no such thing when it comes to home and property security, financial security or medical records security. So, why when it comes to a phone which clearly has less overall value than these items, is it suddenly necessary to throw in the towel and allow an unnatural monopoly to form?
I think that there's a practical reason. For all your examples, the companies operating the solutions can be held to US laws and regulations. But purchasing (or downloading for free!) software from anywhere in the world cannot be regulated effectively (at all?).
So as a consumer, there is base level trust I have in companies providing me home & property security, financial security, and medical records security because they can be constrained by US laws & regulations, such as minimum standards. Not so for random software that I download for free or buy from some overseas (or basement somewhere in the US) location.
It's also a handy way to keep their stranglehold on iOS web browsers, forcing all to use webkit. How exactly they turn off JIT compiling and allow any javascript to run at all, I don't really understand, and I don't know what vulnerabilities they must be aware of in Safari's engine that could lead to unsandboxed code execution (although thinking about it, this seems to prove they're aware of something inherently unsafe there). But if their claim is along the lines that all JIT compilers are vulnerable, that's a strong case for never allowing V8 or any other engine in the app store.
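For intuition on how an engine can still run JavaScript with JIT off: here's a toy Python analogue (emphatically not how JavaScriptCore is implemented) contrasting a tree-walking interpreter, which generates no machine code at runtime, with a compile-then-execute path:

```python
import ast
import operator

# Toy analogue only: a real JS engine is vastly more complex.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def interpret(expr: str):
    """Interpreter path: walk the syntax tree node by node.
    No executable code is generated at runtime, so no writable,
    then-executable memory is needed (roughly what disabling JIT forces)."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported node")
    return walk(ast.parse(expr, mode="eval").body)

def compile_path(expr: str):
    """'JIT-like' path: generate executable code once, then reuse it.
    Runtime code generation is the extra attack surface a JIT brings."""
    return eval(compile(f"lambda: {expr}", "<gen>", "eval"))

# Both paths compute the same result; only the mechanism differs.
assert interpret("2 * (3 + 4)") == compile_path("2 * (3 + 4)")() == 14
```

The trade-off is the usual one: the interpreter path is slower but never maps writable-executable memory, which is why "slower JavaScript" is the visible symptom of turning JIT off.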
> But if their claim is along the lines that all JIT compilers are vulnerable, that's a strong case for never allowing V8 or any other engine in the app store.
I’m okay with this; I’ve always felt that dealing with the security issues of 3rd party rendering engines and JavaScript implementations is a valid reason to not allow them on iOS.
Since Apple is the platform vendor, at the end of the day, if there’s a vulnerability, it’s their responsibility, even if (in a hypothetical future) it’s Google’s or Mozilla’s JIT that allowed the malware to be installed on a user's device.
Of course, since all browsers on iOS use WebKit and JavaScript Core, they all get Lockdown protection for free.
This lockdown mode means they can support those other browsers in a non-lockdown mode. All they have to do is have lockdown mode disable all non-webkit browsers.
> You know what people do when they're targeted by state actors? They don't use computers. And if they have to, they air gap.
That's like saying "men who don't have easy access to condoms just stay abstinent instead". This is what we wish would happen. But empirically, they just shrug and do the insecure thing.
(There was an article posted on HN a few years ago that was from a journalist pointing out this exact thing, from his personal experience. I can't find it though.)
Ok. You’re in the Republic of Somethingistan. You’re alone. All you have is your phone to contact people at home to help you and some money and you need to get out.
You know the state is after you.
So you ignore this, turn off your phone instead, and… what? Now you’re even more alone, can’t get help from friends/family.
This seems like a very reasonable option in some situations.
It seems like there could be a median area between "in the crosshairs of the KGB" and "I need to avoid off-the-shelf exploits in a specific situation."
A great example of this might be visiting a country like China while on business. Straight up going "off the grid" isn't really an option in that scenario.
> A great example of this might be visiting a country like China while on business. Straight up going "off the grid" isn't really an option in that scenario.
Most corporations who know what they are doing (and some who don’t) send their execs with burner devices when traveling to certain countries on business trips.
And what software will that burner or otherwise locked down phone run?
It's not going to be a flip phone, it's going to be a iOS or Android device specially provisioned by the company's IT department for use in environments like these.
You can't get anything done on a flip phone, you can barely operate in China without WeChat/AliPay.
It wouldn't be very difficult to provision an iOS device with limited connectivity to proprietary information while still maintaining necessary operational communication and productivity. The idea here isn't to just flip Lockdown Mode on and pray that all the secret stuff on your phone doesn't get hacked, the idea is to use it as one tool of many to reduce your blast radius.
You realise users who sit on air-gapped networks generally have a secondary device that connects to the public network. Do you think Elon airgaps his mobile?*
*maybe he has a team that audits comms for malicious activity and payloads, but not everybody is as well resourced, so the point still stands
Let’s not let better be the enemy of good either. Better than terrible is still bad and is nowhere near good.
It is frankly ridiculous that anybody should believe Apple when they claim to provide even minimal resistance to well-funded determined attackers. Protecting against well-funded determined attackers has been the holy grail of software security since forever and everybody in software security at least claims to be working toward that. Despite that, the prevailing state of “best-in-class” “best-practices” commercial software security is objectively terrible including Apple circa 1 year ago.
Are we supposed to believe that Apple, despite abject failure over the last few decades until as recently as the last time they announced security updates to the iPhone, has finally this time, for sure, pinky swear it's true, jumped from terrible to the holy grail, or even good, because they said so?
No, this is absolute, utter, unequivocal garbage. Their claims are completely unsupported and they should be excoriated for spewing unsubstantiated bullshit that muddies the waters of the actual state of software security and misleads people into believing they are getting a meaningful degree of protection or software security.
If they want to make such claims, they should put their money where their mouth is and, instead of certifying iOS to EAL1+ and AVA_VAN.1 as they currently do, they should certify it in “Lockdown Mode” to EAL6-7 and AVA_VAN.5, which actually does certify protection against “high attack potential” attackers such as large organized crime and state-sponsored attackers. At the very least they could certify it to EAL5 and AVA_VAN.4, which certifies protection against “moderate attack potential” attackers. Until they do that, their claims to protect against state-sponsored attackers are complete unverifiable bullshit.
First off, calm down. This feature came out today. It's not really clear yet how well it will fare. Second, this feature is a step in the direction of Apple accepting that defending against a well-funded attacker is difficult when providing general-purpose software, so this is still a step in the right direction.
It came out today, which means it should be assumed insecure against state sponsored actors until proven otherwise with overwhelming evidence, not we should give it the benefit of the doubt because maybe they really did it the 57th time after 56 total failures.
For that matter, it is not like they could not provide such evidence even though it came out today. It has presumably been in development for some time, so if they did actually provide verifiable protection against state sponsored attackers they could just release their formal proofs of security to that effect and be done with it or at least preliminary certification evidence demonstrating protection against high attack potential attackers as outlined in the international Common Criteria standard via AVA_VAN.5.
iOS is already certified according to Common Criteria as their only advertised security certification, just at the lowest possible level, and it already has a certification for high attack potential attackers, so doing this would be consistent with their existing certification regime and provide clear evidence supporting their claims.
Absent that, I see no independent verifiable evidence of any of their claims, endless precedent to dispute their claims, and not even a token effort to provide even a sliver of objective backing for their claims.
So why should I or anybody else reject the standard wisdom of “you are screwed if state sponsored attackers are interested in you and there is no product that can help you” and instead believe Apple’s marketing that they can?
> It came out today, which means it should be assumed insecure against state sponsored actors…
What was announced today is the first version of a feature in a beta version of an operating system that won’t be released for at least 2 months from now. Chill.
I’m sure there will be the requisite white paper, statements from security experts, verification from industry groups, presentations at security conferences, etc.
In the meanwhile, from what little we know now, it seems to be heading in the right direction.
Okay, point me to a single white paper or certification that can demonstrably, reliably differentiate between products that can protect against state-sponsored attackers and products that cannot, and show any Apple product that has been verified against that standard to protect against state-sponsored attackers.
I will start by pointing to such a standard, the Common Criteria, which can reliably reject systems that cannot protect against state-sponsored attackers: systems such as Windows have never been able to achieve even protection against moderately skilled attackers under it, which is a fair assessment. Under that standard, which iOS and all other Apple products are already certified to, Apple has never once achieved protection against moderately skilled attackers, let alone highly skilled ones. In fact, that very standard declares, from empirical evidence gathered over decades, that it is infeasible to retrofit a system that cannot protect against moderately skilled attackers into one that can.
For reference, one way of demonstrating protection against highly skilled attackers according to the Common Criteria is to subject the systems to a penetration test by the NSA with full access to source code with successful penetration constituting a failure. That is a reference point for what protecting against a state-sponsored actor looks like according to the standard.
Security is not black-and-white, it's shades of gray. This feature aims to make exploitation harder. Formal proofs and certifications are nice but what I just said remains true even in the face of such things. iOS is regularly tested in the real world against highly resourceful attackers, and the results there are far more indicative of how well its security fares than anything else could be.
> it should be assumed insecure against state sponsored actors until proven otherwise with overwhelming evidence
Everything is always insecure. Like in toxicology, it's a matter of degree.
If you're really facing state-sponsored actors, you shouldn't be using an iPhone. You probably shouldn't be using a mobile phone. But that isn't a tradeoff most people are willing to make.
Lockdown Mode existing is unequivocally better than it not existing. Those who would have air-gapped aren't going to be tricked into using Lockdown Mode instead. Rather, those who would have reluctantly used their iPhones in normal mode and, e.g., turned off location tracking will now be better protected.
Yes, and as in toxicology, it matters very little if instead of injecting a spoonful of botulism you inject a spoonful of less dangerous anthrax. Matters of degree still care about orders of magnitude and about bright lines defining fitness for purpose.
Lockdown Mode is being advertised as protecting against state-sponsored actors: “Lockdown Mode offers an extreme, optional level of security for the very few users who, because of who they are or what they do, may be personally targeted by some of the most sophisticated digital threats, such as those from NSO Group”. They are attempting to convince people who would otherwise air gap to avoid being killed that their systems are perfectly adequate. Their systems are on the order of 100x worse than what is necessary to protect against state-sponsored actors. It is not acceptable to conflate the two just because everything is a shade of gray; one is off-white and the other is off-black, and they are not even remotely similar.
Apple’s advertising of Lockdown Mode is unequivocally worse for the stated use case than not having it at all, since then at the very least people at risk would not be misled into thinking Apple can protect them. If they want to change their advertising to clearly indicate that it should not be used if you are at risk of state-sponsored attacks and that there is no independent verification for any of their claims, then I would agree with you, but they are not doing that. Until they do, they should be censured for making such irresponsible and reckless claims, which dissuade at-risk individuals from taking proper precautions.
I am so excited about this news. I understand that some people are pessimistic, and view it as a "giving up" on complete security against nation-states. I think that's the wrong way to analyze the situation.
The dream I have is someone making a phone that is purpose-built to be secure against state actors. Unfortunately, this makes very little economic sense, and probably won't happen (maybe if some rich person started a foundation or something?). The phone would need to have pretty restricted functionality and would not be generally appealing to mass market consumers.
As it stands, securing a mass-market modern smartphone, even from remote attacks alone, is intractable. We should not bury our heads in the sand and wishfully think that if they just spend a little more money, close a few more bugs, and make the sandboxing a little better, somehow iOS 16 or Android 13 will finally be completely secure against state actors. The set of features being shipped will grow fast enough that security mitigations will never 'catch up'.
This is the next best thing! The more we can give users the freedom to lock down their devices, the more the vision of an actual solution comes into view. This is the first step towards perhaps our only hope of solving this someday - applying formal methods and lots of public scrutiny to a small 'trusted code base', and finally telling NSO group to fuck off.
Even this dream may not pan out, but at least we can have hope.
I would suspect any phone designed to resist a state-level actor that is made available to me (a regular citizen) would 100% be a honeypot for a state-level actor.
This comment feels disingenuous to me, but maybe I'm misinterpreting. Security features are always a service, but there are real apps that provide real security. Signal and Matrix provide real encryption for communication. There are even mainstream products that do, like iMessage or Gmail, though these tend to be more selective about what is secure and what isn't (typically through walled gardens). Apple and Google both use federated learning, which is at least a step better than your typical data "anonymization." I agree that there's not enough push for serious security, especially as a default, but I'm not pessimistic on the subject either.
Signal wants your PSTN ID = real world ID, wants contacts from your phonebook which on Google phones generally means already cloudified, and is itself distributed through Google Play. Further, IIRC it's US-based so subject to acts of intervention from on high. I would be strongly suspicious of any metadata security claims, even if it nominally provides message or session-level encryption. Metadata is bad news.
> IIRC it's US-based so subject to acts of intervention from on high.
Sure, and they have been open about what information they give. If you're talking about being forced to introduce compromised code, I'm not aware of the US government being able to force a company to do that. Signal has said before that they'll shut down and move if this becomes a requirement, and on top of that[1], the code is open source and constantly scrutinized by the security community. So it sounds like a pretty difficult thing to pull off.
I don't think handing your phone number to Signal is as big of a security issue as you're making it out to be.
I have a ton of concerns with Signal. They started collecting and storing user data in the cloud while being deceptive and unclear about it in their communications, leading to a ton of confusion among users. In fact, they're now storing exactly the same data they once bragged about not being able to turn over, since at that time they weren't keeping it. Pretty much as soon as it was clear Signal was going to start keeping user data, users raised objections, asked for a way to opt out of the data collection, and brought up security concerns, but those objections were ignored.
To this day they're violating their own privacy policy because after they started storing user data in the cloud they never bothered to update the policy.
Currently it states: "Signal is designed to never collect or store any sensitive information." while in practice they store your name, your photo, your phone number, and a list of everyone you're in contact with which is pretty damn sensitive, especially if you're an activist or a whistleblower.
I've stopped using/recommending it. To this day I run into posts where people think Signal isn't collecting any user data. I hope every user who has to learn what Signal is really collecting from some random internet comment thinks long and hard about what that says about how transparent and trustworthy Signal is.
I'll give Session a look! Right now I'm using silence for unsecured texting and Jami for secure communication, but both lack polish and going from signal to silence was rough. It really needs a search function.
Anyone not following all the drama at the time wouldn't have a clue, and a bunch of people who did still came away with incorrect information anyway because Signal didn't make it clear at all what they were doing and they've gone out of their way to avoid answering direct questions in a clear way ever since, instead keeping the myth that they don't collect user data alive.
There's no reason they couldn't have provided a simple opt out for the data collection and avoided the issue entirely and the fact that they wouldn't do that was red flag enough, but the mess of confusion their communications caused and their refusal to update their privacy policy should be all the evidence we need that they're not to be trusted. To be fair to the folks at Signal, they may actually be trying to communicate that very message to their users as loudly as they're legally able to.
The whole cloud data collection, and the fact that their privacy policy has now been verifiably incorrect for over two years, certainly makes it plausible there's more they're keeping from us.
Sure. Aside from the Google phones upload contacts to cloud issue, and the encouraging contacts to be added thing, there are two clear problems: both metadata.
(1) It's the network of phone numbers - who knows who, when they added, that starts to draw a picture.
(2) If they have any infrastructure at all (update checks, contact additions, anything that phones home or gets polled), particularly anything that can facilitate a network response (generating network traffic when an ID is added), then the app effectively acts as an element that can be used for identity verification, even if all traffic is encrypted. This is not a small issue.
These issues are not unique to Signal, but they should not be swept under the rug. FWIW I do not claim to have read or audited their code, I just feel the use of PSTN IDs (== highly available link to personal identification) is a total farce which introduces huge risk for nearly no benefit to users and is fundamentally incompatible with their nominal public stated goals (again haven't read the official text) of end user security if that security is supposed to be best-effort.
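The second point above can be made concrete with a toy timing-correlation sketch. Everything here is hypothetical (the `correlate` helper, the window, the numbers); it only illustrates how triggered network responses leak identity, and describes no real protocol:

```python
def correlate(trigger_times, observed_bursts, window=2.0):
    """Fraction of probe events followed by an encrypted-traffic burst
    within `window` seconds. No message content is read; timing alone
    is enough to link a device to an identity."""
    hits = sum(
        any(0 <= b - t <= window for b in observed_bursts)
        for t in trigger_times
    )
    return hits / len(trigger_times)

# The attacker adds the target's ID at times they control (probes)...
probes = [10.0, 60.0, 120.0]
# ...while passively watching a suspect device's encrypted traffic.
bursts = [10.4, 33.0, 60.9, 121.1]
score = correlate(probes, bursts)
# score == 1.0: every probe produced a burst, strongly linking the device.
```

This is why "all traffic is encrypted" does not by itself neutralize metadata: the mere existence and timing of traffic carries signal.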
> Sure. Aside from the Google phones upload contacts to cloud issue
You can add contacts through Signal that aren't synced with Google. I've just understood this process as a way to initiate the social graph. You can just not give Signal access and start from scratch, but I don't think that accomplishes much.
Also, as far as I'm aware, Signal doesn't actually know your phone number.
The thing is, some percentage of your contacts will accidentally or knowingly grant permission for their contacts to go to Google. So by linking to that infrastructure Signal is making this problem worse, whether or not they actually facilitate the spying themselves.
I assume you're an FBI agent trying to encourage people to install your real cooler encrypted app that's not on the store and only available via sideloading.
Heh, nice one. Not that it's my area, but in case the above was not decodable as sarcasm to other readers, following the evidence-based / defense-in-depth strategies I'd personally recommend not using phones at all (far too little control in general) and instead recommend seeking out auditable (open source) software on actual machines you have a hope to control for secure communications. It's a deep rabbit hole with diminishing returns, though.
It's definitely tin-foil-hat level. Obviously if you're a spy you're gonna need next-level stuff, but most of us aren't Jason Bourne, even if we'd like to think we are.
There are a lot of bad actors in the security space. DDG, for example. Companies like perimeter 81 I don't trust based solely on the fact that Israel regularly and frequently acts nefariously. Bitlocker replaces good drive encryption you control with something that can be unlocked by authorities. Plenty of PRISM compromised companies offer security...
SMS and email are insecure-by-default protocols. Gmail/iMessage extend them, which necessarily creates vendor lock-in when the extension relies on some centralized service, the extensions are private, and the implementations are closed source.
Matrix fixes this, but only in the sense that they replace the whole protocol without reverse compatibility.
This comment is especially true for the majority of the VPN companies plaguing YouTube ads/sponsorships right now. It's interesting they've all pivoted more towards "get netflix content from any country" than security, and also interesting that none of the streaming services have gone after them for doing so.
Most of the people in charge only care about which state the "bad"/"good" actors are from; preferably, "our guys" should be able to do everything, and "theirs" nothing.
Bunnie Huang is working on Betrusted [1], a communications device that is designed to be secure from state actors. The first step is Precursor (about: [2], purchase:[3]) the hardware and OS that will be the platform for the communications device.
It's designed to be secure even though it communicates via insecure wifi, for instance via tethering or at home. The CPU and most peripherals are in an FPGA with an auditable bitstream to program the device to ensure there are no back doors. Hardware and software are all open source. It has anti-tamper capability.
It's not rigorously provable, but to a large extent a "backdoored FPGA" is complete nonsense and not even worth considering.
The manufacturer/adversary knows nothing about your core design or where you'll place logic. Synthesis tools literally randomize routing and placement on each run, a natural consequence of routing being NP-hard. Further, once you consider that FPGAs are fairly high-volume goods, with the same chip sold to thousands of different companies, it makes even less sense: now you need a backdoor that activates only on specific random designs but not on any other design in regular industry use, since an activation there would produce incorrect circuit behavior. You'd also need this behavior to not show up under automated verification (you're running a verification suite against your chips, right?), which is verging on science fiction. While I guess you could do something like this, it'd be wildly impractical in every sense of the word.
FPGAs just have a much lower essential complexity.
Adding one undocumented latch is enough to undermine an ASIC CPU. To do that to an FPGA, you'd have to know where the layout engine is putting the circuit you intend to pwn, and good luck with that staying still under any revision.
If this did become a problem, a technique analogous to memory randomization could be employed to make any given kernel unique from the hardware's perspective.
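The seed dependence of placement that these comments lean on can be illustrated with a toy placer. The `place` helper below is entirely hypothetical; real place-and-route uses simulated annealing and timing-driven optimization, but the property being argued (the fab cannot predict where your logic lands) survives the simplification:

```python
import random

def place(cells, grid_size, seed):
    """Toy placer: assign each logic cell a pseudorandom grid site.
    Real P&R is far more sophisticated; the point is that the result
    depends on the seed, not just on the netlist."""
    rng = random.Random(seed)
    sites = [(x, y) for x in range(grid_size) for y in range(grid_size)]
    rng.shuffle(sites)
    return dict(zip(cells, sites))

cells = [f"lut{i}" for i in range(16)]
layout_a = place(cells, 8, seed=1)
layout_b = place(cells, 8, seed=2)

# With different seeds, the same logic lands on different physical sites,
# so a mask-level backdoor cannot know in advance where a target circuit
# (or any future revision of it) will be placed.
moved = sum(layout_a[c] != layout_b[c] for c in cells)
```

Re-running synthesis with a fresh seed is exactly the "memory randomization analogue" suggested above: each build yields a layout the adversary has never seen.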
You can’t, of course, know, but modifying the mask of a modern chip (millions of dollars by itself) and slipping those masks (you need many, one per layer of material) into production to target a subset of devices, in a way that lets you inject faults and own the design the FPGA is emulating, is nuclear-power-level effort. And I would imagine they would not risk it often, if at all, due to the fallout it could cause.
A microcontroller on 130nm? Different story probably. Still crazy hard
I want deniability. After watching the videos from Ukraine of Russians pulling citizens out of cars and forcing them to unlock their phones with guns to their heads, I want a way to hand someone a phone, unlock it, and STILL be protected. I want my private things in a volume with deniability. TrueCrypt was close.
I would pay a good premium for an iPhone with a distress code that unlocks the phone into an environment with some fake but plausible contents. Bonus points if it optionally wipes the real user partition upon entering the code.
That sort of exists, but only sort of. If you press the lock button on the side of the iPhone five consecutive times, it will then require your passcode to unlock (hopefully a high entropy passphrase), and will disable biometric authentication until unlocked with a passcode. You can set the phone to wipe after 10 failed attempts to unlock.
You can also say "Hey Siri, whose phone is this?" and your phone will lock down the same way as described above.
Of course, this doesn't protect from the $5 wrench attack, but plausible deniability only goes so far as well in a targeted attack. At least, depending on your local laws, law enforcement may not be able to compel you to provide your passphrase, but they can easily force you to use your biometric data, so this protects against that.
>>The dream I have is someone making a phone that is purpose-built to be secure against state actors
I just don't see how anyone could build such a thing. State level actors have the tools necessary to force you or your company to build in any backdoor they want, and prevent you from ever talking about it to anyone. US certainly does, and could just force apple to add a backdoor to this lockdown mode and apple could never even hint at its existence under legal threat.
Not just the US; so do the EU, any Five Eyes country, China, Korea, Taiwan. The US doesn't have a monopoly on backdoors, so let's always remember that and not exclude others or act like it's an island of corruption in a world of benevolent state actors.
I don't think Korea or Australia have the power to force Apple to build backdoors into their products. Maybe they'd get to use the US one if they asked nicely.
Looking forward to when Apple manufactures all iPhones in Sweden. Or did you mean the US, which remains stubbornly overseas and scary to the majority of the world’s population?
I don't recall getting a vote. Do you even know of a single device made in a relatively "benevolent" state actor country? I would love to know. I would love it if there was a provably secure device manufactured in some remote Pacific island that has never projected itself as a malevolent international threat like 100% of the first world countries have.
Realistically you cannot win against a resourceful adversary every time. But merely painting the situation through the lens of premature surrender is also a disservice.
It will be interesting to see what third-party researchers discover about these new protections. You might remember that Apple rewrote the format parsers for iMessage in a memory-safe language with sandboxing (BlastDoor), and it was later discovered there was still plenty of attack surface in the unprotected parsers.
It might just be better to not rely on a phone, rather than rely on something achieving perfect security against the most malicious and capable of actors.
If I was really concerned about targeted cyber attacks against me, I think that I would exclusively use computers that I would buy from random people on Craigslist, take the hard drives out and only boot with live CDs using ram disks, and only connect via random public Wi-Fi locations.
> If I was really concerned about targeted cyber attacks against me, I think that I would exclusively use computers that I would buy from random people on Craigslist, take the hard drives out and only boot with live CDs using ram disks, and only connect via random public Wi-Fi locations.
Excellent precautions if you live and work in average middle-class suburbia and never go anywhere or do anything dangerous, controversial, or politically unpopular.
Lockdown Mode is not for you. It's for other people with different lives.
My point is lockdown mode won't be good enough. Which is why there is still a big bounty for it. And those wouldn't be excellent precautions if you weren't doing anything dangerous, because they would be a huge burden over just operating normally above board.
How exactly does this method stop working in cities? You could have provided some content instead of a weirdly vitriolic dismissal.
The parent was simply explaining that lockdown is not intended for a person who buys computers from Craigslist in order to enforce security.
Your mitigation is not a mitigation against being singly targeted. There are so many attack vectors in a computer outside of the boot disk. The computers sold on Craigslist should not be considered secure, since there is no level of trust in the supply chain or the state of the hardware.
For ex: If you are being directly targeted, a nation-state can purchase the computers from your local Craigslist, rewrite their bios, and list them for you to purchase. Then flood Craigslist with 100 other compromised machines.
I was explaining why your use case of purchasing computers from Craigslist does not secure against nation-state targeted attacks. Now you are changing the conversation and saying there are other ways to attack. Of course there are many other attack vectors; I mentioned that. However, the conversation was about the true level of security provided by your mitigation.
I'm not changing the conversation, I'm pointing out the simple, currently-used-against-dissident attacks that are not possible if there isn't a clear connection between dissident and device. It certainly provides pretty good protection compared to having an always connected device with a unique ID carried on you at all times. Security is oftentimes about making reasonable tradeoffs based on your risk levels.
And I think you may be overestimating even the resources and capabilities of nations.
Let's say you lived in Philadelphia. You could drive down to Baltimore or up to NYC in 90 minutes. Within that range, there are literally over 10,000 individuals selling 1 or more laptops on craigslist and other sites that I did a cursory search over. And that's not even counting all of the small mom and pop shops that are selling laptops, as well as the big box stores.
How should the adversary state figure out which of those people you're going to purchase from? Should they purchase literally every laptop in the region? Okay then...what about when people start selling more laptops they had in storage because the market is red hot?
What do they even do when they have the laptops? Do they have exploits for every BIOS for every type of laptop for the past 15 years? How do they sell the laptop to me? Do they have their agents sell them? Do they have hundreds of agents who are deep undercover in America, who could lure me in?
I just don't see "buy every laptop in a region, exploit it, and resell it, hope your target picks one up" as a viable strategy, even for the wealthiest of nations, assuming you need to do it discreetly.
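For a rough sense of scale, here is back-of-the-envelope arithmetic. Every figure except the 10,000-seller estimate from the comment above is an assumption, not a known cost:

```python
sellers = 10_000        # cursory-search estimate from the comment above
avg_price = 300         # assumed average used-laptop price, USD
acquisition = sellers * avg_price          # cost just to buy the inventory

distinct_models = 500   # assumed variety of laptop models over ~15 years
exploit_cost = 50_000   # assumed per-BIOS exploit development cost, USD
exploitation = distinct_models * exploit_cost

total = acquisition + exploitation
# acquisition = 3,000,000; exploitation = 25,000,000; total = 28,000,000
```

Even with these deliberately conservative numbers, a ~$28M bill (before resale logistics, undercover agents, and opsec risk) per single target makes "buy and backdoor every laptop in the region" look implausible.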
This is a fantasy that could only come from someone who doesn't actually need it. The people who actually need Lockdown Mode -- dissidents, organizers, journalists, etc. -- also actually need to communicate with normal people, and that means having a phone. If you're so unimportant that you can get away with your proposed computing scheme, you're not going to be the recipient of targeted cyber-attacks.
Well, I don't need it, but the people who do need it usually don't have much of a clue about infosec or cyber security.
What means of communication are available to you via a phone but not via an internet connected computer?
There isn't even anything intrinsically wrong with a cell phone, other than the fact that it encourages you to carry it everywhere and merge all communications with everyone onto a single device that is default connected to the internet.
The dream I have is that they do not sack us with taxes that later on they use to violate our rights.
The first thing is to remove a lot of the economic and legislative power states have, and hardening security in devices is also good news. But the problem is also that they have so much money and power that they can misspend money to target people and violate their rights just because they can.
The potential a phone like that would have, if you explained to people how states can and do put their noses into their lives, is quite big IMHO. It's just that people have no idea how much of your info can be taken through a phone.
In general, I'm much more concerned with private actors than state actors. I'm aware of multiple ways in which companies use information to try to extract money from me, and they actively make my life worse in the attempt.
I have a much harder time thinking about how giving states access to my information has been harmful for me. I can think of potential harms if the state started doing religious or ethnic persecution (not trying to diminish the chance of this, but it is not a problem today), so I'm aware of potential threats. But other than that... what exactly should I be worried about?
The problem is what you say: political and religious persecution. Not today, but hey, who knows when, given the right situation. So it is better that they cannot have our data, right? I mean, that is the safe side of the fence.
I am concerned about any actor, as well, but take into account that a state has a huge amount of resources and if they are motivated enough can make your life worse than almost any private actor. It has happened in history, this is not something fictional.
The reach at which a private actor can harm you is much more limited in the general case IMHO.
The problem in 90% of cases is the user himself. Advanced attacks such as spyware-for-hire with zero-days and the like only affect a minority of users. For the vast majority, the vulnerabilities are much simpler: password reuse and carelessness, malware on other devices (laptop, etc.) that also have access to their data, willingly sharing too much information, etc.
You don't need a special phone or hardened OS to defend against that, and users vulnerable to this will remain just as vulnerable regardless of how much hardening there is.
Most people couldn’t grasp the important ramifications even if you walked them through it from first principles. I’m not sure I can despite being very interested in information entropy my whole life.
A lot of people really don’t understand much at all about anything that they don’t constantly see and touch their whole lives. A lot of people truly just live in the moment constantly and use their higher order thinking for social navigation and sex.
I feel like the closest you can come to the dream of a phone that is secure against state actors today would be a google pixel phone running graphene os.
With this announcement, Apple is saying "we will protect you from state actors", which is a role usually performed by states. Apple is saying "we operate at the same level as nation states; we are a nation-state-level entity operating in the digital world." It's a flag-raise.
It's the first such flag-raise I've seen. Security researchers talk about protections from state actors all the time, and there are tools which support that... but this is the first public announcement, and tool, from a corporation with more spare, unrestricted capital than many countries. It comes at a time when multiple nation states are competing for energy and food security; and Apple are throwing up a flag for a security-security fight (or maybe data-security). This is not just handy tech, it's full-on cultural zeitgeist stuff. Amazing.
There's a bit of a journey from "protecting you against government hackers and spooks" to full-on sovereign states; and there's a lot of things that a country's government funds that Apple couldn't even begin to take on[0]. Physical security and military operations are a hell of a different field from that of locking down computers.
Furthermore this isn't the first of its kind; Google has been alerting high-risk Gmail users about state-sponsored hacking for about a decade now. Microsoft probably does something similar. Apple is comparatively late to the party on this. On the offensive side you have the zero-day vendors that broker exploits between hackers and the government.
A better explanation is that Apple isn't supplanting the US government. It's supplanting Halliburton. As more and more people and things go online, hacking and doxxing them is becoming more militarily valuable than just arresting someone or firing a missile. After all, physical attacks risk counterattacks and escalation, but Internet attacks are relatively cheap, not really treated as an attack by many sovereign states, and, most importantly, difficult to attribute.
[0] Call me when Apple black-bags Louis Rossman for illegally repairing MacBooks, or threatens literal nuclear war - like, with uranium bombs and radioactive fallout - on the EU for breaking the App Store business model.
> Furthermore this isn't the first of its kind; Google has been alerting high-risk Gmail users about state-sponsored hacking for about a decade now. Microsoft probably does something similar.
It’s great that Google alerted Gmail users, but then what?
“We believe you may be a target of a state-sponsored attacker; have a nice day.”
Beyond just telling you, Apple is providing some tools to do something about it.
Google advanced protection mode has been available for a while.
The threat models are different because the companies provide different services (spear phishing defenses from the web services company, hardware defences from the hardware provider), but still.
I'm not saying it never happens, and I don't want to assume anything about your background, but I think most people who work in software would agree there's no need. Plenty of problems get in on their own.
Yep, if that were your goal it would be way more cost-effective to get a zero-day by just not trying that hard with security practices: not having any security knowledge on the team, not patching/upgrading dependencies with security bugs.
It doesn't make sense from a numbers perspective; there's simply not that much potential for profit there. In general, the sale price of a zero-day or ten in some popular product is tiny compared to, for example, the marketing budget of that product.
That money is significant from the perspective of a particular employee (i.e. if they personally would get the money) or for a specialized consulting company, but it's a drop in the ocean for the large companies actually making the products. So we should expect some backdoors intentionally placed by rogue employees (either for financial motivation or at the behest of some government) but not knowingly placed by the organizations - unless in cooperation with their host government, not for financial reasons.
>Apple is saying "we operate at the same level as nation states; we are a nation-state level entity operating in the "digital world"
Making mountains out of molehills.
I'm pretty sure they are saying that they will "offer specialized additional protection to users who may be at risk of highly targeted cyberattacks from private companies developing state-sponsored mercenary spyware".
There is a looooong list of things which nation states can do which Apple cannot, some examples of that are in other comments in this thread.
>but this is the first public announcement, and tool, from a corporation with more spare, unrestricted capital than many countries.
Google & Microsoft have both had fairly long-standing tools and procedures (which were publicly announced) to both alert users and aid users against nation state attacks.
The NSO Group, whom Apple specifically cites as an opponent that inspired this work, is a private corporation. They sell to governments, but so does Apple.
The relationship between state and private industry has never been binary and has always had features like this. I don't think this is a "Jennifer Government" type scenario.
At the same time, if that state actor happens to be China, Apple will just give the government access to your iCloud data. Not all state actors are equally within Apple's striking range.
It is worth mentioning that things like National Security Letters exist in the US. It was also the US that made Apple back off of encrypting iCloud backups end-to-end.
I wish we were more willing to cite our own government(s) as the bad actors here, rather than pretending that we have to reach for China/Russia/North Korea to find the kind of behavior Apple is attempting to protect its users against here.
Not to mention the CLOUD (Clarifying Lawful Overseas Use of Data) Act, which was enacted following a case in 2014 where Microsoft refused to hand over emails stored in the EU (an Irish data centre, in that case) on foot of a domestic US warrant.
The CLOUD Act expressly brings data stored by US-based companies anywhere in the world under the purview of US warrants and subpoenas.
This has always been the law. Common law courts have been issuing court orders that require you to take actions in foreign countries, even in violation of foreign law, for as long as it's been a legal question. The CLOUD Act actually introduced some additional safeguards and allows judges to consider the seriousness of the foreign law violation and weigh it against the importance of the court getting access to the foreign-stored data.
You unfortunately need something like this because otherwise people will just hide documents, money, stolen property, etc. in foreign countries out of reach of US courts, even if they are US persons and corporations.
It isn't just pro-government. Imagine you are a criminal defendant and there is evidence proving your innocence in a foreign server controlled by an American person or company. This rule makes sure you can legally compel that entity to go get the data, the laws of that other country be damned, so you can present your defense.
While extra-territoriality is not a new concept, it’s absolutely false to say that the CLOUD Act didn’t grant sweeping new powers to US courts. That’s a truly absurd claim that makes me question whether you’re commenting in good faith.
It was passed because in the Microsoft v. US case, the Supreme Court was expected to affirm the long-standing law on this: that in response to a U.S. court order, Microsoft had to hand over user data from Irish servers, Irish law be damned.
Such a blunt rule was considered a little too harsh, and a potential source of international problems, so Congress passed a law softening the rule and allowing judges more discretion in considering the burdens of complying with the order. The law had the effect of making the Supreme Court case moot.
Sorry that the truth is more nuanced than you’d like it to be.
There is nuance, but in the opposite direction. Microsoft did not adhere to the original court order, and fought it to the Supreme Court, where it was undecided when the CLOUD Act came into force and a new warrant was issued for the data held in Ireland.
It is unambiguously an expansion of Government powers. You're the first and only person I've ever come across who has argued the opposite. It's such a ridiculous thing to write that I am wondering if you're trolling me?
>There is nuance, but in the opposite direction. Microsoft did not adhere to the original court order, and fought it to the Supreme Court, where it was undecided when the CLOUD Act came into force and a new warrant was issued for the data held in Ireland.
What part of this do you think is incompatible with the fact that almost everyone expected Microsoft to lose the case?
And in fact, Microsoft, Apple, and Google lobbied for the CLOUD Act.
So maybe instead of accusing people of bad faith, you should have a little humility and open-mindedness to improving your understanding of the world. Believe it or not, techie discussion forums and Wired are not reliable sources of legal information, so that would explain why you're so misinformed.
It's part of the reason that Privacy Shield collapsed and why the US isn't considered to offer adequate protection to EU residents. It's currently being both litigated (as more and more EU country data protection agencies make individual rulings that specific instances of transfers of personal data to US companies are unlawful) and the subject of intense political negotiation between the EU and US.
Most companies affected are currently awaiting the results of these processes, because following the current precedent to its logical conclusion, it appears unlawful to transfer any personal data of an EU resident to a US-based company (even if that data remains physically in the EU or another adequate country). That would obviously have catastrophic consequences for the current status quo, so it's hard to believe that a compromise won't be found to avoid it.
However, it's also hard to see a compromise unless the United States exempts EU data subjects from the CLOUD Act, which seems unlikely. Hard to know where it'll go.
> However, it's also hard to see a compromise unless the United States exempts EU data subjects from the CLOUD Act, which seems unlikely. Hard to know where it'll go.
Bureaucrats are capable of breathtaking sophistry when it makes their jobs easier. If red was illegal but convenient they’d make a policy that red was actually green and argue it was until they were blue in the face.
It's not entirely clear yet who wins, but the current issues with Google Analytics in the EU seem to be partially related. Some countries have come to the conclusion that GA can't be legal if Google US has access to the data.
Nothing stops Apple from offering e2ee backups, and in fact they do this for certain data backed up to iCloud (health data for example.)
But your iMessage data...well there, your ass is hanging out in the breeze. In fact, I'm not sure it's possible to log into an iPhone with your Apple ID and not have an iCloud backup immediately fire off, which means your private encryption keys hit iCloud and stay there until they are purged according to their data retention policies. And we have no idea what those policies actually are; those keys may end up stored forever.
> Nothing stops Apple from offering e2ee backups, and in fact they do this for certain data backed up to iCloud (health data for example.)
Almost all users can't handle this; to support people, you need to be able to recover their account even when they've lost every single password and proof of identity they could possibly have. It's not a backup if you can't restore it.
> In fact, I'm not sure it's possible to log into an iPhone with your Apple ID and not have an iCloud backup immediately fire off
You are correct there’s a bit of a dark pattern going on here, but it is possible (to the extent the code does what it says, of course). To be extra sure, I have a custom lockdown MDM profile to disallow iCloud backups, as well as a number of other nefarious things like analytics. Whenever I get a new device, I first DFU restore it to the latest iOS image to ensure the software (post-bootrom) isn’t tampered with, then activate it and install the MDM profile via a Mac, and only then do I interact with the device and go through setup.
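For reference, the kind of MDM profile described above is just a plist with a Restrictions payload. A minimal sketch might look like the following; `allowCloudBackup` and `allowDiagnosticSubmission` are real Restrictions keys, but the identifiers and UUIDs here are placeholders, and some restrictions only take effect on supervised devices:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>PayloadType</key>
    <string>Configuration</string>
    <key>PayloadIdentifier</key>
    <string>com.example.custom-lockdown</string>
    <key>PayloadUUID</key>
    <string>00000000-0000-0000-0000-000000000001</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
    <key>PayloadDisplayName</key>
    <string>Custom Lockdown</string>
    <key>PayloadContent</key>
    <array>
        <dict>
            <key>PayloadType</key>
            <string>com.apple.applicationaccess</string>
            <key>PayloadIdentifier</key>
            <string>com.example.custom-lockdown.restrictions</string>
            <key>PayloadUUID</key>
            <string>00000000-0000-0000-0000-000000000002</string>
            <key>PayloadVersion</key>
            <integer>1</integer>
            <!-- Disallow backing the device up to iCloud -->
            <key>allowCloudBackup</key>
            <false/>
            <!-- Disallow sending diagnostics/analytics to Apple -->
            <key>allowDiagnosticSubmission</key>
            <false/>
        </dict>
    </array>
</dict>
</plist>
```

You can install a profile like this on a device via Apple Configurator on a Mac, which matches the activate-then-install flow described above.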
The only persistent connection Apple has that I can think of to implement such a concept is for push notifications. Which would be a massive security hole if an HTTP response to that daemon were capable of bypassing the lock screen, secure enclave, etc.
And the logical question is if they had such a system why would they bother triggering an iCloud Backup when they could ask the device to specifically hand over certain information e.g. Messages. Which at least could be done quietly over Cellular.
> Which would be a massive security hole if a HTTP response to that daemon was capable of bypassing the lock screen, secure enclave etc.
I mean, Apple has killswitches for every iPhone they ship. I wouldn't be the least bit surprised if that suite of tools also included settings management (macOS has such a thing built in, fwiw).
Yes, this is Apple protecting you against extralegal state actor threats. There's not really much Apple can do to protect you against the laws of your own country.
Because they are complying with Chinese laws regarding data localization in the country and have been known to work with China (recently YMTC chip deal, previously in a major unreported deal that was unearthed a little while ago) in order to get market access.
"Apple is moving some of the personal data of Chinese customers to a data center in Guiyang that is owned and operated by the Chinese government. State employees physically manage the facility and servers and have direct access to the data stored there; Apple has already abandoned encryption in China due to state limitations that render it ineffective."
I really dislike that there is so much social control :( In theory it's there to protect you. In practice it can be and is misused in so many ways that it shouldn't even be allowed without a judge's authorization.
You're kind of missing the point. The Chinese government has unlimited social control. Even if there was some sort of written law in China requiring judicial oversight, that wouldn't limit social control because the judiciary is just a rubber stamp.
Apple has abandoned encryption for everyone in iCloud. You cannot encrypt anything except a limited subset of your device's data (Apple Health data, mostly.)
That may be true, but Reuters reported that Apple had a plan for it (which means they felt it was workable) and dropped it due to pressure from FBI/DOJ.
Also, there are many users who would benefit from e2ee iCloud backups who are not targets of NSO Group-type attacks, so I don't think it makes sense to make it only available in "Lockdown Mode".
I was all prepared to answer this with "so Reuters reporting something makes it true?", only to discover that, in fact, Reuters reported no such thing.
Reuters makes two claims:
1) The FBI talked to Apple (duh)
2) An unannounced plan to implement fully E2EE backups was no longer discussed with the FBI at their next meeting
Both of those things might be true! Reuters isn't known for just making stuff like this up, like, say Bloomberg, but the article specifically says:
"When Apple spoke privately to the FBI about its work on phone security the following year, the end-to-end encryption plan had been dropped, according to the six sources. Reuters could not determine why exactly Apple dropped the plan."
So we've got an unannounced product, which the FBI didn't like, which Apple stopped talking to the FBI about (according to some leakers at the FBI).
This does not add up to "Apple dropped plans due to pressure from [the] FBI/DOJ". It adds up to "secretive company discusses plans with secretive agency, and some stuff about that conversation leaked".
I would suggest that if you're doing anything illegal in the country you're staying in, turn off iCloud sync at the very least. The best policy is not to use an iPhone at all, but an Android phone with an open source operating system like GrapheneOS.
> In Apple's defense E2E encryption also makes it a lot easier to get locked out of your photos and device backups.
This is likely the real reason E2E hasn't been done yet. I would wager Apple deals with orders of magnitude more people who are locked out of their phones than the number impacted by the lack of E2E backups. Trusted recovery contact added in the last iOS version is a step in a direction of providing some way to implement E2E, and still give people a way to recover.
Definitely very interesting. I know Google has their “Advanced Protection Program”[0] with a Titan security key, which is similar. It is interesting considering that Google’s protections target the user as the weak link, since your data lives on their hardware; while Apple is obviously targeting both the user and the hardware they have. I’m curious what security researchers will think of this, whether it’s more theater or actually an innovative attempt at giving advanced privacy to people who need it. Despite their past stumbles (e.g., CSAM), it seems like Apple is genuinely in the privacy fight, even if it is just for their bottom line.
Microsoft has a “Democracy Forward” team (previously called “Defending Democracy”) that aims to protect government officials and systems from adversarial state actors. It’s been ongoing for a few years now.
Given their track record, I'd trust Microsoft approximately 0% to secure my critical/sensitive systems. The funny thing is that the U.S. government does, in fact, trust them.
I think you're letting the reality distortion field get to your head. They're creating a safe mode for iPhones because a lot of features are complex/intricate enough that they're perennial sources of vulnerabilities (and/or UX flaws that lead users to make unsafe decisions).
That is, they're turning features off for security. Something every IT department has been doing for decades. Windows supports this. Mac OS supports this. In fact, iOS was kind of notable in being so unconfigurable. The settings available in their MDM implementation were pitiful and didn't let admins disable many of these features.
> Apple is saying "we operate at the same level as nation states; we are a nation-state level entity operating in the "digital world"
Apple's profits are bigger than my country's (Slovenia) whole GDP. You bet your butt they're a state level actor in the digital world. They have more resources than many countries.
If Apple was a country, their $365bn in revenue would make them the 43rd richest country in the world right after Hong Kong.
This also points out how the increasing costs of technology and economies of scale mean that small countries like Slovenia are no longer viable on their own. The only way they will be able to survive the next few decades and avoid turning into failed states is to surrender most of their sovereignty to larger regional alliances.
My understanding is that the "social contract" inside many of these large companies is quite cushy. Especially in USA where being employed comes with services traditionally provided by the state like health care, child care, free or subsidized food, retirement benefits, etc.
I think Apple's announcement (and, as I've learned from this thread, MS's and Google's similar programmes) represents a significant step-change. A single defense attorney performs this action on a case-by-case basis, and they earn "single human" levels of income from it. They (to some degree) use that money to make themselves comfortable and perhaps share it with charities and make investments. All the defense attorneys in the world combined still, probably, have access to a fraction of Apple's budget, and a fraction of Apple's audience. Defense attorneys don't always win all their cases.
Apple has the kind of money that makes thousands of attorneys envious. Apple uses that money to build infrastructure and client devices and then sells/shares that technology with billions of people. Most of Apple's customers buy their phones on loan agreements over some contract time frame. It's "cheap", and the protections are automated.
I'm tired and getting rambly about this now, but I intuitively feel like the combination of state-level power (albeit exercised with a very narrow focus), and the way so many people live their (digital) lives interacting with a "noosphere" that crosses international borders, are facets of a complex phenomenon we have not witnessed before, and which will merge with other related facets and then emerge as something really different. I accept I'm getting very fuzzy in my thinking here. I'm leaning into my inner sci-fi author (who's not come out for 30 years and wasn't too talented when it did).
Google has been dealing with nation state actors targeting its users (Gmail specifically) for a decade now. They have Advanced Protection program. We actually regularly used to hear about how human rights activists were targeted in spear phishing campaigns and then arrested.
Agreed, the rise of the corporation as the most powerful institution (above the nation-state) in this new budding global civilization has been a long time coming.
On the other hand, this is how democracy dies. What structures (systems) exist to prevent Apple (and other comparable corporations) from being an oppressive force against human persons? Moreover, what incentives do they have?
I can think of a few, at least applicable in the USA:
Apple doesn't have a military or police force with jurisdiction over me. They don't have the legal power to arrest me or throw me into prisons, which they also don't have. I don't have to pay taxes to Apple. I don't have to do business with them or interact with them in any way if I don't want to. I don't need Apple's permission to do anything unrelated to their product lines.
Same is true for any megacorporation. It's a big stretch to say they are even remotely as powerful as nation-states, let alone more powerful.
Yes, the state's monopoly on force is to me what truly differentiates them into a different category of power than a corporation. Also international recognition for nation states and being able to have treaties and the like, but really it's the monopoly on the use of force. That said, I think the rise of charter cities (think of an SEZ on steroids run by a private corporation) will blur the lines further, although most proposals I've seen for charter cities leave policing to the locality they're residing in.
Many nation states don't have control over interest rates (because their central banks are run independently of the government) or even the ability to print money, if they have adopted another currency.[0]
> Mandatory taxes
States typically tax transactions which happen on their territory (e.g. wages and sales), and in the case of Apple, their devices are their territory, like feudally controlled tracts of land in cyberspace. Taking a cut of all app sales and in-app purchases seems very much like a tax under this analogy.
>Many nation states don't have control over interest rates
And many others do. The State can abdicate such power and it usually does in stable economies where markets can self regulate.
Given a big enough crisis, however, and the State will usually take that power back.
>or even the ability to print money, if they have adopted another currency.
Usually in cases of near total State bankruptcy
>Taking a cut of all app sales and in-app purchases seems very much like a tax under this analogy.
> I don't have to do business with them or interact with them in any way if I don't want to. I don't need Apple's permission to do anything unrelated to their product lines... Same is true for any megacorporation
Nope. You can avoid buying an iPhone, but you cannot escape Google. I'm often forced to "do business" with Google. I've seen several government websites that require code hosted on Google's servers. I need Google's permission to do all kinds of things unrelated to their services (reCAPTCHA), and Google will track everywhere you go online even if you never use any of their services. Facebook also doesn't give you any option. They'll create a profile for you and start collecting data on you even if you've never created an account. You could argue that you pay these companies taxes in the form of your data rather than money, or that the fees they charge developers drive up consumer prices (acting as a tax on the purchases), and I suspect that should Apple/Google Pay become more commonplace, they will start charging a fee (tax) for that as well. Nothing stops them from doing it.
Some corporations even have their own literal armies (Blackwater/Xe/Academi), but others don't bother because they have the ability to command the police and military wherever they are. The RIAA has their own "SWAT" team. They participate directly in raids, breaking down doors and handling evidence.
Companies like Apple and Google are far more invasive than police watching everything you do, listening to everything you say, recording every person you're in contact with. They censor and ban with impunity. If they really wanted to, they could plant data on your devices that would get you arrested and thrown in prison in any country around the globe.
Corporations might not yet be as powerful as a nation state, but they're a lot closer than you give them credit for, and they likely have more direct influence on your day-to-day life and what happens to you.
No, they're nowhere close to being a nation state. Those spheres of power are nothing compared to something like the British East India Company, which had a currency, an army, and forcefully controlled almost 2 million sq. km. of Asia.
Captchas are definitely worthy of criticism, but they are not remotely on the same level as forcefully controlling the land under someone's feet.
The Knights Templar were a religious organisation, but also a quasi-banking institution in Europe; they took and protected deposits of gold, and issued 'cheques' allowing, for example, travellers to deposit gold in London and spend the money in Southern Europe. They were dissolved because they were beginning to rival the Papacy and nations in power due to their immense wealth.
Also, few know this, but many African slaves who were victims of the slave trade became slaves due to debt-slavery (though this didn't involve formal banks). I've seen estimates of up to 25% of slaves back then having been debt-slaves.
Yes! I had heard a bit about the Knights Templar, I guess I would have categorized them as religious first, financial/governance functions second. But also the Order of Malta had quite a lot of power, to the point I believe that it is still recognized by the UN!
I hadn't realized that about African slaves; debt for what?
The ones that only service other banks, which only people working in higher-level banking are likely to have heard of, e.g. the Bank for International Settlements.
I only found out about this bank because the former president of the Mexican central bank -- Mr. Carstens -- left the central banking gig to go to that bank.
From quickly reading their Wikipedia page, it sounds like the BIS has a similar function to, say, the IMF when it comes to financial system stability. I do agree these sorts of organizations exert huge amounts of influence, especially over smaller countries that are dependent on loans and outside financing, but I'm not sure I agree they are more powerful than a nation itself. A nation can (theoretically) decide to opt out of these systems and operate independently, or can play different parties funded by nations (because in the end they are all working for someone's agenda) off of one another, as many countries did during the Cold War between the U.S. and the Soviet Union. But if a nation reneges on its debt, the BIS, IMF, etc. isn't going to invade your country--one of its creditor nations might, but not them.
The BIS is just a counterparty to facilitate payments between nations. It doesn't exert influence in international affairs (except really via the BCBS [1] which sets the Basel capital accords defining how much capital banks have to hold and therefore does have a lot of influence behind the scenes on how banks operate anyway). When the US says it's going to give $100m in aid to some country or one country pays back a debt to another country, there needs to be someone to process the payment, and that someone is the BIS.
Source: friend used to work in the BIS and I've also been involved in banking off and on for a long time, including dealing with various international banking regulators.
Some fun BIS facts:
1) They process payments via regular SWIFT[2] messages. So the $100m in aid comes as a message just the same as if you transferred $5 from one bank account to another. It has an IBAN number like a regular bank account, so if you changed that to your own account details and the message was processed, suddenly $100m would appear in your checking account instead of going to fund an aid programme for some government in Africa or whatnot.
2) The number of payments they process is very low (under 100 per day at most, and usually in the low tens of messages), so every payment message is checked by hand by several independent people, as well as having automated checks. Partly to avoid the risk of funds getting sent to the wrong places, etc.
3) My friend worked there in the 90s and said that even back then they had extremely strong security, with multifactor biometrics on every entry to the premises. You got in via an entrance where you had to step into a cylinder which would only unlock after it had taken multiple photos, including an iris scan.
Based on their history of using their control over the App Store to "protect people" from such harmful content as reporting on how smartphones are made in sweatshops, and from tools (such as VPN clients, and for a long time cryptocurrency wallets) that allow people to bypass restrictions put in place by the nation states Apple works with, I'd claim these incentives are pretty shit :(.
Apple is a public corporation and votes on its corporate direction are freely available on the open market for anyone to purchase. Based on my share ownership Apple is much more subject to my whims than my actual elected politicians are on a % basis.
I was dripping with disdain and sarcasm as I clicked "reply" but I actually want to engage you and have you seriously consider the history of oil and gas exploration and extraction.
This may, in fact, be a first for a US tech company ... but not in any way whatsoever a first for a business interest or corporation, etc.
This is also a very tame, roundabout and implied flag-raise - as opposed to "... summary execution, crimes against humanity, torture, inhumane treatment and arbitrary arrest and detention ...":
Good points. I had not thought about oil at all. I’ve become aware through light skimming that wars, coups and similar incursions have occurred around other resources.
This feeds into the point I was aiming at. The tech megacorps now have tooling to protect a narrow aspect of their customers' lives from state incursion, regardless of which country that customer lives in. I've not read the link you shared yet, so I don't know the angle it takes or the angle you want to dig into my ideas from.
Counterpoint - the EU has been passing laws that force Apple to be fairer in its markets, and this "we're protecting you from bad guys" stuff is Apple trying to figure out deniable methods to protest or sue against the EU passing laws that restrict Apple's ability to lock other developers out.
Throw together a basic set of options that should have been available long ago, and now Apple is protecting you, don't strip Apple of the ability to protect you, etc.
> from a corporation with more spare, unrestricted capital than many countries
... than most countries. There are only 7 countries with a higher GDP than Apple's market cap.
I have been concerned for some time about these mega corporations being as powerful if not more powerful than governments. They wield tremendous economic and political power. Corporations have very little allegiance to countries and have little to check them. It is a major concern of mine. Democracy in the U.S. is already being sold to the highest bidders.
These corporations are feudal lords but much, much more powerful because there is not a single person who can be brought down. Corporations are a collective who are treated as people when it's convenient and as something else when it's not.
It's bothersome to me, because these corporations are tax sinks. They get absolutely massive tax breaks on everything they do and pay as little income tax as possible, comparatively speaking, all the while keeping billions offshore.
Billionaires and mega-corporations are national security threats to the countries that house them.
Pick whatever comparison you'd like, and the rest of my comment still stands. What you said may be true, although I'm not sure it's as simple as you state. But debating the semantics of the exact comparison used isn't really important to the sentiment I espoused.
Apple is following the lead of Microsoft in this regard. Microsoft has been acting as an international cyber defense agency for a few years. On the effectiveness of Ukraine's cyber defense: "Microsoft in particular has been hard at work" 21:45
After the Snowden leaks that showed even in-country citizen-to-citizen communication was being scooped up by the NSA without a warrant through fiber taps (if I remember that right) when Google replicated the data to out-of-country data centers, Google announced encryption of those links:
Google encrypts data amid backlash against NSA spying
What they are doing is giving users an easy-to-use option to sacrifice part of the default user experience to enhance security by disabling features that are common vectors (which happen to be used by, as they phrase it multiple times in the announcement, "private companies developing state-sponsored mercenary spyware").
IMHO, whatever the reason why they are doing it, it's a good addition to their value proposition; but I don't think it's the same as what appears to be your understanding ("they will protect users from state actors"), at all.
A nation state has more than one way of extracting information from enemies of said state. There's the civilized way we now call hacking, and then there's the traditional way, which may or may not involve technology.
I dislike big tech as much as the next hacker, but this seems like quite a leap. Protecting from nation-state actors digitally can be a job for digital powerhouses. In this case, the hackers are just very determined hackers with a lot of resources. Apple is a very motivated company with a lot of resources. Slightly to your point though, they have higher income than 96% of the countries on the planet. So they have the wealth to establish an Appletopia.
> Apple is saying "we operate at the same level as nation states; we are a nation-state level entity operating in the "digital world": It's a flag-raise
Maybe. But these security “features” feel like things that should have been there from the beginning. Windows 11 already has a much wider and deeper array of security options. Sure, it’s not mobile, but many of those security options would be unlikely to be needed against unsophisticated attacks.
Flag-raise or marketing gimmick? You be the judge I guess.
This feels like an argument the government would make against strong encryption like in the case a few years ago where the government tried to force Apple to unlock an iPhone and Apple refused claiming it wasn't possible.
Apple are basically saying that they're going to do their best in terms of security measures to thwart even state actors, which is only as much of a nation-state level thing as "military grade encryption" is a thing only applicable to militaries.
You haven't been paying attention. Many tech companies have been protecting accounts from state attackers for many years, and explicitly calling out state sponsored attacks. Google introduced state-sponsored attack warnings in 2012 [1] and the Advanced Protection program explicitly protects from state sponsored attacks [2].
Many tech companies have been protecting accounts from state attackers for many years…
How many people have Microsoft and Google actually helped?
In case you didn’t notice, Apple is in the process of giving a few hundred million iPhone owners--every iPhone since the 2017 iPhone 8--protection from state-level actors, for free, in the next operating system update due this fall.
It totally dwarfs anything that any other company has done in this area. So there’s that.
Google sent more than 50,000 state sponsored attack warnings in 2021. And those warnings started in 2012. So a lot of people have been helped. Meanwhile Apple didn't start doing similar warnings until less than a year ago.
> Apple is in the process of giving a few hundred million iPhone owners
Um, no? Lockdown mode is explicitly for "very few users". There's no way a hundred million iPhone users would benefit. Google's Advanced Protection offers protection from state-level actors to anyone with a Google account, so if you want to count by the number of people offered optional protection, Google wins by a landslide.
> for free
Haha, no, you have to buy an iPhone from Apple first. Google offers protection to anyone actually for free. All you need is a free Google account and a security key which doesn't have to be purchased from Google.
The point is the several hundreds of millions of existing Apple customers who own an iPhone 8 or newer are going to get Lockdown Mode in the next version of iOS for those "who may be at risk of highly targeted cyberattacks from private companies developing state-sponsored mercenary spyware" at no cost.
While it's true that very few iPhone users should ever need to activate this feature for the described use case, Apple has already indicated there will be more features added in the future where this could change.
There are likely additional use cases where an iPhone user may want to activate Lockdown Mode, such as traveling to an authoritarian country.
This article makes the argument that Lockdown Mode could benefit iPhone users who never activate it. [1]
"state actors" doesn't mean the US government in its full force or any other government Apple is in bed with to make money (like China).
It means in the best case shady agencies, foreign services, small governments, and in the likelier case just unhinged people with some access to state facilities (tax employees, unofficial police investigations, lawyers...)
They don't "operate at the same level as nation states"; protecting against state actors isn't the only thing at that level, unless you mean cyber-security only. Abstracting this to any "nation-state level entity" is the crux of your argument.
“Flag-raise” seems a bit hyperbolic but at any rate I think the BSA asserted such reach and power, long ago. Both have to act within the oversight of actual nation states.
Beyond that, a secure phone is necessary but not sufficient to defend oneself against a nation state.
I don't know if you've been paying attention to Apple's strategy over the last year, but it's basically been "granting user privacy also happens to grant us an advertising/data monopoly"
I don't think the aim here is to block at state actors but to basically continue to close all security holes that can be exploited by any other company and continually proving to users that Apple cares about privacy.
The thing is, I like Apple even more now that they have realized my privacy interests can be tightly aligned with their own economic interests. I never trust companies to be good or look out for my interests even when I pay them to, but when my privacy ultimately means they gain a very strong competitive edge, then I'm much more trusting.
Apple has realized they can become to privacy what Google has been to ubiquitous search, and doing so can reap even larger and more secure rewards.
They started with a walled garden and are now extending it to a fortress surrounding the garden.
Other than running ads inside the App Store, do you have any knowledge or evidence of Apple collecting personal information for advertising or any other use?
Apparently that protection does not include protection from the US government.
iMessage offers excellent privacy of message content, but no 'pen register' protection.
Phone device security is very strong, but it's made largely moot if you turn on iCloud backups (which is the default behavior if you provide an Apple ID; I'm not sure there's even a way to stop the initial backup from happening).
Apple reportedly doesn't offer e2ee on iCloud, or even encrypted device backups, out of compromise with the federal government...specifically the FBI, CIA, and NSA.
Why might people care about this? Criminalizing abortion and miscarriages...and what looks like at the very least a re-recognizing, and possibly criminalization, of LGBTQ relationships.
When Apple says "state actor threats" they're not talking about future-state theoretical breaches of domestic privacy by your own government. Apple is always going to follow the law. They're talking about the types of situations where data from people's phones is used to commit international criminal activity, espionage, assassinations, etc.
By offering users a more locked down option with clear tradeoffs, (a) users can make a choice between security and convenience, and (b) given user agency, negative press around hacks of not locked-down devices loses potency.
Meanwhile, the choice seems straightforward on most of these...
Lockdown Mode includes the following protections:
- Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
GREAT!
- Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
GREAT!
- Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
GREAT!
- Wired connections with a computer or accessory are blocked when iPhone is locked.
GREAT! (Used to have to do this yourself with Configurator if you wanted to be hostile border-crossing proof.)
- Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.
HMM ... there are hardening settings only available through Configurator or MDM profiles. Will those be defaulted on as well?
>> - Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.
> HMM ... there are hardening settings only available through Configurator or MDM profiles. Will those be defaulted on as well?
Reading between the lines here: in Lockdown Mode, you can't install a profile or enroll in MDM. What it doesn't say is that you can't enable Lockdown Mode with a profile already installed, or while enrolled in MDM.
I take this to mean that with Lockdown turned on, I can't install profiles or enroll in MDM (but presumably could uninstall profiles or unenroll from MDM).
I also take it to mean that most of these hardening features will be able to be enabled by configuration profiles / MDM anyway. Lockdown Mode is essentially Apple acting as the MDM for individuals who don't otherwise have one.
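That reading can be made concrete with a toy state model. To be clear, this encodes my interpretation of Apple's wording, not any documented behavior:

```python
# Toy state model of the Lockdown Mode / MDM interaction described above.
# This is an interpretation of the press release's wording, not confirmed behavior.

class Phone:
    def __init__(self):
        self.lockdown = False
        self.mdm_enrolled = False

    def enable_lockdown(self):
        # Nothing in the text forbids enabling Lockdown while already enrolled.
        self.lockdown = True

    def enroll_mdm(self):
        if self.lockdown:
            raise PermissionError("cannot enroll in MDM while Lockdown Mode is on")
        self.mdm_enrolled = True

    def unenroll_mdm(self):
        # Presumably still allowed: Lockdown only blocks *adding* management.
        self.mdm_enrolled = False

# Enroll first, then lock down: fine under this reading.
phone = Phone()
phone.enroll_mdm()
phone.enable_lockdown()
assert phone.mdm_enrolled and phone.lockdown

# Lock down first, then try to enroll: blocked.
fresh = Phone()
fresh.enable_lockdown()
try:
    fresh.enroll_mdm()
except PermissionError:
    pass
```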
>- Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.
>HMM ... there are hardening settings only available through Configurator or MDM profiles. Will those be defaulted on as well?
Yes, that one leapt out at me as well as kind of an awkward one with more compromises, painting with a very broad brush. It's obvious that some of the very powerful config profile/MDM capabilities could be used for a lot of mischief, but some of them are also exactly what I'd want to be running myself if I were at a lot of risk, and some are both. E.g., continuing to run one's own offline CA with proper Name Constraints could be handy for a group of people who want to better secure and keep private their own internal network services from anything short of a government physical assault, but if an attacker can slip on a profile with an unconstrained CA, your goose is cooked.
Perhaps Apple simply doesn't have the capability for fine grained control of those capabilities yet, which wouldn't be surprising given their path up until now. I'll be interested to see if over time Apple leaves this mostly untouched or invests in seriously improving it. Like it'd be interesting if you could boot into a special mode ala DFU though requiring password and with graphics up and have a bunch of toggles for various capabilities that would then be enforced in normal usage. Analogous to the Recovery Mode on Macs.
> Perhaps Apple simply doesn't have the capability for fine grained control of those capabilities yet, which wouldn't be surprising given their path up until now.
I have to believe they’re working on exposing some of this via MDM. Certain organizations may never want the JIT turned on, for example, or may never allow attachments in iMessage.
I expect we’ll hear more about more capabilities this summer and fall.
Do you really trust your average IT department to make an informed decision about whether WebKit JIT is currently secure or not? I don't see Apple putting these in MDM Configuration Profiles. If they do, it will only be for Supervised Devices (i.e. devices owned by your employer, must be wiped to enroll).
I'm worried about that last point simply because I assume that most corporate CEOs would need MDM enabled to access their corporate email, which means they won't be able to use this feature despite being prime targets.
Still a really good feature for those that qualify, though.
> I'm worried about that last point simply because I assume that most corporate CEOs would need MDM enabled to access their corporate email…
As long as the MDM profile is installed before using Lockdown Mode, they’ll be fine. They just can’t install an MDM profile once the phone is locked down, which makes sense.
Ideal approach would be to be able to manage all those features individually via MDM. This way corp admins would be able to lock down managed phones while bringing necessary configuration to access corp services.
Last year I wrote: "In the world I inhabit, I’m hoping that Ivan Krstić wakes up tomorrow and tells his bosses he wants to put NSO out of business. And I’m hoping that his bosses say 'great: here’s a blank check.' Maybe they’ll succeed and maybe they’ll fail, but I’ll bet they can at least make NSO’s life interesting." [1]
I hope Apple expands this quickly through minor updates to the OS rather than waiting for a next major release. This needs faster iteration than anything else.
Quoting what’s in the first release:
> At launch, Lockdown Mode includes the following protections:
> Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
> Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
> Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
> Wired connections with a computer or accessory are blocked when iPhone is locked.
> Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.
I’m not a target (I think, and hopefully don’t get to be one), but nevertheless I’d feel safer with this turned on (I very rarely use FaceTime, so not accepting it is not a big deal).
I’d also love more protections. Not allowing specific apps to connect to any network (WiFi included), Apple handling issue reports on apps with urgency (right now they seem to be ignored even when policy violations which are against the user’s interests are reported), etc.
Lots and lots of privacy stuff in the point releases. (And accessibility stuff, they’ve been on a tear there.) They’re still in a monolithic mindset when it comes to the “big” apps, but they’re iterating faster on these sorts of things as the release cycle goes along.
That…is seemingly a thing they should have done a long time ago…but it’s still smart, and I’m glad they’re doing it. Now they don’t have to rush the QA of a point release to vanquish yet another PDF parsing security threat.
> I’m not a target (I think, and hopefully don’t get to be one), but nevertheless I’d feel safer with this turned on (I very rarely use FaceTime, so not accepting it is not a big deal).
Good. We need people with nothing to hide to turn Lockdown Mode on, so that Lockdown Mode isn't a telltale signal that you have something to hide.
This is great but too big of a hammer for most use cases. What I really want is a per-application firewall.
For example, say I would like to install a photo editing application. It would need access to my photos. That is fine, so long as it is not allowed to connect to the Internet (or any other network). There is currently no way to ensure this.
> This is great but too big of a hammer for most use cases.
This is not in any way intended for most use-cases, it's very clearly intended for a single, specific, uncommon use-case. The press release says as much more than once.
I guess my point is that instead of making a special mode that is only useful for a minority of users, it would have been really nice to get a feature that everybody should be thinking about and using.
Different people who specialize in different aspects of security can be working on different things at the same time; and contrariwise, experts have comparative advantages and would mostly be wasting their time working outside their niche.
In other words: there's no "instead" here, any more than there's an "instead" between e.g. UI work and backend server work. Different people, different competencies, concurrent capacity.
Every time I have allocated labor on a software project, I was mostly playing a zero-sum game. I am surprised to learn that Apple does not have such problems.
Regardless, I was just lamenting that we don't (yet) have a feature that should be table stakes at this point.
Agreed. I wish iOS had a "network access" permission just like Android does. (Though to avoid permission fatigue for the average user, perhaps make it something only users who care can deny.)
That said, I think this is pretty unrelated to protecting yourself from nation-state actors. Mercenary spyware (like NSO's) doesn't use a legitimate app store app as its initial infection point. I can think of many reasons for this: difficulty getting the target to install it, app store approvals, leaking their 0-days, leaving more of a paper trail, avoiding scrutiny in general, etc. I'd of course still love this feature for my own data privacy.
It's not exposed in the UI, but if you really care, you can just create yourself a configuration profile that disables various per-app permissions (including network access, per-domain/per-IP/per-certificate) on a fairly fine-grained basis. MDM yourself.
> (Though to avoid permission fatigue for the average user, perhaps make it something only users that care can deny)
Yeah, I would not want to have to approve every app. What I would like is a machine readable description of the app's capabilities to include Internet access, just as is required for access to the microphone or photos. This would encourage app developers to advertise to users that they don't need such capability and encourage users to realize that privacy and Internet access are mutually exclusive.
There are many small apps I simply will not buy/install (e.g., apps for editing photos or contacts or calendars) because they cannot be trusted. Even if you trust the developer, the developers are often embedding third party analytics libraries that cannot be trusted.
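The proposal above could look something like a declared-capabilities manifest that the store surfaces and users (or the store itself) can filter on. Everything here (the manifest keys, the policy rule) is hypothetical, just to make the idea concrete:

```python
# Hypothetical machine-readable capability manifest, in the spirit of the
# existing microphone/photo entitlements. Keys and policy are invented
# for illustration; this is not any real App Store format.

def violates_policy(manifest: dict) -> bool:
    """Flag apps that combine personal-data access with network access,
    following the idea that privacy and Internet access are mutually exclusive."""
    personal = {"photos", "contacts", "calendars", "microphone"}
    caps = set(manifest.get("capabilities", []))
    return bool(personal & caps) and "internet" in caps

photo_editor = {"name": "OfflinePhotoFix", "capabilities": ["photos"]}
shady_editor = {"name": "FreePhotoFX", "capabilities": ["photos", "internet"]}

assert not violates_policy(photo_editor)   # personal data, no network: fine
assert violates_policy(shady_editor)       # personal data + network: flag it
```

A store could then advertise "no network access" as a badge, giving developers an incentive to drop embedded analytics libraries.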
I'd go a step further, and say per-application virtualization. Every single program running its own (ideally encrypted memory) namespace, with its own assigned memory, etc.
That's what the iOS sandbox provides. Heck, the tools arm64 gives you to isolate VMs are awfully similar to the tools it gives you to isolate processes. VM escapes aren't too different from sandbox escapes.
Encrypted memory isn't part of arm yet, I was holding out hope with armv9 "realms" but not so.
I use little snitch for this, but I agree, a big hammer, and likely more hoops for regular developers to jump through. Notarisation, signing, forced developer keys...
I use Little Snitch on macOS, but it is not available on iOS, so far as I know. Normal apps on iOS do not have enough visibility into the system for that.
Android exposes a soft VPN API that firewall apps can use to block network traffic for certain apps in certain scenarios (say, no Google Play updates when on mobile data) with apps like Netguard [1].
Does iOS not expose such functionality? Surely there's some kind of VPN API?
> Android exposes a soft VPN API that firewall apps can use to block network traffic for certain apps in certain scenarios (say, no Google Play updates when on mobile data) with apps like Netguard.
I worked on AOSP for longer than I care to admit. This is mostly an illusion. System apps (like Google Play) can pretty much do whatever the heck it is that they want to. NetGuard, sure, "firewalls" it... but it wouldn't even know if a system app bypassed its tunnel. For installed apps, NetGuard is golden (as long as NetGuard itself doesn't leak).
disclosure: I co-develop a FOSS NetGuard alternative (and yes, this alternative has similar limitations).
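Conceptually, a NetGuard-style rootless firewall routes all traffic into a local VPN tunnel and applies per-app, per-network rules before deciding whether to forward each packet. A toy version of that decision logic (app IDs and rule shape invented for illustration; a real implementation works on UIDs and sockets inside the tunnel):

```python
# Toy per-app firewall rule evaluation, loosely modeled on how a
# VpnService-based firewall decides whether to forward traffic.
# App names and the rule table are invented for illustration.

RULES = {
    # app_id: set of network types on which traffic is allowed
    "com.example.browser": {"wifi", "mobile"},
    "com.example.updater": {"wifi"},   # e.g. no store updates on mobile data
}

def allow(app_id: str, network: str, default_allow: bool = False) -> bool:
    """Return True if traffic from app_id over `network` should be forwarded."""
    allowed = RULES.get(app_id)
    if allowed is None:
        return default_allow   # unknown apps: whatever the default policy says
    return network in allowed

assert allow("com.example.browser", "mobile")
assert not allow("com.example.updater", "mobile")
assert not allow("com.example.unknown", "wifi")   # default-deny here
```

As the parent notes, on stock Android this only constrains apps that actually route through the tunnel; system apps retain the ability to bind other interfaces.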
Interesting, and disappointing. Do you happen to know what mechanism is used to bypass the VPN configuration?
I'm using my VPN as a Pihole tunnel and I don't notice any extra logs or requests when I turn off the VPN, but I may just be lucky. I did purge a lot of preinstalled Facebook crap…
It isn't that System Apps actively bypass the VPN tunnel, but they can if they want to, on-demand [0]. That is, System Apps retain the ability to bind to any network interface. Whether they do so, is anyone's guess.
For installed apps, there's no such respite, iff one enables 'Block connections without VPN' (the VPN lockdown mode) on Android 10+ (but NetGuard doesn't support it). This means in the times when NetGuard crashes or restarts (which it does on network changes, for example, or even on screen-off/screen-on, from what I recall), there's a chance the traffic flows through underlying interfaces rather than the tunnel (because the tunnel simply doesn't exist in the interim).
Datura (ebpf based) on CalyxOS and AfWall+ on any rooted Android can block out everything it pleases, though.
I don't mean to downplay NetGuard, because the codebase has evolved in response to years of addressing flaky networks, flawed apps, buggy Android forks. Marcel, the lead developer, has put his life's work into it and gave it away for free. The app I co-develop is, in fact, inspired from his efforts.
I see, thank you for explaining! Good to know that rooting your phone still has some benefits. I wouldn't have thought that there's such an easy bypass for system apps, but I suppose it makes sense for some modem/carrier apps to specify an interface.
I absolutely love Netguard even though I don't really use a firewall in practice (I was sort of hoping a permanent VPN with some "real" traffic meddling would be enough to block most violations of my privacy). It's the one rootless firewall that actually just works on practically any device you can think of, among a sea of broken/scammy firewalls that fail all kinds of edge cases.
> It's the one rootless firewall that actually just works on practically any device you can think of, among a sea of broken/scammy firewalls that fail all kinds of edge cases.
I've had those options on multiple OnePlus phones, but they were not present on multiple Pixels. Since Pixels are usually sold as the "AOSP experience with Google flavor" and are lacking this feature, I'm not sure whether it comes from AOSP or is only present on OnePlus phones.
I've generally found them on most Android phones, but they're all over the place in the settings. On my current phone they're not in permissions, or connections, or internet setup, or security, but they're in the app details screen.
I've also seen the toggles placed in the data usage graph, the other, older data usage graph you can sometimes find via a workaround, and in a separate app that pretends to be one of those system storage optimizers.
I'm sure Android supports it at the system level but how you get to those settings is anyone's guess, really.
iOS has APIs for VPNs and “content blockers”. But as far as I know, such a filter has no access to know which process/application is trying to make a connection. Little Snitch on macOS has to install code into kernel space. (Or at least it used to; I have not reinstalled in a long time.)
The Android app you link to seems to have the functionality I think should exist as a built-in. It needs to be built-in so that non-geeks can use it.
Just as users are asked the first time an application attempts to use the microphone and are able to prevent it before it starts, they should be able to limit network access and revoke it at any time.
(I don’t think users should necessarily be forced to approve Internet access for every app install. Just make it possible to revoke it in the global Settings app and encourage users to think of personal data and Internet access as mutually exclusive.)
Not like that. The idea is antithetical to Apple, who have said during keynotes that they've tried to avoid doing so, because what they really want is a world where the concept of "mobile data" is not limiting.
None of which is particularly effective, since it's trivial to set up a legal entity that makes one game but signs a bunch of malware (or steals enterprise keys).
That would be a pretty interesting VPN service if you could easily deploy it as a Docker container. Something simple that could give Little Snitch-like whitelisting.
The Charles Proxy iOS app doesn’t have the UI to support this (it’s clumsy to whitelist domains), but it does provide some visibility into which domains are being accessed.
You can disable app's cellular data access, but that's it, at least on Western phones. Ironically, phones for the Chinese market actually expand that setting and also allow to block Wi-Fi access.
As a Chinese user, this is the first time I heard that blocking WiFi access on iOS is China only. How confused I was when reading the comment above you, given I'm already capable of blocking network access for any iOS App.
I am aware of that option. It is on the screen I just described. That is really just for saving bandwidth where it is expensive. It is in no way intended as a security measure.
"Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode."
I just don't want most of the programming capabilities on the web, plain old hypertext with a bit of style is enough. There are plenty of other ways to run software on a computer than inside a web browser.
I halfway agree with you: we need the web split into two parts, web pages and apps.
I've seen some cool simulations, small apps, and small games that I can just test online without having to install them on my machine. Apple would love it if we all got scared and only used apps installed from their store, but the web is a decent delivery platform.
If we could have a modern subset of HTML and CSS for news websites and blogs, and the rest of JS for web apps, then you'd have the option to turn off the advanced features. Or we could have different browsers focusing on different things: a website-reader browser that doesn't care about super-fast JITed JS and doesn't support WebGL or camera/microphone access, focusing instead on text layout and simple forms,
and a web-app browser that focuses on extreme optimization of JS, canvas, and WebGL operations, plus camera and microphone access.
I'm having fun with Gemini exactly because it's so dumbed down that you can't do anything more than publish text.
It's still very niche, but it's growing, and the protocol is so simple that I'm writing software for it: specifically a multi-platform browser (more like a viewer?).
Actually, I browse with JS off by default and whitelist stuff, which is ironic since I am a web dev. Or maybe the fact that I know how shit web tech is is exactly why I think documents should be documents. Imagine I want to show you my blog but I build it as an Unreal Engine 5 app, because I want some cool effects, I also want to learn this shiny tool, and the marketing team wants to do some shitty things too.
You can technically achieve this, but you get a degraded experience. Most sites don't test for JS being turned off, and it's not rare to only get a blank page when viewing a site in that way.
What OP wishes for is rather an experience that decidedly doesn't use JS, similar to Google's AMP or Gemini. A subset of HTML that makes publishing possible, without moving parts.
Most (if not all) browsers allow you to disable JS, so that seems like the perfect preference for you. I know it works on Chrome and Firefox on desktop (I use the NoScript extension myself, that blocks JS by default but allows you to enable it per-site), I can imagine it works the same on smartphones as well.
I /think/ what they're asking for is a world where turning JS off is actually a real option. Currently the web essentially does not work in such a case, so while the toggle technically exists, disabling JS isn't actually a real option.
So what they want isn't the power to turn off JS in their own browser, but the power to turn it off in other people's browsers (at least the browsers of people developing websites).
More seriously, I guess they might want a way of avoiding sites that don't have a good no-script experience. Perhaps if there were a trustworthy way to vote on that (or detect it automatically), someone could offer an extension which puts scary red boxes around hyperlinks which point to such sites.
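The automatic-detection part is plausibly as simple as checking how much readable content a page serves before any script runs. A crude heuristic (the threshold is invented; a real extension would be far more careful, e.g. about pages rendered entirely client-side):

```python
# Crude heuristic for "does this page work without JavaScript?":
# strip <script> blocks, drop remaining tags, and see how much visible
# text is left. Threshold is arbitrary, for illustration only.

import re

def noscript_friendly(html: str, min_chars: int = 40) -> bool:
    no_scripts = re.sub(r"<script\b.*?</script>", "", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", no_scripts)        # remove remaining tags
    visible = re.sub(r"\s+", " ", text).strip()       # collapse whitespace
    return len(visible) >= min_chars

static_page = "<html><body><h1>Blog</h1><p>" + "words " * 20 + "</p></body></html>"
js_only_page = "<html><body><div id='root'></div><script>render()</script></body></html>"

assert noscript_friendly(static_page)      # real content survives without JS
assert not noscript_friendly(js_only_page) # empty shell that needs JS to render
```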
just seems more antiweb from Apple. they love to ruin the web and make it harder to avoid their walled app garden....it's a money ploy to fight web apps and make their little devices even less useful.
Too bad that Google does not offer this same “Lockdown Mode” as Apple does.
Instead, they (the Google Play Store) removed our ability to see which app privileges an app would require BEFORE we do the installation step. What we got instead was an obfuscated “Data Security” section that is pretty much always blank.
My flashlight app should not require a GAZILLION app privileges, nor hide that fact before I can determine whether I can safely install it. The Apple App Store gets this right by doing the CRUCIAL pre-reveal of any needed app privileges for our leisurely perusal, letting us apply any applicable personalized privacy requirements BEFORE we install the app.
Google removed the install-time permissions dialog because they replaced it with runtime permissions. This makes sense: some users want PayPal or WhatsApp to access their contact list, and others won't. It also fixes "permission blindness", where users blindly accept a long list of permissions because they need the app, or just stop caring because it's too much to comprehend all at once.
Obviously, this isn't perfect, especially since Google removed the internet permission and allowed all apps to access it. Allowing advanced users like us to toggle off internet access in the "App info" permission page would be a good compromise, and I hope the Android team does so to match Apple on their security efforts.
It's taken a decade, but it's pretty much moved back to the permission model that J2ME had, which iOS and Android deliberately removed and sold as better UX. Seems like the original devs of J2ME knew what they were doing; Joe Public just wasn't ready for permission popups then like they are now. :sigh:
Fixes “permission blindness”? So, the current form of Google Play (app) Store “Data Security” section of each app being shown as “(blank)” is surely yet another form of “permission blindness”.
Google Play Store being proactive in protecting these end-users from their own form of stupidity (or “permission blindness”, as you have eloquently pointed out) is just opening themselves to potential liability ramifications instead of deferring to end-user’s responsibility of maintaining their own privacy.
I think that the term “permission blindness” is better addressed by an app having zero privileges.
And “App Privileges” should have referred to runtime permissions and should have been displayed in the first place at the Google Play Store instead of install-time privileges.
Your apps have no permissions until you allow them. If you install spyware and it wants all your contacts and files it has to ask. You simply select "no" and then remove it.
Apps would force you to consent to eg contact permissions "in case you want to share something to a contact" and then harvest all your contacts. Apps can no longer use that pretense.
Google hiding information about apps in the app store is a big problem, but it's not as big a problem as not having a Little Snitch equivalent built into Android. This alone is a reason for real capital to be spent on startups in the alt-Android space. Imagine a company that lets you use your current Samsung or Google or Sony or ASUS or whatever flagship phone, but with a truly open-source fork of Android with a Little Snitch built in, and security updates guaranteed for as long as you stay current with your subscription, which is like $5/mo. (Maybe that's too low.) Maybe you could even wipe your device and mail it in to have the software installed if you can't be bothered to do it yourself. Or maybe even a partnership with a phone repair chain. (And if you don't want to pay the fee, you can always install updates yourself manually, from source.)
> Imagine a company that lets you use your current Samsung or Google or Sony or ASUS or whatever flagship phone, but with a truly open-source fork of Android with a Little Snitch built in, and security updates guaranteed
You describe the direction CalyxOS / DivestOS are going. And of course, there's the Pixel phones on GrapheneOS which arguably is more security-focused.
I just read their homepage, and they don't have Google Play support. The requirement to run Google Play Services to access and run apps represents a serious anti-trust concern to me (and to the DoJ under any administration, I would imagine). Perhaps more importantly, I see no mention of any facility for network monitoring.
>DivestOS
Hadn't heard of this LineageOS fork, thanks. TBH I can't really tell how it differs from either Calyx or Graphene. None of these tools have the top-line features I mentioned.
I’ve been using that for years but was wondering whether the documentation is current about Chrome - they offer things like disabling the JIT (nearly half of Chrome’s exploits last year) as a group policy option on Windows, for example, but it doesn’t appear that APP does anything for Chrome users other than mandatory Safe Browsing.
If Apple were really serious about this, they would add one more feature to Lockdown Mode: the ability to permanently and definitively delete and scrub all your iCloud data.
You can close the proverbial "front door" by enabling Lockdown Mode, but if that same government sends a subpoena to Apple, they will just hand over a copy of all your private iCloud data.
Most iCloud data is end-to-end encrypted; Apple doesn't have direct access to your data. In the end they do own the OS and could potentially backdoor your device, but if you're worried about that... well, Lockdown Mode is moot at that point.
Apple refused an FBI order to decrypt a phone; however they allow the FBI to access iCloud data all the time. And iMessage is not end-to-end encrypted in iCloud at the explicit request of the FBI. https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...
Which makes it all the more ridiculous that sensitive things like messages, photos, contacts, and notes aren't, even as an option. Clearly the technical ability is there.
If you care about your privacy don't upload your private data to ANY cloud service.
Even if iCloud were encrypted, it still runs on third-party cloud providers, and nobody knows what relationships those providers have with governments. Many types of encryption are breakable if you have effectively unlimited resources.
I've been using iPhones since the first iPhone. I don't sync any relevant stuff to iCloud. However, during previous iOS updates this sync turned itself back on multiple times (more than three times for sure).
It hasn't happened for a while but whenever there's an iOS update it's advisable to check your iCloud settings immediately afterwards and if they changed, you change them back and pray that your important data hasn't been sent to the cloud in the meantime.
According to Reuters sources, Apple abandoned plans to offer iCloud backup encryption, out of fear of government retaliation or even spawning new anti-encryption legislation.
On the other hand, GP is responding to:
> Nobody who is at risk for this is doing iCloud backups. That's something you can already turn off.
And indeed, if you turn off iCloud backups, there is no "backdoor" into iMessage. You can also set up your phone to do encrypted backups locally to your laptop, if you want that instead.
And look at all the other potentially sensitive data that is not end-to-end encrypted in the backups. Photos, notes, reminders, calendars, the list goes on.
Yes, that really does mean that Apple can decrypt your messages.
I don’t think so:
> Apple doesn’t log the contents of messages or attachments, which are protected by end-to-end encryption so no one but the sender and receiver can access them. Apple can’t decrypt the data.

> When a user turns on iMessage on a device, the device generates encryption and signing pairs of keys for use with the service. For encryption, there is an encryption RSA 1280-bit key as well as an encryption EC 256-bit key on the NIST P-256 curve. For signatures, Elliptic Curve Digital Signature Algorithm (ECDSA) 256-bit signing keys are used. The private keys are saved in the device’s keychain and only available after first unlock. The public keys are sent to Apple Identity Service (IDS), where they are associated with the user’s phone number or email address, along with the device’s APNs address.
It's not a claim that needs special evidence - what they mean is that even if you have iCloud backups disabled, everyone you talk to might not. The point of e2ee is that both ends must keep the data protected - not just you and the server, but, more abstractly, all the communication partners.
That is a novel and quite broad interpretation of E2EE. In typical E2EE only endpoints of a (logical) communication channel can decrypt messages on that channel. But E2EE does not say anything about what an endpoint can do with those messages once they decrypted them -- they could print them at the public library and leave them there, they can forward them to the FBI, they can post them on reddit, etc.
If you do not trust your communication partner to safeguard your messages, E2EE will not help you at all.
The point is that many people have iCloud Backups enabled without any awareness whatsoever of the implications, as iCloud Backups are opt-out and there is zero disclosure within the OS (only an Apple Support webpage nobody will visit).
It leads to E2E being systemically weakened, since most of your iMessage conversations will get immediately scooped up by Apple and alphabet agencies, dragnet-style.
I understand that, I didn't mean the concept of e2ee requires the endpoints to never share it at all. What I meant was, commonly people will disable iCloud backups hoping to regain some privacy, but it does nothing because most of your communication partners use iCloud backups. Just like people who switch to eg. Protonmail - if you only ever talk to GMail users, it doesn't really give you much extra privacy.
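The point above can be sketched as a tiny model (my own illustration, not anything from Apple's implementation): a conversation leaks to the cloud provider if any participant backs it up unencrypted, no matter what the sender's own settings are.

```python
# Sketch: why disabling your own iCloud backup doesn't fully protect a
# conversation. A thread is exposed if ANY participant uploads their
# message history, regardless of the sender's settings. Backups default
# to on, modeled by get(p, True).

def exposed_parties(participants, backup_enabled):
    """Return the participants whose devices upload the thread."""
    return [p for p in participants if backup_enabled.get(p, True)]

def thread_is_exposed(participants, backup_enabled):
    return len(exposed_parties(participants, backup_enabled)) > 0

# You disable backups, but your two contacts keep the default (on):
settings = {"you": False, "alice": True, "bob": True}
print(thread_is_exposed(["you", "alice", "bob"], settings))  # True

# Only if every endpoint opts out does the thread stay out of backups:
settings_all_off = {p: False for p in ["you", "alice", "bob"]}
print(thread_is_exposed(["you", "alice", "bob"], settings_all_off))  # False
```

Same logic as the Protonmail-to-GMail comparison: your opt-out only removes one of N copies.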
Apple's been making it real difficult to pick Android lately. Only thing Android still has going for it is the ability to flash custom ROMs, eg CalyxOS or Graphene.
My point is that if you care about e2e crypto and privacy, you already have iCloud turned off in full, including the e2e bits, because it's a privacy minefield.
I explored installing a custom ROM on my android phone, but ended up questioning the utility of them. There appear to be many banking apps, random apps (McDonald's??) and others that will not work if the device is running a custom ROM.
That makes my phone useless to me.
Our only hope is a proper Linux phone with an Android emulation layer
You can get around that by spoofing SafetyNet stuff using Magisk. But yeah, it is a few more hoops to jump through, and you need to be rooted, which is itself not great for security.
- you can't install apps without an Apple account (making the phone useless, really)
- you can't download apps from outside the App Store
- there is no security-enhanced or de-Appled version of iOS, while on Android GrapheneOS and CalyxOS exist
- you are limited to Safari as a browser engine (no extensions).
Compare the actual features of Android vs. the actual features (instead of the marketing) of iOS, and it's clear that Apple doesn't care about user privacy. With Android, you get to choose which if any Google services to use. On iOS, you can't run any apps without telling Apple which ones, you can't get your location without also sending your location to Apple, and you can't practically run your own apps without fully deanonymizing yourself with banking details.
What? No, not even close.
- no way to use the phone without an account
- no way to install apps from outside the store
- no browsers other than Safari reskins.
These are all fixed by the DMA, but it will take a lot of time for things to mature. However, other issues persist:
- no way to put apps on the bottom of the screen
- the FOSS scene on iOS basically doesn't exist, while on Android there is a whole app store for it (https://f-droid.org). This is a big point for me.
- no way to duplicate apps
- no separate work profile
- limited file management
- no notification chat bubbles (a pretty good feature since Android 10 or 11)
- no advanced apps like local terminal emulators, virtual firewalls or virtual tracker blockers (partially because the FOSS community rightly doesn't care about iOS)
- non-encrypted iCloud backups (basically a backdoor into WhatsApp) or any important file medium
- CSAM Scanning inbound
And many other issues. iOS is hardly making Android hard to choose; it's still a locked-down prison, just a bit nicer inside now.
Since I value my privacy and like FOSS, iOS is even more useless for me.
Extreme? This sounds like the way I have my computing environment configured by default (to the extent that I'm able to do so with browser extensions and whatnot).
> Most message attachment types other than images are blocked.
Who wants to bet that this reflects minimum requirements dictated by user experience, rather than what Apple is actually securing today?
The correct model here, the one that would actually defeat these adversaries, is to start with what you can actually secure and expand from there, prioritising customer needs. This delivers security improvements for all customers, but it makes the calculus simple for Lockdown customers, whatever Lockdown allows will be OK.
Suppose today Apple has a working safe BMP reader, and a working safe WAV reader, but they're still using their ratty JPEG and MP3 implementations. As described, this feature says you can receive a JPEG attachment (which takes over your phone and results in your cousin who remains in the country being identified as a contact and imprisoned) but you can't listen to the WAV file an informant sent you because that's "dangerous"...
I find it absolutely hilarious that they've kept images in Messages while one of Pegasus' attack vectors was sending a PSD file as a *.gif, which crashed the Messages parser.
Yeah, apple really should dumb down that parser to just “modern” jpg/png/webp for their entire application stack. bmps and gifs shouldn’t still be used. And photoshop is a bit proprietary for apple to be rendering their files within iMessage
This seems to mimic, or at least rival, Google's Advanced Protection Program which has been running for a few years to offer similar protections to Google/Android users.
My concern about enabling this would be that I'm unsure how many barriers this puts in the way of an account's owner regaining access should it be stolen by a threat actor (i.e. could this backfire on the account owner?).
It's still unclear to me how much Apple really protects against (for example) sim swaps to take over an iCloud account - and the documentation around when they'll truly insist on having something like a Recovery Key if it's enabled is sparse. It almost reads as if the right amount of begging will socially engineer access to a locked iCloud account by a threat actor with the right personal information to hand, which if coupled with Lockdown mode, seems pretty dangerous to the true account holder.
It slightly degrades some experiences, so I see why it's disabled by default. Disabling JIT JavaScript is going to make web browsing more painful. And incoming friend requests are useful because it simplifies things when two people are adding each other to their phones - one sends a request and the other reciprocates.
> It slightly degrades some experiences, so I see why it's disabled by default.
My sense is that the functionality to provide those experiences resulted in a decrease in user security and privacy when they were introduced -- and that those risks were widely-discussed and well-understood.
It's weird (although not unexpected) to see the reversal of them touted as a selling point.
If you are "a target" and going to take measures of basically disabling everything on your iPhone, wouldn't it just make sense to get a burner dumb phone?
Hasn't this been happening for years (drug dealers, anonymous, etc..)?
Think more about journalists. You need Slack to talk to the rest of the team. You need WhatsApp to communicate with sources and locals in most of the world that’s not the US. Your iPhone is an important tool for your work in general - a dumb phone that can only make real phone calls and SMS doesn't come particularly close.
Phone calls and SMS are also completely unprotected, as opposed to chat apps with e2e.
Totally agree. I'm also concerned about the fine print, what Apple is not announcing - like, "Oh, we also updated our EULA to reflect that metadata from phones with 'lockdown mode' enabled will be forwarded to the FBI", something like that.
This does not seem to disable JS altogether, only JS JIT compilation. IIUC, JS will still be executed, although via an interpreter (which is safer) rather than via compiled machine code (which might be used to exploit memory safety bugs such as type confusion, somewhat frequent on the JS side).
Which is exactly why it's optional. Plenty of other people, myself included, look at that list and would not want them all or would like to pick and choose which subsets are locked down.
What if this isn’t good news for 99% of Apple users?
That’s obviously an amazing measure for the 1% of high-value targets out there.
But what about the other 99%? Does that create an incentive for Apple to strengthen Lockdown Mode security to the detriment of the regular mode (should we call it Unsafe Mode)?
I’m afraid that this architecture will make it harder to prioritize security features or fixes for the 99% of users. Developer bandwidth is limited; they can’t fix all bugs. Hence if you have to choose between one bug impacting the 1% most important users (from a security standpoint) versus one bug impacting the 99% others, which would you choose?
Would such an architecture have led to the emergence of BlastDoor[1] - which attempts to mitigate iMessage attachment exploits, but is now useless in Lockdown Mode?
My hope here is that by reducing attack surface, Lockdown Mode will make exploits much easier to fix (as they’ll target a limited area), making it possible to strengthen the system core while freeing bandwidth to implement longer-term, BlastDoor-like mitigations.
What if there is a little device that acts like network firewall and router appliances but somehow the phone proxies all connectivity via it. Something to carry around that shows ingress and egress connections, calls and anything in between. You can either set an allowed or blocked list, detects cell connection mitm attacks and spikes in traffic (to detect leaks). Mobile phones are like desktop computers and will always have security issues. It only makes sense to firewall them.
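The appliance described above is hypothetical, but its core policy engine is straightforward to sketch. This is my own illustration (all names and thresholds are made up): an egress filter with an allowlist plus per-destination byte counters to flag the "spikes in traffic" the comment mentions.

```python
# Sketch of the proposed companion-firewall idea (hypothetical device;
# names and thresholds are illustrative). The phone routes traffic
# through the appliance, which enforces an allowlist per destination
# and flags unusual egress volume as a possible data leak.
from collections import defaultdict

class EgressFirewall:
    def __init__(self, allowlist, byte_alert_threshold=10_000_000):
        self.allowlist = set(allowlist)
        self.bytes_out = defaultdict(int)   # running egress per host
        self.threshold = byte_alert_threshold

    def check(self, dest_host, nbytes):
        """Return (allowed, alerts) for one outbound flow."""
        alerts = []
        allowed = dest_host in self.allowlist
        if allowed:
            self.bytes_out[dest_host] += nbytes
            if self.bytes_out[dest_host] > self.threshold:
                alerts.append(f"traffic spike to {dest_host}")
        else:
            alerts.append(f"blocked egress to {dest_host}")
        return allowed, alerts

fw = EgressFirewall(allowlist={"updates.example.com"})
print(fw.check("updates.example.com", 5_000))  # (True, [])
print(fw.check("tracker.example.net", 1_000))  # (False, ['blocked egress to tracker.example.net'])
```

As the replies note, TLS and certificate pinning mean such a box can only see destinations and volumes, not payloads, which is why this sketch keys everything off hostnames and byte counts.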
TLS and certificate pinning makes this a problem. Technically certificates don't have to be pinned, but if they weren't then people would use this to defeat "growth & engagement" and block analytics, ads, etc (or worse, reverse-engineer the API to make a third-party client) and we obviously can't have that.
Why not on the same device? Have a separate small simple SoC completely segregated from everything else, except shared battery, with 2 NICs and a physical switch to swap between using the firewall interface and the regular phone. Although this may make more sense for a regular computer plus router, with a cell phone there's multiple radios, not just a single simple IP connection...
Issue is that we would have to get device makers to buy into it, and also trust them that they show us everything. Also we wouldn't be able to retrofit existing devices. Most people dont like tinkering with things. A universal device small enough to fit in your pocket, with a nice little display or a usb connector to download data to a laptop and configure rules, is more desirable imo.
Had to look it up. I guess the question is how to make sure it can't be abused by capturing data from random nearby phones. In that case we’d end up worse off.
First, lets talk in the foggy dreamland of this article.
I can't imagine the threats security researchers deal with every day. And their innovative solutions. Extracting live code from samples to inject in other malware. Wow, so cutting-edge. It's wonderful to talk about, no stress there, no drama. We don't want wear and tear on our machines.
Like the article states, we're spending millions and billions on these problems.
And lockdowns are an innovative approach. I've been thankful for device lockout in the field before.
It's saved my bacon. Captures the favorite philosophy of strong regulatory control. Nice, has network effects for other political goals. Very cool.
Better than Kevorkianing or Bricking a machine in the field.
Oh God, Oh God, Oh God. Sorry for that narrative-scape shattering. Dementia is a serious issue. Ok, back to sanity.
Some really fucking smart people showed me a study on evolutionary computation and diversity in investigator-guided processes. Hand-edited synthetic organisms may be less evolutionary successful than purely evolved ones.
Like my storytelling, right? It's a fucking hot mess that isn't exercising my audience's mind as much as a more diverse author population would.
Oh God, there I am breaking down story-wise. Back to stability. We have political goals like reduction of 99% of security threats. And the perfect is the enemy of the good, right?
I'm so sorry for my slips there, I know you lost time with loved ones and reading other comments talking about this in a more professional tone that captures the point.
In closing, I'd like to thank the sponsors who kept me fed for years.
> Wired connections with a computer or accessory are blocked when iPhone is locked.
Damn... if this was something that could be enabled by typing the pin in wrong, it would be the death of modern phone forensics. Actually, I would rather this be the default after a device is powered on... let me "restart in non-safe mode" when I need it.
> Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
That's very cool actually. You can keep JS enabled but choose to make it run more slowly in exchange for better sandboxing
I originally understood this to mean JS was disabled entirely in safari when enabled unless a site is allowlisted. Does this mean the web will run JS “normally” but slower? Does the speed of modern phones mean a slower style of JS processing might be less discernible?
That's my interpretation. Modern JS engines have multiple tiers of optimization, which they apply in different ways based on how "hot" a piece of JavaScript is. JIT is the highest level of optimization, but it also means generating and executing native code on the fly, which I assume leaves the door open for worse exploits if there's a bug in the engine. This is in contrast with bytecode interpretation, which is slower but safer.
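The tiering idea can be shown with a toy model (my own simplification, not WebKit's actual pipeline): each function starts in the interpreter, and is promoted to JIT-compiled native code once it gets "hot" - unless a lockdown flag forbids runtime code generation, in which case it stays interpreted forever.

```python
# Toy sketch of tiered JS execution (assumed, simplified model; real
# engines have several tiers). A function is promoted to JIT after its
# call count crosses a threshold, unless allow_jit is False, in which
# case it always runs in the slower but safer interpreter.
JIT_THRESHOLD = 100

class Engine:
    def __init__(self, allow_jit=True):
        self.allow_jit = allow_jit
        self.call_counts = {}

    def tier_for(self, fn_name):
        count = self.call_counts.get(fn_name, 0) + 1
        self.call_counts[fn_name] = count
        if self.allow_jit and count > JIT_THRESHOLD:
            return "jit"          # generates executable code at runtime
        return "interpreter"      # no runtime machine-code generation

normal = Engine(allow_jit=True)
lockdown = Engine(allow_jit=False)
for _ in range(150):
    normal.tier_for("hot_loop")
    lockdown.tier_for("hot_loop")
print(normal.tier_for("hot_loop"))    # jit
print(lockdown.tier_for("hot_loop"))  # interpreter
```

This also shows why the slowdown is workload-dependent: cold code never reaches the JIT tier anyway, so only hot loops feel the difference.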
Could a security expert enlighten me: is Windows more secure today than macOS, if we purely take OS-level and hardware-level security measures and ignore subjective factors? (like marketshare, attractiveness of targets, etc.)
Windows has all sorts of buzzwordy-sounding security features: Microsoft Defender Application Guard (Hyper-V for untrusted websites & Office files), kernel virtualization-based security (VBS), Code Integrity Guard, Arbitrary Code Guard, Control Flow Guard, and Hardware-enforced Stack Protection.
It's extremely hard to compare the two on a deep technical level (beyond "modern OS's are safe, install updates, you'll be fine") without having deep security experience. Any professional insights?
There is no meaningful difference if you are a modestly attractive target. The prevailing level of security is such that a single technically competent individual with a year of time can completely breach the commercial IT systems of any Fortune 500 company in the world and steal essentially all of their internal documents and IP and materially disrupt their operations. That is the maximum level of security in commercial IT systems.
So, if you have nothing worth more than ~$1M and indefinite disruption of your systems is worth less than ~$1M, then there might be a meaningful difference. However, basically every business is beyond those levels, so quantifying the differences in a professional context is kind of like discussing whether a t-shirt or a single piece of paper provides more protection against a gun.
This is great. Here in Australia, when you pass through the border, the goons can ask you for your phone, computer, devices etc. without a warrant. They’re not allowed to compel you to hand over passwords, PINs or have you unlock it for them (without a warrant) though, but apparently they’ll often imply that you have to, and if you don’t they can confiscate the devices for some time.
This mode sounds excellent, because all they can do without a warrant is try and attack it with a Celebrite or Graykey device, so having the extra protection from physical connection and other attacks sounds awesome.
I expect to always enable this while I’m doing international travel anywhere.
I was threatened with 10 years imprisonment if I did not hand over my passcode at SYD airport in July 2018. I asked to contact a lawyer and I was told I have no right to do so. I am an Australian citizen.
>Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
This should be ON by default. It would force webdevs to write efficient websites.
Does this offer any protection after you are already pwned? Is the expectation that you have it permanently on if you are a high value target or do you turn it on temporarily before clicking on a link for example?
Don’t know enough about iOS to say for sure about persistence, but recent Pegasus (NSO Group spyware) versions don’t bother[1], instead repeatedly exploiting bugs starting with “features” like background Messages attachment parsing.
Those are the kind of threats Lockdown Mode finally acknowledges — targets (well IMO everyone) would need it permanently enabled.
Otherwise the temporary protection before clicking a link can be had today in other ways, like disabling Settings > Safari > Advanced > JavaScript.
If you have already been pwned, the OS is compromised, so it clearly is not able to retroactively undo that - any checkbox, option or whatever can just be turned into a no-op that lies.
If you're already pwned to the point where they have kernel-level access and can bypass code signature enforcement, all bets are off. Even if lockdown mode interfered with their activity, at this point nothing prevents them from modifying the Settings app to not really enable lockdown mode even if you request it to.
If you're going to run a crippled-ass phone to protect yourself, because the regular phone is so fucking insecure, why even bother with a smartphone? They'll just find an exploit in something that the "security mode" hasn't disabled.
This is mostly great news. Then you scroll down a bit and see this eye-opening 2nd part:
“Apple is also making a $10 million grant […] to the Dignity and Justice Fund established and advised by the Ford Foundation - a private foundation dedicated to advancing equity worldwide and designed to pool philanthropic resources to advance social justice globally.“
So Apple is releasing a great new hardened security mode in iOS, AND… they’re donating money to collectivist activism? What a bizarre combination. One step forward, two steps back.
This feature is really fantastic, and it re-affirms my commitment to using Apple devices over Android on security grounds. The only superior alternative I could see would perhaps be something like Graphene. Already today I locally set up a profile via Configurator to ensure that my phone can't be hijacked by some local attacks; the work happening with Lockdown is even better, and I'll be enabling this as soon as it becomes available to me.
Personally, one of the major things that led me to start investigating Graphene and other alternatives was the on-device scanning. Fortunately, it seems Apple has backed away from this position (at least for now), and unfortunately we live in a world where if you need a mobile device there's a dichotomy. Given that, I think sticking to Apple is still the best choice for me right now, and aligns most strongly with my threat model.
I travel internationally a lot (or did before the pandemic) and my primary concern is when crossing borders. I always power off my devices to prevent warm-boot attacks, and take other precautions, this lockdown mode would be a big win in this case. Even with the other protections Graphene offers over regular AOSP, I am concerned that Android doesn't really provide meaningful ways to prevent attacks where the adversary has physical access.
I think this is a great feature and a huge step in the right direction.
However, will this comply with the new EU Digital Markets Act (DMA), which mandates general interoperability for messaging apps (for big players, with Apple and iMessage certainly being one of them)? Certainly, foreign services – or at least parts of their features and APIs – would have to be disabled in Lockdown Mode. (Will it help to any degree that Apple is drastically limiting their own service?)
> Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
NSO Group's zero-click exploit for iPhones involved images: specifically, a malicious PDF disguised as a GIF, which abused the JBIG2 codec in the image parser to construct logic gates out of decompression operations and bootstrap arbitrary code execution.
If that's the level of sophistication this is meant to guard against, then it seems like it should block images as well.
I wonder if this mode would be helpful to protect myself if US border control forces me to unlock my phone so they can make a copy of all of my phone contents.
If you are a US Citizen or Permanent Resident, Border Patrol cannot prevent you from entering the United States. They can, however, detain you for up to 72 hours and confiscate the locked device if they have "reasonable suspicion". The confiscated property will be returned eventually.
If you are not a US citizen, refusal to unlock a phone and allow inspection, inclusive of allowing access to social media and corporate apps, will probably result in denied entry. They also have the right to detain you indefinitely until you unlock the phone if they have "reasonable suspicion", but that requires a court order within 72 hours.
Most foreign countries have similar rules in place for residents and non-residents.
They don't usually return the devices they steal, and most people travel with a total device value lower than the cost of an attorney and lawsuit to force the return.
The sterile area between the gate and border control is treated like international waters/lands, and IIUC the logic is that ordinary legal protections don't apply there, so you can be compelled to do just about anything free from constitutional protections. Not sure if that reasoning actually holds up, though.
It would be a good idea to enable this before going though any border controls. Doubly so for countries that require apps to be installed before entry/upon entry/after entry.
ArriveCAN (Canada), Mobile Passport Control (USA), WeChat (China), and other mandatory government apps would be perfect vectors to stage highly targeted attacks.
Is the apple bounty program still terrible in terms of payout and length of time to approval?
I can’t see many people submitting bounty reports if it’s too much of a hassle or not worth the effort.
Since the apple ecosystem is mostly proprietary, it’s hard for individuals to gauge whether this just provides a false sense of security against “state actors”.
Sounds like a plan to make iOS the default for highly-placed government employees. Maybe that's already the case, but I thought I remembered that Obama had to have 2 phones, and the "secure" one wasn't an iPhone. Anyone have any more knowledge about this?
I'm guessing it isn't, if only because this feature completely disables MDM (which you'd need in government or business to do things like remote wipes or passcode policies). It looks to be designed for people that are possible targets to use on their personal phone, which shouldn't have work data on it.
(Of course, they could make some new MDM policies to individually turn these features on. You can already block external devices with MDM, and you can completely disable FaceTime/iMessage/iCloud. It wouldn't be much of a jump to add the more granular protections this has.)
I think you’ve misread this announcement: it doesn’t appear that MDM is disabled. It merely looks like you cannot change MDM settings, including enrolling, while this feature is active.
At least at the start of the Obama Administration, he was known to be hooked on his Blackberry [0], and I know RIM did a lot of work to provide secured devices to government officials. I don't know what government officials are using since RIM went under though.
Computer security is notoriously difficult, but at the same time, none of this is magical. This is meticulous hard work, and with enough time, skill, and money I don't see why you couldn't plug all the holes.
At least the remote attack surface does not seem to be that huge...
That's the thing, if you think your device is compromised, don't use it. This is dangerous as it's a bandage and most likely allows surveillance that's "pre-approved" or is carrier based, probably even baseband modem based.
Apple is not stopping state-sponsored anything. They do not have the expertise nor the willingness to invest enough to stop it. And they also turn over everything they can at a local law enforcement request, because they have to.
It's good I guess, but I will not convince myself that a button saying "Lockdown mode" will casually side-step the entire legal and surveillance machinery built up in the U.S.
Firewalls like Little Snitch may not be enough against actors like NSO (that exploit unknown zero-days), tbh. The mechanisms to enhance protection do need to come from the vendor (Apple). This lockdown mode, for all its present shortcomings, is moving the needle in the right direction, imo.
I see they're running the reality distortion field at full power.
This is a load of bullshit and marketing hype. They are letting you turn off features for security reasons, i.e. what basically every OS has let you do, and what every half-competent IT department has been doing, for decades. In fact, iOS was an outlier in how unconfigurable it was, and with the pitiful MDM options not letting you turn off many of these features that are constant sources of vulnerabilities and social engineering.
Nothing that novel here other than the framing and cybersecurity marketing bullshit about Nation State Actors and "mercenaries."
Because it's being made to sound like something it's not. The comments are full of people fawning over how innovative and groundbreaking this is. Just trying to offer a dose of bitter reality to bring people back down to earth.
To what end? What new insight is gained from such a reframing?
I personally don’t think the individual features are as interesting as the overall framing and the fact that Apple is publicly announcing their intentions. The feature set will doubtless change over time - such is the nature of any software endeavor - but starting that journey is the interesting part.
Getting stuck on “but it’s just xyz dumb feature…” or “but they should have done x long ago”, etc. just obscures the more interesting fact that they’re explicitly embarking on this path to begin with.
ironically if this is successful we likely won't know how deep the hardening goes, unless security researchers who have legit access release something?
but i'd love to know the series of changes that goes into something like this if its not just hype, potentially unpicking a giant complex system from the bottom up to remove attack surface.. interesting challenge!
> Messages: ... Some features, like link previews, are disabled.
I've been wanting to disable link previews for YEARS!! Not for security, but to keep those corporate advertisements (aka previews) out of the conversations I have with my friends and family.
It feels super disingenuous when I type out an articulate, heartfelt, personal message to my loved one, character by character, anticipate their reaction reading it, and then hit send — only to find the URLs expanded 400 pixels into corporate advertisements designed by the bonehead SEO jerks who care about clickbaiting over content.
Apple cannot even in theory protect you from spyware, because Apple's OS and apps _are_ spyware - as Apple (routinely? occasionally?) collects your personal data for the US government's NSA and passes it to them (Snowden revelations: https://www.theguardian.com/world/interactive/2013/nov/01/sn...)
This might get downvoted but it's actually true. If you're logged into iCloud, even with all features disabled, things like your call history and email recipient history (regardless of whether you're using iCloud Mail) are uploaded for example.
I'm rather baffled by how all of that can just go over people's heads, and they go back to debating whether Apple is mindful enough of their security and privacy or not.
From press release, “Bounties are doubled for qualifying findings in Lockdown Mode, up to a maximum of $2,000,000 — the highest maximum bounty payout in the industry.“
Appears Apple is not aware there was a $10 million bounty [1] paid out; unless when they say “in the industry” they mean phones, not bug bounties.
If Apple really believed it was secure, then even a $100 million bounty shouldn’t be a concern; 2 million, while clearly high, is no longer enough to pull in the best bounty hunters, in my opinion.
///// Re: Naming
Name conflicts with existing terms both Apple and consumers use. Naming should be unique so it’s possible to Google the unique name for this feature and only get valid search results.
///// Re: iCloud
While iMessage features are limited, it is neither blocked, nor is iCloud — and both are known to be vulnerable to nation-state demands on Apple due to iCloud not being end-to-end encrypted.
///// Re: iCloud end-to-end encryption
If Apple was serious about the topic, they would have already rolled out end-to-end encryption for iCloud years ago.
///// Re: Targeting
If Apple is logging whether this feature is on and sending that back to Apple, it will result in targeting by nation states even if this feature is “invincible” - which I have no reason to believe it is; basically, nation states can demand lists of users subject to their jurisdiction.
///// Re: Off vs Locked
“Wired connections with a computer or accessory are blocked when iPhone is locked.” — Why is this not the default, with an opt-in? Further, at the point you’re turning on this feature, locking the phone should explicitly warn the user of the risk of locking vs. turning the phone off. Lastly, when you turn an iPhone off in this mode, it should really be off; if it is, and activity is still detected, that's likely a good sign something is going on.
Self-managed MDM is the way to go for most of these. I think the main one that can't be achieved through MDM is the browser lockdown; MDM has a lot of other security policies available, though.
These two sound like good defaults for all iPhones.
> Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
> Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
What does it even mean to be a state-level actor? To me this is the same kind of bullshit/PR language that is used to sell so-called "military-grade" artefacts.
This is nonsense. Security breaches can be discovered and used by anyone with the right knowledge and skills. Geohot was not sponsored by the CIA or the FSB.
I think they're focusing on the notion of protecting against well-funded mercenary firms with the resources/time/ability/motivation to target specific individuals with specific exploits. I have a hard time believing that anyone would enable this Lockdown Mode _prior_ to being owned though.
> I have a hard time believing that anyone would enable this Lockdown Mode _prior_ to being owned though
I can imagine many use cases where they would, e.g. a journalist enabling this before working on an article critical of a foreign government. Or any government contractor, NGO worker, embassy worker, etc.
I know what the label is, but from a strictly technical point of view, a breach is a breach, an exploit is an exploit, this is not like AI research where you need server farms.
From my own experience and from the postmortems I have had access to, most of the time this is a work of patience and skill.
In my opinion, this is a way of telling the public that their devices are secure against everything but state actors, and this is simply not true.
Well, the last phone I got, it was directing me to Airbnbs with masonic ties who liked to tell stories of violence. I had my first masonic lesson of keeping my mouth shut or be killed when I was at primary school, but I don't care now; life is overrated!
> USB Restricted Mode prevents USB accessories that plug into the Lightning port from making data connections with an iPhone, iPad, or iPod Touch if your iOS device has been locked for over an hour.
Android asks every time for every device. There is no 1-hour grace period.
If Apple could somehow make phone calls and SMS not useless due to spam, that'd really save the average person. They must have the resources to throw at something like this. I'm not claiming to be an expert, and I'm not saying I'm right, but phone spam is fucking awful.
This seems to be a problem mostly localized to some countries. Device manufacturers should not be fighting a rotten network, the networks should be fixed instead.
Yeah but... here we are. In the US at least, I don't see this ever being addressed at the root. Everything between the user and the phone service is at least somewhat malleable, what's the problem with at least trying in one of those places?
> If Apple could somehow make phone and sms not useless due to spam
1) A full solution to this problem is going to depend on mobile carriers making changes. It isn't something which Apple can unilaterally fix.
2) This is completely irrelevant to the purpose of "Lockdown Mode". It's intended to protect high-risk users from certain sophisticated threats -- it isn't a feature which most users should use.
Surely that's the responsibility of the providers, though? Apple can improve the situation a bit, maybe, but you'd really need to get AT&T & co to crack down on it to have any chance of solving it for good.
I know that I've had approximately zero spam on my German number (which I've had for ~2.5 years). I'm not sure why: whether I'm just lucky, or whether it's much more under control here. My UK number definitely had problems with spam, though. Maybe a couple of spam calls a week.
Nice, glad to hear it's at least reasonable elsewhere. It's very, very bad in the US, at least for my partner and me. We started getting unsolicited calls days after starting the house-buying process because the credit reporting companies sell you off immediately. Very frustrating.
I do not know why anybody would believe any claim by Apple with respect to security without overwhelming empirical evidence supporting their claims. The default assumption in commercial software security, supported by literal decades of abject failure by every player, is that commercial software security is atrocious. To claim anything more than trivial security is an extraordinary claim, and thus demands extraordinary evidence before being accepted.
Apple has demonstrated no such evidence. In fact, the opposite is the case. Despite decades of assurances that their systems provide meaningful security, every single year we see their security torn apart by individuals and small teams with budgets that do not even constitute rounding errors to a Fortune 500 company. There is exactly no reason to believe they have meaningfully superior technical expertise with respect to security relative to the default standard of the industry.
However, this should be no surprise to anyone, as the security certifications that Apple advertises for iOS [1][2] are only “applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious.” [3][4]. I mean, look at [4]: the process used to certify their security is that their evaluators typed search terms into the internet and verified that every vulnerability that turned up was patched, that’s it. There is no requirement to even do an independent analysis that it protects against attackers with a basic attack potential; that is done at the next higher level of security, which they could have chosen to certify against, but did not.
To be fair, Apple has historically demonstrated the ability to certify against AVA_VAN.3, which demonstrates resistance to attackers with an enhanced-basic attack potential, but they have failed every time they have attempted to certify against AVA_VAN.4, which demonstrates resistance to attackers with a moderate attack potential. It should be no wonder that they cannot protect against moderate attack potential threats such as individuals or small teams, let alone high attack potential threats such as large organized crime groups and nations.
If Apple wants their security claims to be taken seriously, they should start by demonstrating their ability to protect against moderate attack potential threats via the internationally recognized security certification process they already use and advertise. Until then, the only thing we should trust is what they certify they can do (protect against script kiddies), not what they have failed to ever achieve in an auditable manner (protect against moderately skilled attackers).
For the vast majority of users the most realistic threat is simply being ordered to unlock their phone under the threat of force (from a criminal, a cop, a CBP agent, etc). This is way, way more likely than being attacked through an unknown JIT compiler vulnerability.
What would be really helpful is Apple implementing a way to have multiple iPhone profiles with plausible deniability (a la VeraCrypt) or some sort of compartmentalization (a la 1Password travel mode).
Of course that would mean people can start sharing their phones instead of buying one per person from Apple, so I'm not holding my breath.
Honestly, this is bad news, because it means Apple is no longer capable of offering both security and all features, but now needs to split them into groups, presumably because they need to keep up with (the clearly less secure) Android...
Security has never been a "secure or not" proposition; it's always a balance between convenience and safety against threats, threats that change depending on who you are and who is targeting you.
Some features are (understandably) almost impossible to make very safe. Take PDF viewing, for example: the format is so huge that there are bound to be holes in any implementation, just as NSO proved some time ago with the iMessage exploit.
I take this effort as something similar to the "Hardened Linux" effort. Just because it exists doesn't mean that Linux is "insecure"; it just means that if you really need to, there are more steps you can take to make it even more secure. Just like what Apple is doing here.
Security is always a tradeoff and there is no single answer. A feature for one person is another person's hell.
An acquaintance just lost all their data because they had enabled "format on too many missed passcodes" and their kid was playing with their phone... caused quite a few tears. On the other hand, that feature is invaluable to international travelers.
What a strange implementation of "format on too many missed passcodes". Apple (on iOS and watchOS) implements this, but after a number of failures the phone goes into progressively longer lockouts. So maybe after 3 failed attempts you have to wait 2 minutes, after the 4th 5 minutes, and before the final (formatting) attempt you have to wait something like 12 hours. This prevents the "kid playing with the phone" problem.
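The escalating-lockout idea described above can be sketched as a simple schedule. This is a hedged illustration: the delay values and the wipe threshold are made-up numbers chosen to match the comment's examples, not Apple's actual timings.

```python
# Illustrative escalating-lockout schedule; DELAYS[n] is the wait (in
# seconds) imposed after the n-th failed attempt. The values here are
# hypothetical, loosely following the comment's examples (2 min after
# 3 failures, 5 min after the 4th, ~12 hours before the final attempt).
DELAYS = [0, 0, 0, 120, 300, 900, 3600, 7200, 21600, 43200]
WIPE_AT = 10  # after this many failures, erase (only if the user opted in)

def next_unlock_delay(failed_attempts: int):
    """Seconds to wait before the next passcode attempt is allowed,
    or None once the wipe threshold is reached."""
    if failed_attempts >= WIPE_AT:
        return None  # device erases itself
    return DELAYS[failed_attempts]
```

The long forced waits mean a kid's random taps (or a brute-force attempt) can only reach the wipe threshold after many hours of idleness, which is exactly the mitigation the comment describes.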
Security by reducing attack surface is a standard, and sensible response.
What you are asking for is that Apple (or any company) be able to produce absolutely 100% bug free code, no matter the complexity or requirements. This feature is an acknowledgement that what you're asking for is an unreasonable demand for any company.
So Apple has looked at the attack surface present by default, and then provided an option that trades away presumably low-use features in exchange for removing a large attack surface. That is a trade-off: for example, any modern phone would be vastly more secure if all it could do is make phone calls, and everything else (the browser, apps, etc.) were disabled. But that end of the spectrum results in an impractically restricted device. In reality there's a middle ground, but for high-profile targets the trade-off sits closer to "just a phone" than it does for normal users.
An example is the RWX (writable and executable) region required to support JITting JS. The OS simply supporting such a memory region at all was a huge addition of attack surface to the platform: prior to that, every single executable page was protected by code signing; afterwards there was a region that by definition the OS could not verify, and it has been used by every attack since then. But disabling it simply disables the JIT; the JS interpreter still runs, so the impact is only that some web content runs slower, while the functionality itself remains.
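The W^X discipline that a JIT region breaks can be sketched with POSIX memory calls. This is a hedged illustration assuming Linux, driven via `ctypes`; nothing here is Apple's actual implementation. A W^X-respecting flow never holds a page writable and executable at the same time, whereas a JIT needs exactly that combination:

```python
import ctypes
import mmap

# Bind the libc calls we need; argtypes/restype must be set explicitly
# so 64-bit pointers round-trip correctly through ctypes.
libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]
libc.munmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

def wx_safe_emit() -> bool:
    """W^X discipline: map a page read+write (never executable), 'emit'
    code into it, then flip it to read+execute. At no point is the page
    writable and executable simultaneously; a JIT's RWX page is the
    combination this flow deliberately avoids."""
    addr = libc.mmap(None, mmap.PAGESIZE, mmap.PROT_READ | mmap.PROT_WRITE,
                     mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS, -1, 0)
    if not addr or addr == ctypes.c_void_p(-1).value:  # MAP_FAILED
        return False
    ctypes.memmove(addr, b"\xc3", 1)  # stand-in for JIT-emitted machine code
    ok = libc.mprotect(addr, mmap.PAGESIZE,
                       mmap.PROT_READ | mmap.PROT_EXEC) == 0
    libc.munmap(addr, mmap.PAGESIZE)
    return ok
```

Disabling the JIT, as Lockdown Mode does, means the process never asks for a writable-and-executable mapping in the first place, so that whole class of attack surface disappears while the interpreter keeps working.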
Similarly for Messages: receiving JPEGs is super common, receiving OpenEXR or whatever probably isn't, so removing everything other than common image formats by default again removes attack surface without realistically impacting the usability of Messages.
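That kind of attack-surface reduction is just an allowlist applied before any parser runs. A minimal sketch in that spirit (the MIME types and the attachment shape here are made up for illustration, not Messages' real data model):

```python
# Hypothetical allowlist: common image types pass through, everything
# else is dropped before any format-specific parsing code is reached.
ALLOWED_TYPES = {"image/jpeg", "image/png", "image/gif", "image/heic"}

def filter_attachments(attachments: list) -> list:
    """Keep only allowlisted attachment types; decoders for exotic
    formats (PDF, OpenEXR, ...) are simply never invoked."""
    return [a for a in attachments if a.get("mime") in ALLOWED_TYPES]
```

The security win comes from never reaching the complex decoders at all, not from the filter itself being clever.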
I see this as securing against "unknown unknowns". No software can ever be "100% bug free". If you can identify areas that are more likely to contain yet-undiscovered vulnerabilities and turn them off in advance, the device becomes more secure.
> Honestly, this is bad news, because it means Apple is no longer capable of offering both security and all features…
Absolutely not true.
There’s a difference between being secure while having all of the features and being secure against a state-level attacker. The vast majority of users are quite secure while enjoying all of the features of their iPhones.
For those who are being targeted, potentially in a life-or-death situation, losing the ability to send some attachment types in iMessage is trivial by comparison. Only a tiny percentage of iPhone users should ever have to enable this; it won’t impact the user experience of over 95% of iPhone users at all.
Security and convenience _can_ coexist, but you can't transition into a more secure world without breaking convenient, insecure stuff that already exists and that users expect to just work. Later they can ramp this up.
And yet this feels like it’s too little, too late. If I’m likely to be the target of the kind of state-sponsored malware “lockdown mode” supposedly protects me from, I shouldn’t have been using Apple products in the first place. Which raises the question: what are current security best practices to protect against state-level hostile actors?
Really? Not something like Tails or Qubes? Am I too paranoid? I’m genuinely interested in learning about this. What am I supposed to use these days when I’m working on a project that would make me a target for state-level actors?
I'm guessing it will run afoul of EU regulations. At the bare minimum there should be a way to level the playing field: individual applications and third-party application providers should have the same access as Apple's apps!
* If Safari and Messages are allowed, then all other apps should be allowed and have complete access to the device even in lockdown mode.
* If Apple gets access to any traffic from the device in lockdown mode, then all other applications should have full access to advertising metrics and device data as well.
At that point it's probably not much of a lockdown, but Apple can't have all the fun can it?
Marketing vaporware. The attacks used by NSO and other groups were exploits. You have two choices: either do not use the device with any external connection and not be exploited, or use the device with external services and be exploited. You cannot have it both ways. The underlying software would have to be written in a safe programming language, along with significant scrutiny of its logic.
We will never have this, because we are ever decreasing the quality of software development. Apple could do something about this, but that is a politically and financially expensive move for no extra financial gain.
As always people will eat this up, and business will continue unaffected as usual.
So Apple is saying that their "Lockdown Mode" protects against "highly targeted cyberattacks from private companies developing state-sponsored mercenary spyware".
That's an interesting wording, because it claims to protect you against... nothing that matters. Notably, it doesn't protect you against:
- The police. Don't get me wrong, I am all for letting the police do their job fighting crime, even if it means hacking iPhones, but even if you attracted police attention for a noble cause, Lockdown Mode won't save you; at least, it doesn't claim to.
- Foreign governments, as well as your own government. Notice how it mentions "private companies" specifically, as in, not public. And the cyberattacks themselves have to be performed by private companies; if the tools these companies develop are used by government entities, it doesn't count.
- Cybercriminals, the kind who are after your money. They are not "private companies", and they are usually not state-sponsored.
- Terrorist organizations, mafias, drug cartels, etc... again, not "private companies", and while they may be backed by states, they typically work for themselves.
The technical aspects have value, and I think giving the user the choice of wearing a tinfoil hat is great, but the claim they are making is deceptively weak if you read it carefully.
Cybercriminals are often state sponsored and do a bit of government hacking and their own hacking.
This is kind of a silly comment.
Making a phone harder to hack will help with foreign (and domestic) governments that buy exploits to target individuals as well as hackers trying to make money off of similar exploits.
I am not saying Apple did anything wrong with their solution, I think it is great, really.
I then thought: isn't that something that terrorists and pedophiles will love? If it is really that effective, I expect that soon enough we will see stories about Apple helping very, very bad people who are after your children. I don't think Apple wants to be associated with crime, and I was wondering what Apple's strategy would be.
And that's when I noticed that very specific claim, "highly targeted cyberattacks from private companies developing state-sponsored mercenary spyware". What are the words "private" and "mercenary" doing here? They do nothing but narrow the claim.
I am not calling conspiracy here, and I really think what Apple did is great, but I suspect that specific wording is Apple being cautious, probably about potential association with crime.
> They used an "invisible 0-click exploit"; where you don't even actually receive a text message or need to click any links or attachments.
AFAIK, yes, because Lockdown Mode disables any non-audited plugin code from running in response to the receipt of an iMessage message (which is what "disable formats other than images, link previews" et al really means under the covers.)