Apple previews Lockdown Mode (apple.com)
1577 points by todsacerdoti on July 6, 2022 | 742 comments



Let's not let the perfect be the enemy of the good.

This is a huge step forward for iPhone users. Look, I get it. From the typical HN perspective, this potentially looks like a lot of hype. But many of you aren't looking at it from a high level.

In the world we are living in now, especially given what's happening in the United States, the ability for the average person to protect themselves from well-funded, determined attackers couldn't come at a better time.

There's a huge gap between Fortune 500 executives, government officials, etc. and regular people in terms of the resources available to them to prevent state-sponsored attackers. It doesn't take much these days to go from a nobody to being on somebody's radar.

If you're a woman seeking an abortion in a state where it's illegal or severely restricted, you could be the target of malware from your local or state government or law enforcement. In Texas, you can sue anyone who aids and abets a woman who attempts to get an abortion for $10,000, which is enough to motivate someone to trick a target into installing malware on her phone.

No, it's not China or Russia coming for you but it doesn't take much to ruin someone's life.

I don't think this is virtue signaling or marketing hype by Apple; if anything, this is right in alignment with the stance they've had on privacy for years. Even for a company the size of Apple, putting up $10 million to fund organizations that investigate, expose, and prevent highly targeted cyberattacks isn't pocket change.

At the end of the day, this is all good news for user privacy and security going forward. I also suspect that if I lock down my iPhone, my other compatible devices using the same Apple ID will also lock down. No IT department required.


> In Texas, you can sue anyone who aids and abets a woman who attempts to get an abortion for $10,000, which is enough to get someone to trick someone into installing malware on a phone.

Anecdata for people who think this is unlikely: my wife had an issue getting unclaimed property back from the state of Texas and hired someone who advertised the ability to help. She turned out to be a bulldog with a ton of knowledge of the necessary bureaucracy. She put hours per week into it on our behalf for months, through many rounds of filing paperwork and then hounding bureaucrats on the phone by telling them exactly how and why we could sue if they ignored it. She did all that for a cut that was a fraction of the $10k abortion bounty. The $10k might seem like a symbolic gesture, but it will spawn a cottage industry of bounty hunters. No doubt most of them will be ideologically excited wannabes who quickly give it up, but some will be dogged and effective and will cultivate an expanding repertoire of skills. It's a terrifying prospect.

There will be many, many people who never previously entertained the idea of getting involved in serious criminality who now need protection from the prying eyes of the state and their fellow citizens. To look at it from a cold and opportunistic viewpoint, this could change the public perception of digital privacy from being just for dangerous creepy people to something that everybody should value.


To add to this: the whole point of the private right to action is so that anti-abortion groups can target individuals in order to create precedent-setting cases. This is a mechanism that is designed to be used by well-funded groups. The threat model here isn’t some rando deciding they want to sue you, it’s a team of determined lawyers that absolutely will take your case as far as they possibly can.


> the whole point of the private right to action is so that anti-abortion groups can target individuals in order to create precedent-setting cases.

Fairly sure this is wrong. The point was to create a mechanism to sue various people "in orbit" around an abortion without involving state officials. This was supposed to "immunize" the process from any Roe v. Wade-related block.

With Roe v. Wade now struck down, Texas can basically do whatever "it" wants w.r.t abortion, and the federal government cannot intervene. SB8 at this point is possibly (just possibly) a way to reduce state spending on abortion legal cases, but not much more beyond that.


You're right.

It's directly (and I believe explicitly) modelled on the Americans with Disabilities Act. The ADA creates a model in which private citizens can and do bring lawsuits against all types of organisations for any type of harm they can define.

This has spun out a cottage industry of disabled people whose full-time occupation is visiting everything from websites to restaurants, being harmed, and bringing lawsuits. While that may sound like a bad thing, it is in fact a very cost-effective way of enforcing the law without bureaucratic bloat. Strangely, it's been quite successful. The history of why this decision was made is very interesting.

For all you devs, this is why large American companies care so much about accessibility on their websites: it creates an almost unlimited liability on their end if you do it badly. Companies now scan websites for accessibility as soon as they're launched; then others will buy the lists of companies that 'fail' and visit those sites in order to be harmed. It's an interesting little cottage industry which keeps legitimate disability rights enforced quite nicely without too big a government.


The purpose of the private right to action was to get around Roe/Casey prior to the Supreme Court overruling both cases. The law was specifically designed to evade judicial review.

As a private plaintiff, you can typically sue a state official that is charged with enforcing a law in federal court on constitutional grounds. SB8 is written in such a way that state officials are barred from enforcing the law. Thus, it is effectively impossible to challenge in federal court because there is no state official that enforces the law, only private citizens, and thus there is no proper defendant.


What makes you say that?

My impression is that it fits the pattern of trying to disrupt society and government and create a vigilante citizenry, similar to encouraging people to arm themselves and use their firearms to prevent crimes.


[flagged]


No idea what you're even trying to reference in your second sentence, but the first sentence "community law enforcement" is a red flag in my book. The law creates a fiscal incentive for people to report their neighbors for actions that were federally protected at the time this law was passed. Neighbor vs. neighbor. Citizen vs. citizen. We spend more on policing than any country in the world and yet still need to deputize citizens in a heavily armed state? It's not my neighbor's damn business to know if someone in my household seeks an abortion.

If fiscally incentivizing vigilantism isn't dystopian I don't know what is.


Deputization and vigilantism are antonyms; your framing is incoherent.

An elected legislature sanctioning civil action is "dystopian", but rioting and arson? Intimidating judges at their homes? Laundering a decade of domestic terrorism into universities and district attorneys' offices? Never heard of that stuff!

Not surprising to me, just absurd.


Nobody understands why you are talking about these other crimes.


Because the pearl-clutching over "turning citizens against one another" is proven disingenuous by the rioting.


Protests are typically more of a unity thing.


The discussion was about abortion in the context of digital privacy. You are the one who brought up all of these other things, which have nothing to do with the topic at hand. It's whataboutism and not worth engaging with.


Vigilantism without oversight, checks and balances quickly devolves into posses terrorizing and brutalizing people they simply don't like. The US has a long history of this.

Vigilantism is not the same thing as community-led policing from members of said communities.


That's nice, but none of this has anything to do with "vigilantism." The other guy only used that word because he thought it sounded scary. If either of you knew what it meant you would understand that government-sanctioned action, for example a lawsuit with standing provided by statute, is the complete opposite of vigilantism in its entire definition.


> The other guy only used that word because he thought it sounded scary.

Argument by disparagement. Popular, but it has no force in reason.


Once you start using words to mean their own opposite, your claim to reason flies out the window.


> community law enforcement

I've never heard that term. Usually, the state has a monopoly on violence and justice - that's a definition of a sovereign state. Law enforcement is performed by police. Not sure where the other stuff you mention comes from.


I've been warned before by dang here on this site not to spew Anti-American propaganda (that was pre Jan 6, I think), but I never did such a thing. When I studied in SF in 1999, I freaking loved it. But I've seen some things since that are deeply troubling. It seems more people are catching up now to what I observed: if you still think that the US is a modern western democracy with reasonable values, wake up. I mean, people hunting other people who need an abortion for $10K? How can you read that and not have a cold chill running down your spine?


> if you still think that the US is a modern western democracy with reasonable values, wake up

One of the quirks, and ongoing debates, of the US is the strong deference to states’ rights. Don’t confuse US law with Texas law. The majority of the population of the United States actually lives in states with abortion laws that are more liberal than what you’d find in the EU, for example.

The state versus federal distinction can be very confusing to people who view US politics through the lens of the worst news stories that come out of every state. The entire US has a land mass and population on the same order as that of the entire EU, and many states have populations similar to that of individual EU countries. We have a single state (California) that has an economy larger than all of the UK combined and almost as large as India.

The United States is big and diverse. We’re going through a phase where federal power is being reduced due to politics and some of the states are doing weird stuff. If you only view the US through news stories and imagine the US as a conglomeration of all of the worst and weirdest news stories from individual states, you’re going to have a very negative view of the US in general.


This kind of reasoning is exactly the problem that the US faces. "It's not really that bad, it's just a few silly states, overall we do know better". First, Texas is a pretty big state, too. You cannot just discount it as not mattering to the overall picture. Second, you are ONE country, you have ONE president. And what the majority of Americans think doesn't seem to matter when it comes to the law, or to elections. Keep telling yourself that it's not that bad because it's so diverse, and soon it will be much less diverse than you can imagine right now.


> Second, you are ONE country, you have ONE president.

We also have fifty governors, 100 senators, 435 house reps, nine supreme court justices, and countless state legislators. We do not live in a dictatorship. Yet.

Of these, it's the court that has changed most wildly over the past 8 years.

> soon it will be much less diverse than you can imagine right now.

I think it's possible to say "overturning Roe v. Wade didn't make abortion illegal in California, as your worst case presumption might assume" and still believe that GOP gerrymandering, Supreme Court appointments, and attempted coups are an existential threat to majority rule.


Of course nuance is important when thinking about solutions to the problem. But if a substantial portion of the country (I don't know, is it 30%?) is basically not democratic anymore, you had better be quick with coming up with and implementing a solution. And how exactly would a solution look without a civil war?


As a point of clarity, those state laws are all passed by democratically elected representatives. Gerrymandering may impact outcomes, but Ds are every bit as good at it as Rs. For example, Oregon's legislative districts are comically gerrymandered by Democrats.


Well said. Definitely agree that it’s a ridiculous to say “well those other states aren’t that big of a deal.”

For the 4.5 million of us in Louisiana, the current laws are a pretty huge deal. But according to him we apparently don’t matter when having a national dialogue.


> For the 4.5 million of us in Louisiana, the current laws are a pretty huge deal.

Yes, but the idea that those laws are being imposed on an unwilling population by an extremist minority is wrong. Half of Louisiana residents believe abortion should be illegal in most or all cases (https://www.theadvocate.com/baton_rouge/news/article_4973b4e...), and many in the other half likely support restrictions that weren't allowed under Roe/Casey. This is what democracy looks like, and an example of how democracy isn't always a good thing.


The margin of error is 5.8%, i.e. the majority could also be in favor of abortion access. And even if we concede most want it gone, a slim majority does not in any way mean it should be denied to other people. We also need to define "most"; that's a bad phrasing of the question IMO. For instance, we banned it even in cases of rape or incest. I'm sure plenty of people who are otherwise against it make a provision for that, but the question makes no distinction about some of those more divisive situations.

The GOP controls this state in a wildly disproportionate way. They passed it because of that, not because of a possible slim majority. We’d have legalized weed if that’s all it took.


So you’re saying diversity of opinions is a threat to diversity?


Unlimited tolerance must not be extended to the aggressively intolerant, because this will destroy the unlimited tolerance itself. A central philosophical principle of a free society, per Karl Popper.


Popper was specifically referring to levels of intolerance that cause people to move from the world of discussion to the world of physical force.

That is, per Popper, tolerating people who will physically harm you as part of the discussion means a discussion cannot be meaningfully held at all.

But people really like to stretch this to whatever edge-case meaning of "intolerant" would be convenient to them at the time...


Well, the law certainly represents physical force.


Yes. Certain opinions you cannot allow to exist if you want your democracy to continue to function.


Incorrect. There is no such thing as "not allowed to exist". Rather than attempt to bury what spontaneously manifests, we ought to lift these ideas up and publicly demonstrate to all how they are torn asunder.


Not allowed to exist is an unfortunate formulation. Marvin above said it better.


The people in those states are your fellow citizens. If you don't care about their well being, then you might as well split now. What exactly makes you United besides a cultish devotion to your origin story and a self image of Freedom Loving that hasn't matched reality since your independence.

The freedom of individual states' rights you espouse has always been used almost exclusively for civil rights violations.

If you don't do something about that, you are complicit. Now you might say there is nothing you CAN do about it.

But that is the problem. That is WHY the rest of the free world looks at you and says "You are not a democracy."

Because you couldn't change this even if you wanted to.

The last time you tried you came close. You had to fight a war over it but you almost got there. Then you fucked it up during the Reconstruction.


Sorry to be overly pedantic but India is many times bigger than California. It's around 40% of the size of the US.


GP is talking about the size of the economy, not the area of land.


That's clunky grammar then. It's not trivial to context-switch when the same word 'large' is used for both: "We have a single state (California) that has an economy larger than all of the UK combined and almost as large as India." I think it's ambiguous at best.


I’ve spent four years in IL and consider them among the happiest of my life.

The US right now is a fucking shitshow. It’s one bad election away from being yet another gunslinging theocracy hating women and gay people. They’d probably switch sides and bomb Ukraine, without necessarily looking at a map.


People talk about the happy and glory years in the US in contrast to what is happening today as if both aren't borne out of the same root cause.

When you don't have regulations, strong federal oversight, high taxes, or invest in social programs, then you can have a fucking excellent party nearly all of the time.

Until the economy tanks or the American Taliban decides their party involves telling you what you can do with your body.

It's a very immature/libertarian way to run a society. Wonderful when things are good. Horrific when things are bad.

The stakes are higher than individual hedonism now, though. American Prosperity is boiling our atmosphere and by the end of the century, excess American contributions to carbon emissions will have killed more people in the 3rd world than Hitler and Stalin put together. This is not an exaggeration. Hundreds of millions will die in Bangladesh and India from rising seas and heatwaves because Americans wanted the freedom of the house, the picket fence, the 2 F150s, and the 2 hour commute, and Next Day Shipping From Amazon for All The Things.


Well if Trump or his followers get the top seat again, Ukraine, and with it half of Europe is fucked. He was pretty clear about that.

With this, a steep decline in US power projection is inevitable; I mean, you can't lose half a billion rich, Western people almost 100% aligned with your values.


You say that but where are they going to align themselves to? China? The US benefits by having a lack of good competition. Things would have to get extremely bad for the rest of the west to dump the US. Word on the street is that Trump is going to announce a run for 2024 as a way to get ahead of his rivals. He has a reasonable shot at winning barring unforeseen circumstances. You can't dismiss the odds given how incredibly poorly the Democrats have messed up their two years since taking office.

I feel that if given another Trump win, the rest of the west will be forced to remain in another holding pattern for four years and suffer whatever consequences occur hoping that four years later things improve.


In the short term that sounds about right. Still, I would guess that if EU and US relations went from a cultural friendship to a strictly transactional nature, that would have big consequences. The EU would try to be more self-sufficient and, for one, import less from the US. The EU would probably try to find closer relations with countries with semi-big militaries (totally guessing here: India, South Korea, Japan, Australia, New Zealand, Turkey). NATO would of course start to look more shaky and the idea of an EU army more of a possibility. Maybe even a new NATO would form without the US?


I could see the EU being more self-sufficient. But at the same time it will be difficult given that they have a serious population decline. You need that population to grow the GDP. Furthermore, some essential industries seem to have been completely abandoned by the EU (looking at competitors to the FAANG companies).

>The EU would probably try to find closer relations to countries with semi big military, totally guessing here India, South Korea, Japan, Australia, New Zealand, Turkey.

This is assuming they have the capability to project forces anywhere in the world which given these countries you listed, feels unlikely either now or in the distant future.


I hadn't thought about this, but you are right. Hell, they don't necessarily even have to be individually targeted attacks by bounty hunters. Perform attacks en masse to read people's personal messages/e-mails, use filtering to find messages of people discussing getting abortions, and then parallel-construct an innocent-sounding story to use in court. With $10k per success, you really don't need that many hits to start making big money.


I’ll just leave this video here: https://www.tiktok.com/t/ZTRNqQCvF/?k=1


Great.

In a discussion about privacy, we're using TikTok, which harvests our data, aren't we?

Communist China welcomes you. Have you heard about the firm's recent leak story and the FCC case?


IIRC they're civil suits as well, which have a much lower burden of proof required for judgement.


Also, I personally know many old people who use a device just for managing their finances as they are inexperienced with security and fear their main device might get hacked.

This functionality makes a lot of sense in such a case.


I dare say app functionality will be reduced by policies that cut off capabilities such as GPS, which may prevent legitimate apps from functioning.


Yeah except putting malware on someone's phone is actually illegal, so seems like a pretty bad tradeoff since, ya know, you'd have to mention how you got the data when you sue someone in court.


Police use this sort of tactic (parallel construction) all the time, though: they collect evidence in ways not admissible in court, but use knowledge of that evidence to find new lines of investigation and new evidence that can be admissible in court.

Presumably someone could use malware on someone's phone to know who to target with an abortion-related lawsuit, and then use legal forms of investigation to find evidence to prove that they got an abortion.


The trick of course is that the malware can't be traced back to the police. Otherwise, the parallel construction narrative vanishes, as well as potentially a bunch of previous convictions that were constructed using the same technique -- At least until the conservative supreme court neuters the 4th amendment.

This needs to be the case of course, unless you support law enforcement agencies doing unlawful actions to get convictions.


Isn't the parallel construction narrative that it doesn't matter how you got the information as long as after you get it, you can show a way that you could have gotten it?

Even if the method used was illegal, and found to be illegal in court, the evidence is still admissible iirc?


Ever read Neal Stephenson's Cryptonomicon? The WW2 part shows a team going through elaborate measures to create a plausible way that the allies can find out what the Germans are up to without revealing that they can read all of their messages. They would tell a submarine to surface at a particular location at a particular time and report what they see, for instance, and the sub crew would have no idea why, to produce a plausible explanation of why some German action was discovered.

Parallel construction often means they hide how they got the original information from the court and from the defense.


The neat thing is those parts of Cryptonomicon were largely based on things that actually happened:

https://en.wikipedia.org/wiki/Ultra#Safeguarding_of_sources


No, that’s not how it works at all. You use illegally gained information to find other avenues to get evidence that on the surface look ok.

For example, you use illegally gained access to messages to find out about a meeting at a particular time. Then when the meeting to exchange contraband is happening, “a concerned anonymous citizen” calls in a tip of suspicious behavior and a patrol cop stumbled onto a bust.


The evidence is not admissible. It is considered 'fruit of the poisonous tree'. Parallel construction only works if you can hide the illegal investigation from the court.


…which is why they work really hard to hide it from the court. That’s the point of parallel construction.


> The trick of course is that the malware can't be traced back to the police

Isn't this even more complicated to prove in the case of private bounty hunters, instead of the police?


> Police use this sort of tactic (parallel construction) all the time

It's really disgusting that we allow this.


We don’t. It’s not legal and has to be hidden by the cops.

It’s like saying that it’s disgusting we allow cops to steal seized property. We don’t, but it happens.


Even when it is discovered, there are zero adverse consequences from the wrongdoers, so it's hard to say that we don't tolerate it.


Parallel construction isn’t illegal.


The Court has never ruled on parallel construction. I think it's probably illegal. There was Herring v. United States, but that was a case where someone was accidentally flagged as having an outstanding warrant. Intentionally passing illegally acquired tips is probably illegal; the trick is that it's impossible to prove, and there's no penalty other than getting evidence derived from the tip stricken from the record.


But lying to the court is, so any related testimony would be illegal, making the scheme as a whole illegal.


We allow it by not stopping it.


Yes! I have no doubt this is exactly what’s happening.


It doesn’t happen “all the time”. The term also applies, and is mainly used, to disguise lawful sources, such as undercover agents.

While there is a problem with US police acting unlawfully, it mostly happens in specific situations. At the federal level, they are much better behaved. And the incentive structure just doesn’t make it worthwhile to break the law


What do you think are the odds we see parallel construction via divine inspiration or psychic detectives in the next couple years?


Getting information through an illegal trawl is an amazingly effective way of working out how to get related information "legally".

Find out from the phone that they have an appointment at a particular time and place? It's easy to just be there and photograph them, "as part of occasional surveillance" or whatever.



it's trivial for well-funded organizations to get around such legal issues when they use something called “parallel construction”

this is when evidence is collected in nefarious and often illegal ways. it is then given to the organization which will weaponize the information. this organization then launders how they acquired the evidence, obscuring the shady way it was originally obtained.

there is no shortage of instances where different groups (including local police) have laundered how evidence was obtained to get around legality requirements for obtaining evidence. [various links below]

as the above commenter highlights, it’s about to get even more terrifying as incredibly well funded, incredibly authoritarian groups jump into the fray using religion as their excuse.

https://www.eff.org/deeplinks/2013/08/dea-and-nsa-team-intel...

https://www.jurist.org/news/2018/01/hrw-us-authorities-conce...

https://reason.com/2018/01/09/federal-agencies-may-be-regula...

https://www.aclu.org/blog/privacy-technology/surveillance-te...

https://www.techdirt.com/2014/02/03/parallel-construction-re...

https://www.hrw.org/report/2018/01/09/dark-side/secret-origi...

https://www.scmagazine.com/news/security-news/fbi-stingray-n...

https://www.wired.com/story/stingray-secret-surveillance-pro...


You make the bold assumption all courts are fair.


There are LITERALLY abortion bounty hunters in Texas, who earn money by hounding women seeking abortions and turning them in for profit. I cannot believe the state of this country.


Does anyone know how many abortion bounty hunters there are? I imagine a lot of people assume this is rare/hyperbole.



As bad as the law is, these links don't answer the question of how many there are.


What needs to happen to such bounty hunters probably isn't safe to print.


Any way you can name the person who helped with the unclaimed property issue?


I have mixed feelings about this.

Lockdown Mode basically cripples the phone, feature-wise. It's not quite to the point where I'd (even hyperbolically) say "why don't you just get an old dumb phone instead", but still...

The right thing to do would be to redesign the system from the bottom up to actually be secure in the face of vulnerabilities in any of these features that get disabled because they can be dangerous for people. (And maybe Apple is working on this behind the scenes, which will take them years to complete.)

But, agreed: let's not let perfect be the enemy of the good. It's better to have this option than to not have it, even though it likely creates a super restricted user experience that probably isn't particularly pleasant to use.


> Lockdown Mode basically cripples the phone, feature-wise. It's not quite to the point where I'd (even hyperbolically) say "why don't you just get an old dumb phone instead", but still...

The problem is that phones (of the "dumb"/"feature" variety) are running OSes that don't get nearly the security attention, or have the hardware security features, that iOS devices do.

I carry a KaiOS feature phone as my personal phone (when I remember it). Apple pissed me off enough with the CSAM stuff that I wanted to experiment with alternatives, and I've done so. However, I don't pretend KaiOS is particularly "hard" against attackers - it's almost certainly not. But neither does it have much of an attack surface. It doesn't even try to render emoji, they're just black rectangles. And neither does it try to, say, render weird old Xerox image formats.

I would trust an iOS device with "most of the complex attack surfaces turned off" far more than I'd trust a KaiOS or stripped Android device. You get all the hardware protections, regular OS updates, a bug bounty program focused on this mode, and the smaller attack surface window of Lockdown.

I'm incredibly excited by it, because it turns off all the stuff I don't want in a phone anyway.

Unfortunately, "crickets on CSAM" is a problem too. If they say they're not going to ship that ill conceived feature, I might move back to iOS. If not, well... I'll probably play with Lockdown mode for a week or two and then go back to the Flip.


If you opt out of/disable iCloud Photo Library, then CSAM scanning isn't active, right? It applies to iMessage only because iMessage integrates with iCloud Photo Library.

Again, the CSAM "scandal" was actually an improvement on what the other online photo services do (constantly scanning your entire library of photos with no controls in place). It's just that the improvement involved on-device scanning, which folks seem allergic to. But you can opt out, so it's still better than KaiOS.


The claim is that if you opt out, it's disabled, yes. However, I object, fundamentally, to the entire concept of using my device to check my content for your legal requirements.

If I store content on your server, yes, absolutely, you can use your resources to check the stuff I've stored for what you define as badness.

But Apple's system is using my device to scan for their definition of badness. If they'd then said, "And this allows us to do iCloud E2EE," well, OK, this is a discussion to have. Except they didn't and haven't. It is, as designed, "I use my device to scan stuff for you, and then you can still scan it."

And as a direct result, the EU is now pushing for "badness scanning" in all sorts of E2EE channels, to include searching for "grooming" in text chats. "But Apple said they could do it! Why can't you do the same thing?" is a valid argument from a politician's point of view.

KaiOS doesn't have anything in the way of photo uploading in the first place.


But the scanning is only applied to photos being stored in the cloud. What difference does it make which piece of metal is doing the actual scanning if the practical result is the same?
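For context, the "scanning" under discussion is conceptually just perceptual-hash matching: hash each photo queued for upload and compare it against a database of known-bad hashes. Here is a toy sketch in Python using a made-up average-hash and a hypothetical blocklist; Apple's actual system uses a neural-network hash (NeuralHash) plus blinded matching and threshold secret sharing, none of which is modeled here:

```python
# Toy perceptual-hash matching, loosely in the spirit of on-device CSAM
# scanning. Hashes and threshold are hypothetical, not Apple's.

def average_hash(pixels):
    """1 bit per pixel: 1 where the pixel is >= the image mean, else 0."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical "known-bad" hash database (in reality, blinded hashes).
BLOCKLIST = {0b1111000011110000}

def matches_blocklist(pixels, threshold=2):
    """True if the image's hash is within `threshold` bits of a bad hash."""
    h = average_hash(pixels)
    return any(hamming(h, bad) <= threshold for bad in BLOCKLIST)

# A 16-"pixel" grayscale image whose hash lands on a blocklisted value.
img = [200, 210, 190, 220, 30, 10, 20, 25,
       205, 215, 195, 225, 15, 5, 35, 40]
```

The point of using a perceptual rather than cryptographic hash is robustness: small edits to an image move its hash only a few bits, so near-duplicates still match within the Hamming-distance threshold, while unrelated images don't.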


Well, putting aside that CSAM isn't active at all at the moment, you're correct it didn't apply to iMessage (sending an image in iMessage couldn't trigger it unless the user saved the image), and that iCloud Photo Library needed to be on.


This is overblown hyperbole; it disables a few features a lot of people have never even heard of.

Disabling the JIT compiler makes the browser a bit slower. It does not cripple it.

Disabling profiles, debugging and MDM? These are not useful unless you’re in ‘enterprise’ or a developer.

Can’t send documents or links through iMessage? Use another service or copy and paste the links you actually want to open.

Really a tiny price to pay for the added security.


I agree, this mode seems like something most people could manage without difficulty. Amusingly, my (non-Apple) system is much more locked down than this and isn't exactly unusable IMO, although some things might be harder to manage on a phone.

Microsoft did some tests for Edgium's "Super Duper Secure Mode" and found that disabling JIT improves real world performance more often than it makes it worse (and usually makes no difference):

https://microsoftedge.github.io/edgevr/posts/Super-Duper-Sec...

Disabling JIT makes it possible to enable some additional exploit mitigation methods. A follow-up article mentioned that few people who tried it noticed a difference:

https://microsoftedge.github.io/edgevr/posts/Introducing-Enh...

"It is worth mentioning that when we originally had this idea, we doubted our Microsoft Edge peers would even consider it. We quietly made changes to our browser without explicitly telling them the specifics and then asked them weeks later to see if they noticed the change. They would always say no, and only then would we inform them that we disabled the JIT. After surprising multiple developers in Microsoft Edge, we got the support needed to try this experiment. One can’t help but wonder what other well established assumptions about users and the web we should reconsider."


Yea I am a developer but when you put it like that I would consider getting a phone for debugging and adding lockdown to my personal phone. I’m not Jeff Bezos nor do I intend to be but at least I would like to support this and see what the experience looks like.


Yup, there is little downside to supporting this concept as it should inevitably move others to adopt similar functionality. It isn’t the perfect solution or likely even the best idea in the room but the biggest player just made a big move in the consumer’s favor. It does coincide with a big privacy push that will keep their market share up so not really benevolence.


Same, I'm planning on running this on a spare iPhone, looking into whether I'd be OK running it on my daily carry phone.

I consider myself "recreationally paranoid", I enjoy locking my stuff down for fun, not because I ever think anyone's gonna burn an NSO zero day to get into my stuff.


"Disabling profiles, debugging and MDM?"

I'm not really going to argue this but, just in case it is interesting for you and others to learn:

I use profiles/configurator on my kids' iphones ...


Note that this doesn't disable MDM entirely: it disables adding new MDM profiles after it's enabled (I'd hope there's a "do you trust the existing one with your life?" prompt…), which seems like a reasonable compromise for preventing spear phishing to install new profiles.


> The right thing to do would be to redesign the system from the bottom up to actually be secure in the face of vulnerabilities

i understand the impulse to immediately question if this might solve security, but it just won’t. there are some classes of known vulnerabilities which it may mitigate, but at best it would be a temporary security solution.

security is hard.

we also need to remember that we would, with almost 100% certainty, reintroduce long-forgotten mitigations that someone silently added years ago without making a big deal of it. or even mitigations which were made a big deal of, but that was a decade ago and is therefore long forgotten.

we have a tendency to think those who built complex systems before us were unenlightened, or lazy, or primitive. this often really isn’t the case.

anyone who has worked on large projects will inevitably learn the hard way that scale adds incredible fractal depths of complexities that we can’t dream of until it slaps us in the face. so we put out that fire, do not-nearly-enough-documenting on why or what caused it so future people might avoid the same mistake, and then we continue running up the hill.

security is hard.

and of course sometimes a from-scratch-rebuild might make sense but we’d be looking at years and years of relearning mistakes which were previously learned and corrected for.

security is hard.


I'm not so sure. They _could_ start from scratch, building the same feature set but paying extra attention to security in every possible aspect of design - but would you then have a system guaranteed to be free of malware? Never. We are talking nation-state actors here. Doing that would also ignore the really great (if merely implied) admission that Apple is making here: reducing the attack surface is the only way to mitigate the unknown unknowns of software security.


Another reason "from scratch" is just crazy talk: if they start from scratch, it means literally redoing all the libraries. The thought of Apple redoing parsers for PDF, JPEG, BMP, PNG, XML, etc. doesn't really inspire confidence in me. They would probably create more bugs than the working libraries have. Unless they do it in Rust, but even then some bugs inevitably remain.


Lol, may as well use a dumb phone, because to redesign it from the ground up and keep the same features would land you back at square one.


If the state is after you, even low-level state actors, all it takes is a court order or subpoena to compel any of the parties involved with your phone or data to hand over your data or start collecting it.

If your threat model includes any level of the US government, and that includes women seeking abortions in states where it is illegal, you cannot rely on a US-based company's tech to protect you from the law.


There are state actors other than the US Government, along with plenty of non-state actors who are willing to use illegal techniques on occasion, and this does increase people's protection against those actors.

If you're in a developing country and you engage in activism against some questionable project by the state owned mining company, you're probably not going to get the full force of the NSA directed against you. But your country's domestic intelligence agency may be interested, and they probably only have off the shelf spyware to work with.


Pretty sure most things are stored encrypted/delivered encrypted only to be decrypted and rendered on your phone. Meaning Apple/your provider have nothing to give up for the hypothetical US government demand.


To add to the other comment, Apple installed on-device scanning to iOS as far back as version 14.3 (https://pocketnow.com/neuralhash-code-found-in-ios-14-3-appl...). They claim they won't activate it without a court or government order, but these are becoming easier and easier to obtain. Under the Patriot Act, virtually anyone's electronic devices may be searched for any reason. In effect this means that Apple has access to all information on all iOS devices, and the government may access any of these at will.


This is incorrect, iCloud backups are deliberately unencrypted.

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

I haven't heard of any changes to this to-date.


Sure, they’re not E2EE, but stuff like iMessages are E2EE (assuming iCloud backups are turned off so the keys aren’t included in the backup).


“iCloud Data Recovery Service If you forget your password or device passcode, iCloud Data Recovery Service can help you decrypt your data so you can regain access to your photos, notes, documents, device backups, and more. Data types that are protected by end-to-end encryption—such as your Keychain, Messages, Screen Time, and Health data—are not accessible via iCloud Data Recovery Service. Your device passcodes, which only you know, are required to decrypt and access them. Only you can access this information, and only on devices where you're signed in to iCloud.”

https://support.apple.com/en-us/HT202303

That seems pretty clear to me, but maybe it’s misleading?


> There's a huge gap between Fortune 500 executives, government officials, etc. and regular people in terms of the resources available to them to prevent state-sponsored attackers. It doesn't take much these days to go from a nobody to being on somebody's radar.

It's also a question of whether you want that. Anyone can take anti-phishing training; it just takes a lot of time. Want to download a mod for a game? You'd better have a separate gaming machine with no important data on it and, to be sure, on a separate network. Want to buy a phone? Better drive to a random store; ordering is too dangerous.

Sure, it's easy to get on the radar, but avoiding a state-sponsored hack is also a lot of effort. Fortune 500 executives need to put that effort in and they do have the money to make it happen, but for most people, the problem is not the cost.


A prominent activist was targeted and her iPhone compromised (owned). She ended up in prison/tortured because of it.

Did not look good for Apple. For a company of their means, they had to do something.


This was a very expensive hack. These are not the kind of thing people throw away on an abortion case.

When you use the hack and it's discovered, it'll get patched and you'll need a new one. These cost six figures at minimum to acquire.



I wonder, why doesn't Apple (and MS, Google, ...) throw all their weight into the ring and lobby for making selling exploits commercially a crime? It should be up there with counterfeiting money or selling nuclear secrets. NSO Group should be on sanctions lists. Politicians should be ranting about how dangerous it is that foreign companies and countries can spy on US citizens (instead of what they are usually ranting about).

You could wake up one morning, and every billboard in Washington, every newspaper will have ads for this issue. Every representative would be followed around by lobbyists. And Apple could pay it from their coffee money.

Now, I get why we don't crack down harder on selling exploits. First, intelligence agencies love NOBUS (No one but us) exploits and believe something like this exists. Second, it is convenient because sometimes foreign intelligence agencies are used to spy where domestic agencies are not allowed to; and third, the US could probably do little (officially) against companies, say, in Israel.

But this is totally the kind of issue that you could escalate into a bipartisan national security thing. And it would be an incredible marketing and security win if Apple could push any stricter legislation in that direction.


"I wonder, why doesn't Apple (and MS, Google, ...) throw all their weight into the ring and lobby for making selling exploits commercially a crime?"

I would be strongly, strongly opposed to this.

It is a clear-cut free speech / first amendment issue.

If you don't believe me, just imagine yourself describing the pseudocode of an exploit to someone over the phone - or sketching out the details of a vulnerability in a short note.

I believe we won't get to this place because we have the first amendment but I would really love to not waste ten years fighting about it ...


I would actually say morally it is clear cut in the opposite direction. Imagine you hack into a company or a government computer and steal secrets. That is clearly illegal.

Now imagine you figure out how to do the hack, do all the preparation, and sell it ready to use to somebody. And they are open about the fact that they are selling it to foreign powers. This should definitely be illegal, too. In the physical world, you also probably shouldn't be able to go around and sell instructions how to break into cars or houses.

> If you don't believe me, just imagine yourself describing the pseudocode of an exploit to someone over the phone - or sketching out the details of a vulnerability in a short note.

I don't see how any of this would be affected. You could still do hacking and security research; you could get bug bounties, report bugs to the vendor or the government, or even disclose them to the public. You just shouldn't be allowed to sell that kind of information to a third party.

There are many laws like that right now. In the case of insider trading, you are not allowed to share certain nonpublic information against some benefit with others.


One of my favorite youtube channels is Lockpicking Lawyer. He shows how to break into all kinds of things by defeating physical security. The videos are "free" but of course he's making money off ad views, sponsors etc like any youtuber.

Should that be illegal?


> Now imagine you figure out how to do the hack, do all the preparation, and sell it ready to use to somebody.

Now imagine you notify the vendor that they have a grave security flaw in their product. They could totally turn you in to the police and the PoC is sufficient to consider you guilty. You won't be able to prove yourself innocent without a long, expensive and life-destroying legal battle.

It would have a massive chilling effect on everything else instead of what you originally intended.


Lol, this is a whole lotta faith based on nothing. Sorry bud, Aussie laws are gonna get you here. Your Apple device can be backdoored courtesy of Australian laws, and Apple's not allowed to inform you it's happened. If you think Lockdown Mode is gonna prevent this, you're 100% dreaming. Much lulz. Y'all should just put less data on your phone if you're concerned about others knowing that data.


Absolutely, the Australian government has put in place some questionable (bipartisan) security laws in the last few years, and Apple _may_ comply with a request under them (even though it has famously refused many times).

However the Australian government attacking you specifically isn't the only problem this solves.


I agree with the rest of your comment, but this

> Even for a company the size of Apple, putting up $10 million to fund organizations that investigate, expose, and prevent highly targeted cyberattacks isn't pocket change.

is kind of funny, as it’s about 1/20000 of their total cash reserves. With 20000 in my savings account, it’d be equivalent to giving 1 dollar to charity. In other words, pocket change :)


It's still ridiculously good by bug bounty standards.

Zero-day buyers are going to have a hard time topping that.


Bounty is $2 million, grant is $10 million.

You could easily get more for selling a zero-day like this than for reporting it to Apple. And if whether this mode is turned on is reported back to Apple or is remotely detectable, that signal combined with a zero day would be a goldmine; I cover this and other issues in my comments on the topic:

https://news.ycombinator.com/item?id=32006436


I like money but something tells me targets of such attacks might end up dead, so it’s more about ethical considerations rather than who pays better. The bounty won’t sway everyone but $2m would sway more people than $1m which would be more than $10k


where are the cash reserves documented?


see: https://investor.apple.com/investor-relations/default.aspx

Specifically, the 2022 Q2 financial statement (it's a PDF). Under "Cash and cash equivalents" on the 2nd page, you will see: 28,098

That's in millions of dollars (see the top of that page for the units), so they have about 28 billion USD just lying around.

10M / 28,098M ≈ 0.0004, so it's about 0.04% of their cash.
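As a quick sanity check, that ratio can be computed directly from the two figures quoted above (the $10M grant and the $28,098M "Cash and cash equivalents" line):

```python
# Figures as quoted in this thread, both in millions of USD.
grant = 10        # Apple's Lockdown Mode grant
cash = 28_098     # "Cash and cash equivalents", Q2 2022 statement

fraction = grant / cash
print(f"{fraction:.5f}")   # ≈ 0.00036
print(f"{fraction:.2%}")   # ≈ 0.04%
```

So the "0.04% of their cash" figure above holds up.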


Thank you


Is there any topic Roe v Wade can't be shoehorned in to?


There is no topic that Americans won't somehow manage to shoehorn unrelated American politics into, no. No matter how little sense it makes.


Only some Americans - namely, the ones who started the culture war in the first place.


> If you're a woman seeking an abortion in a state where it's illegal or severely restricted, you could be the target of malware from your local or state government or law enforcement.

Let's not get in above our heads, here: if the US government wants to know what's on your iPhone, they still have the faculties to retrieve that information. Setting your iPhone in a lockdown mode isn't going to let you escape the purview of government surveillance, and if it did then Apple wouldn't be announcing it today. We're all targets of government malware, and the way they ensure we all keep it installed is simple: they just make Apple and Google write it for them. This pervasive idea that Apple is somehow escaping the jurisdiction of PRISM is pretty hysterical, and it makes me excited for the first Senators to get caught paying for prostitution services with Apple Pay inside Lockdown Mode. The only enemy of "good" in a threat model is the unknown, and Apple makes sure there's plenty of unknown factors in your iPhone.

Edit: For all HN loves to rant about the Halloween Documents, you lot seem awfully unfamiliar with the Snowden leaks...


I kind of want to turn it on and leave it on. I'm assuming since it's a "mode" that I can turn it off when I need to, do what I know is legit, then turn back on again.


The thing with Lockdown Mode is that it shifts the trade-off between functionality and security significantly away from functionality. This is an acceptable side-effect of intentionally disabling attack surface that isn't strictly required to have a useful phone. On the other hand, it also makes most social time wasting stuff not work, which is what the masses mostly use their phone for anyway.

This really is a mode designed for those who really desperately need it, and it really is implemented in a strong enough way to be useful (hardware root of trust; no drive-by changes, since changing it requires a reboot with a wiped keybag cache, so you must reauthenticate in order to change it). But all of that at consumer-attainable pricing. It doesn't have to be perfect, and I'm sure in due time there will be jailbreak-esque attacks. But until then, this is effectively a very high barrier for an attacker that lacks the resources of a nation state (or of a smart but bored teenager in a basement, these days).


> On the other hand, it also makes most social time wasting stuff not work, which is what the masses mostly use their phone for anyway.

Got any info/links explaining that? Having only read Apple's webpage, it sounds to me like the major problem is slowed-down JavaScript execution? I certainly didn't get the impression it's going to shut down all social media apps/websites?


It also disables some iMessage features, which could be classified under social, but it doesn’t seem like anything major.


> It doesn't have to be perfect and I'm sure in due time there will be jailbreak-esque attacks

No protection is perfect, and these kinds of things are always another layer in a defence-in-depth approach. Just like car locks, the idea is that it becomes enough of a hurdle that someone on a fishing expedition will go look elsewhere. Of course it won't be enough for a determined state actor.


I would assume that disabling Lockdown Mode means wiping the phone to factory condition. Otherwise Lockdown Mode is only as secure as whatever PIN or password you use to disable it, which isn't particularly secure at all.


Yes, but if an attacker has physical access and unlimited time, you've probably lost anyway.

What this seems to be focused on are the "remote zero-click/one-click" vulnerabilities we've seen, in which either a message is delivered that never shows up but installs a backdoor hook, or a website can deliver a malware package to a particular user and install the backdoor hook without notifications.

It sounds like it does improve some of the physical security features, which should help reduce attack surface, but I wouldn't trust any bit of consumer electronics against a sustained physical attack by a sufficiently motivated adversary.


Sounds to me like it's targeting all the zero- and one-click exploits we've heard about over the last few years. Not having SMS/iMessage download and "parse" random files/formats, and tightening up the JavaScript attack surface to not include JIT optimisations, would probably have helped Jamal Khashoggi and his friends/contacts.

Even with this, there's not very much you can do against a state-level actor who has physical control of your device and you, and a $5 wrench. Even without having you and being prepared to use violence, a sufficiently motivated state actor will probably get into your device anyway - Apple didn't cave to a judge when the FBI wanted them to break every iPhone user's security to get into the San Bernardino shooter's phone, but they didn't get to set a precedent there because someone else broke into that phone for the FBI anyway and they dropped the case...


Turning off Lockdown mode restarts the device but does not wipe it.


Might not be as convenient. Probably requires restarting the phone.


If you're in the habit of worrying about persistent malware on your device, "regular restarts" are one of the best things you can do.

Much of the low interaction malware is only persistent in memory, so a reboot will clear it until they get their claws back into you. Depending on what the attack path is, that may take some while - and using those attacks is still somewhat risky. "Having to re-pwn a phone every 6 hours" is a lot more risky to an attacker than "someone who never reboots their phone and never updates it."


Yes. Also, regularly doing a factory reset is another good hygiene habit to have, this will clear the more rare but persistent forms of malware, often brought on board by legitimate software you installed a long time ago but no longer use.


As soon as you enable lockdown mode in iOS 16 Beta 3 it reboots the phone


Maybe a stupid question, but wasn't Apple a part of PRISM (https://www.theguardian.com/world/2013/jun/06/us-tech-giants...)?

Are these US companies not legally obligated, through some clandestine Patriot Act-style laws, to enable backdoors - and to deny the existence of these at any cost?


>$10,000, which is enough to get someone to trick someone into installing malware on a phone.

People have done far worse for far less.


> At the end of the day, this is all good news for user privacy and security going forward.

What can I say? Good luck, then, with your "privacy and security going forward". And remember later, when they knock at your door, that it was for yours and (mostly) their security.


On the other hand, if you're a ballot trafficker, this is good news for you and the non-profits and NGOs abetting you.


At the point it puts users at more risk than not, I don't see this as a step forward; not informing users of the risk of having iCloud enabled is one example.

For more of my take on the topic, see:

https://news.ycombinator.com/item?id=32006436


> putting up $10 million isn't pocket change

10 Million = 0.0027% of Apple's sales in 2021.

Equivalent to an Apple developer who made 300K in 2021 donating 8 dollars.

If this doesn't classify as pocket change, it's quite close.


Apple has a lot of other stuff to spend money on. Pocket change adds up.


Enlightening comparison, though revenue isn't income.

If you went with net income, it would be 0.0105% of Apple's 2021 net income.

Or $31.80 of $300k instead of $8.
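Both scalings in this subthread can be reproduced in a couple of lines. Note the FY2021 totals below (revenue ≈ $365.8B, net income ≈ $94.7B) are figures I'm assuming from Apple's 10-K, not quoted in the thread, which is why the results land a hair off the $8 and $31.80 mentioned above:

```python
# Scale Apple's $10M grant down to a $300K salary, proportionally.
grant = 10e6            # the Lockdown Mode grant
salary = 300e3          # the hypothetical developer's income

revenue = 365.8e9       # assumed: Apple FY2021 net sales
net_income = 94.7e9     # assumed: Apple FY2021 net income

print(f"vs revenue:    ${grant / revenue * salary:.2f}")     # ≈ $8.20
print(f"vs net income: ${grant / net_income * salary:.2f}")  # ≈ $31.68
```

The small differences from the figures upthread come down to which exact totals you plug in.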


$300k is not the developer net income, in the example


Then it's $200K after taxes. Though now we're discounting the many things Apple can write off, and that people worth 8 figures and up can write off and get away with, compared to a person with a $300K income ($200K after taxes). It wouldn't be the same net income regardless.


Other than perhaps taxes, they would be the same. So perhaps it's a $45-$55 equivalent?


The thing is:

- You have people whose security depend on you

- You say this is a top priority and of ultimate importance (people's lives depend on it)

- You make $300K gross income over a year

- And you dedicate $50'ish for an entire year to contribute into solving the issue

It does look like pocket change to me.


Yes, I agree...I just hate to see revenue used out of context. I can sell $5 bills for $4 and rake in the revenue :)


Apple made 25 billion in profit in 2021, so the equivalent of a 300K income donating $1200 dollars.

To stave off tedium, it's still $800 at a 1/3rd tax rate. These numbers aren't pocket change any way you slice it.


Edit: this is all moot since the amount would be $80 or $120. If someone happens to think that’s too much of a stretch to call pocket change for a $300K income, then the rest of my comment still applies.

At $30000 income where someone else is making the money to pay for expenses, it’s like $10. That’s pocket change.

Or take the profit someone has left after paying for rent (isn't this equivalent to Apple paying rent for their own stores and office buildings?) and the other things that companies write off before they calculate profit or net income; a $300K income might have an equivalent profit of $90K. That's under $50. Change it to someone making a $75K income and you're at $1.

Neither of my comparisons is properly analogous, but neither is yours. Comparing a company doing billions and billions in profits with the income of an average upper-middle-class person is as incorrect as comparing what counts as negligible money between a $300K income and full-time minimum wage. The latter person likely has no money left over, and maybe goes into a bit of debt each year.


Let us not gloss over the fact that in China, Apple willingly handed over their HSMs to the CCP, granting them full control of Apple devices there, even if it means aiding in the Uyghur genocide.

When it comes down to money, or protecting the freedom or privacy of users, they will choose money. In this case the money is in good PR to help them secure more government contracts. They are playing all sides.

I do not feel anyone that needs high freedom, security, and privacy is well served by proprietary walled gardens. Particularly those that only grant holes in the walls to corrupt state actors.

https://www.nytimes.com/2021/05/17/technology/apple-china-ce...


> If you're a woman seeking an abortion in a state where it's illegal or severely restricted, you could be the target of malware from your local or state government or law enforcement. In Texas, you can sue anyone who aids and abets a woman who attempts to get an abortion for $10,000, which is enough to get someone to trick someone into installing malware on a phone.

Can we stop spreading these lies?


What is false about this?


We're ranting around about states doing explicitly illegal things for no reason. These things will never be done. They'd be thrown out by the courts the instant they were attempted.


The disconnect here is that Apple already monopolizes the devices, the service, and the application distribution platform. Now, they're expecting you to be satisfied with them monopolizing the security controls and monitoring on your phone.

We expect so little of our phones with respect to our desktops when we know full well there's no legitimate reason to do so. Particularly now, if you're imagining that one needs security against state level actors.. then the notion that a single vendor is required to simplify the ecosystem and broaden adoption is directly in conflict with this future you have declared we are now in. It's literally the weakest possible model of defense available.

This isn't the perfect being the enemy of the good.. this is Apple monopolizing yet another aspect of the platform for themselves at the cost of true innovation.


> Now, they're expecting you to be satisfied with them monopolizing the security controls and monitoring on your phone.

What is the alternative, though? That each user figures out for themselves what their security risks are, cobbles together various security-focused apps, stays up to date with new developments, etc.?


Yes. You're describing a market that's obviously ripe for innovation.

Otherwise are you suggesting it would actually be impossible for any other company than Apple to do the best job here?

If that isn't the case, and absent that market, then is there any reason to believe Apple is itself currently doing the best job?


Think about how that’s worked out in the desktop security or VPN markets: there’s a long history of outright scams, a bunch of companies which made their software worse (crammed with ads, etc.) or left their users less secure over time, and the remaining products are for most people completely interchangeable.

The average person has no meaningful way to distinguish between any of those. They all claim to be great, auditing is expensive and difficult, and most people are going to get recommendations from people they incorrectly think are experts (shoutout to the websites I had to migrate/secure after someone's "tech guy" picked GoDaddy for the bikini pictures). Even enterprise security software tends to be long on snake oil despite theoretically more knowledgeable buyers & budgets for auditing.

I think there is a solid argument that this space is not a naturally well-functioning market and is probably better with a few regulated players, similar to how we decided that the patent medicine market wasn’t good (and, yes, the regulatory failures are an important cautionary point!). People are literally staking their lives on something which has to be better than some SEO-d rathole.


And yet, we do no such thing when it comes to home and property security, financial security or medical records security. So, why when it comes to a phone which clearly has less overall value than these items, is it suddenly necessary to throw in the towel and allow an unnatural monopoly to form?

You're describing an unregulated market that the FTC and DOJ didn't seem particularly interested in policing. I would suggest that's a bigger reason for the state of the market than thinking it's a natural phenomenon endemic to this particular case.

And finally.. the giant disconnect here is that "you should worry about state level actors" but "you're too unsophisticated to do anything other than beg Apple for help." Mostly, I was trying to point out the absurdity of this position while at the same time taking a dig at Apple for their "cute friendly monopoly" tactics.


Does your phone company let you configure their spam filter? Do your medical providers let you secure their EMR systems? It sure looks like there is precedent for regulating companies to require them to provide secure services.

> Mostly, I was trying to point out the absurdity of this position while at the same time taking a dig at Apple for their "cute friendly monopoly" tactics.

Yes, and you let the desire for a quick jibe lead to oversimplification. The level of access which is needed to implement things like this also allows very powerful attacks. It’s not unsophisticated but realistic to recognize that allowing that level of access would have some benefits but would also reliably produce a large number of victims who trusted the wrong vendor. Reducing the number of parties who have to get it right to keep you secure has a significant benefit, especially if you’re familiar with the long history of companies which were acting in bad faith or compromised.


> And yet, we do no such thing when it comes to home and property security, financial security or medical records security. So, why when it comes to a phone which clearly has less overall value than these items, is it suddenly necessary to throw in the towel and allow an unnatural monopoly to form?

I think that there's a practical reason. For all your examples, the companies operating the solutions can be held to US laws and regulations. But purchasing (or downloading for free!) software from anywhere in the world cannot be regulated effectively (at all?).

So as a consumer, there is base level trust I have in companies providing me home & property security, financial security, and medical records security because they can be constrained by US laws & regulations, such as minimum standards. Not so for random software that I download for free or buy from some overseas (or basement somewhere in the US) location.


It's also a handy way to keep their stranglehold on iOS web browsers, forcing all to use WebKit. How exactly they turn off JIT compiling and allow any JavaScript to run at all, I don't really understand, and I don't know what vulnerabilities they must be aware of in Safari's engine that could lead to unsandboxed code execution (although thinking about it, this seems to prove they're aware of something inherently unsafe there). But if their claim is along the lines that all JIT compilers are vulnerable, that's a strong case for never allowing V8 or any other engine in the App Store.


> But if their claim is along the lines that all JIT compilers are vulnerable, that's a strong case for never allowing V8 or any other engine in the App Store.

I’m okay with this; I’ve always felt that dealing with the security issues of 3rd party rendering engines and JavaScript implementations is a valid reason to not allow them on iOS.

Since Apple is the platform vendor, at the end of the day, if there’s a vulnerability, it’s their responsibility, even if (in a hypothetical future) it’s Google’s or Mozilla’s JIT that allowed the malware to be installed on a user's device.

Of course, since all browsers on iOS use WebKit and JavaScript Core, they all get Lockdown protection for free.


This lockdown mode means they can support those other browsers in a non-lockdown mode. All they have to do is have lockdown mode disable all non-webkit browsers.


"Silly HN reader, you're just not seeing the big picture." Could you not?

You know what people do when they're targeted by state actors? They don't use computers. And if they have to, they air gap.


> You know what people do when they're targeted by state actors? They don't use computers. And if they have to, they air gap.

That's like saying "men who don't have easy access to condoms just stay abstinent instead". This is what we wish would happen. But empirically, they just shrug and do the insecure thing.

(There was an article posted on HN a few years ago that was from a journalist pointing out this exact thing, from his personal experience. I can't find it though.)


Ok. You’re in the Republic of Somethingistan. You’re alone. All you have is your phone to contact people at home to help you and some money and you need to get out.

You know the state is after you.

So you ignore this, turn off your phone instead, and… what? Now you’re even more alone, can’t get help from friends/family.

This seems like a very reasonable option in some situations.


It's true, NSO Group doesn't exist and none of their exploits have ever worked on anyone.


It seems like there could be a median area between "in the crosshairs of the KGB" and "I need to avoid off-the-shelf exploits in a specific situation."

A great example of this might be visiting a country like China while on business. Straight up going "off the grid" isn't really an option in that scenario.


If you have any security concerns whatsoever, it's ill-advised to bring your primary personal phone in to China, period.

They may compel all kinds of things, such as unlocking it or more.

KISS.


Or Australia. Border agents here can now compel you to hand over your phone and credentials.


This is basically saying "If you have any safety concerns with your motor vehicle, it's safer to just walk to your destination."

That's not always practical.


> A great example of this might be visiting a country like China while on business. Straight up going "off the grid" isn't really an option in that scenario.

Most corporations who know what they are doing (and some who don’t) send their execs with burner devices when traveling to certain countries on business trips.


And what software will that burner or otherwise locked down phone run?

It's not going to be a flip phone, it's going to be an iOS or Android device specially provisioned by the company's IT department for use in environments like these.

You can't get anything done on a flip phone, you can barely operate in China without WeChat/AliPay.

It wouldn't be very difficult to provision an iOS device with limited connectivity to proprietary information while still maintaining necessary operational communication and productivity. The idea here isn't to just flip Lockdown Mode on and pray that all the secret stuff on your phone doesn't get hacked, the idea is to use it as one tool of many to reduce your blast radius.


> And what software will that burner or otherwise locked down phone run?

It’s irrelevant what it runs; the point is it doesn’t have the individual’s personal data and, most importantly, access to company data.


You realise users who sit on air-gapped networks generally have a secondary device that connects to the public network. Do you think Elon airgaps his mobile?*

*Maybe he has a team that audits comms for malicious activity and payloads, but not everybody is as well resourced, so the point still stands.


Someone better let those NGOs hacked by china know right away!


Let’s not let better be the enemy of good either. Better than terrible is still bad and is nowhere near good.

It is frankly ridiculous that anybody should believe Apple when they claim to provide even minimal resistance to well-funded determined attackers. Protecting against well-funded determined attackers has been the holy grail of software security since forever and everybody in software security at least claims to be working toward that. Despite that, the prevailing state of “best-in-class” “best-practices” commercial software security is objectively terrible including Apple circa 1 year ago.

Are we supposed to believe that Apple, despite abject failure over the last few decades until as recently as the last time they announced security updates to the iPhone, has finally this time, for sure, pinky swear its true, jumped from terrible to the holy grail, or even good, because they said so?

No, this is absolute, utter, unequivocal garbage. Their claims are completely unsupported and they should be excoriated for spewing unsubstantiated bullshit that muddies the waters of the actual state of software security and misleads people into believing they are getting a meaningful degree of protection or software security.

If they want to make such claims, they should put their money where their mouth is and, instead of certifying iOS to EAL1+ and AVA_VAN.1 as they currently do, certify it in “Lockdown Mode” to EAL6-7 and AVA_VAN.5, which actually does certify protection against “high attack potential” attackers such as large organized crime and state-sponsored attackers. At the very least they could certify it to EAL5 and AVA_VAN.4, which certifies protection against “moderate attack potential” attackers. Until they do that, their claims to protect against state-sponsored attackers are complete unverifiable bullshit.


First off, calm down. This feature came out today. It's not really clear yet how well it will fare. Second, this feature is a step in the direction of Apple accepting that defending against a well-funded attacker is difficult when providing general-purpose software, so this is still a step in the right direction.


It came out today, which means it should be assumed insecure against state sponsored actors until proven otherwise with overwhelming evidence, not we should give it the benefit of the doubt because maybe they really did it the 57th time after 56 total failures.

For that matter, it is not like they could not provide such evidence even though it came out today. It has presumably been in development for some time, so if they did actually provide verifiable protection against state sponsored attackers they could just release their formal proofs of security to that effect and be done with it or at least preliminary certification evidence demonstrating protection against high attack potential attackers as outlined in the international Common Criteria standard via AVA_VAN.5.

iOS is already certified according to Common Criteria as their only advertised security certification, just at the lowest possible level, and it already has a certification for high attack potential attackers, so doing this would be consistent with their existing certification regime and provide clear evidence supporting their claims.

Absent that, I see no independent verifiable evidence of any of their claims, endless precedent to dispute their claims, and not even a token effort to provide even a sliver of objective backing for their claims.

So why should I or anybody else reject the standard wisdom of “you are screwed if state sponsored attackers are interested in you and there is no product that can help you” and instead believe Apple’s marketing that they can?


> It came out today, which means it should be assumed insecure against state sponsored actors…

What was announced today is the first version of a feature in a beta version of an operating system that won’t be released for at least 2 months from now. Chill.

I’m sure there will be the requisite white paper, statements from security experts, verification from industry groups, presentations at security conferences, etc.

In the meanwhile, from what little we know now, it seems to be heading in the right direction.


Okay, point me to a single white paper or certification that can demonstrably, reliably differentiate between products that can protect against state-sponsored attackers and products that can not, and show any Apple product that has been verified against that standard to protect against state-sponsored attackers.

I will start by pointing out such a standard, the Common Criteria, which can reliably reject systems that can not protect against state-sponsored attackers as systems such as Windows have never been able to achieve even protection against moderately skilled attackers, which is a fair assessment. Under that standard, which iOS and all other Apple products are already certified to, Apple has never once been able to achieve protection against moderately skilled attackers let alone highly skilled attackers. In fact, that very same standard declares from empirical evidence gathered over decades that it is infeasible to retrofit a system that can not protect against moderately skilled attackers to ever become able to protect against moderately skilled attackers or above.

For reference, one way of demonstrating protection against highly skilled attackers according to the Common Criteria is to subject the systems to a penetration test by the NSA with full access to source code with successful penetration constituting a failure. That is a reference point for what protecting against a state-sponsored actor looks like according to the standard.


Security is not black-and-white, it's shades of gray. This feature aims to make exploitation harder. Formal proofs and certifications are nice but what I just said remains true even in the face of such things. iOS is regularly tested in the real world against highly resourceful attackers, and the results there are far more indicative of how well its security fares than anything else could be.


> it should be assumed insecure against state sponsored actors until proven otherwise with overwhelming evidence

Everything is always insecure. Like in toxicology, it's a matter of degree.

If you're really facing state-sponsored actors, you shouldn't be using an iPhone. You probably shouldn't be using a mobile phone. But that isn't a tradeoff most people are willing to make.

Lockdown Mode existing is unequivocally better than it not. Those who would have air gapped aren't going to be tricked into using Lockdown Mode instead. Instead, those who would have reluctantly used their iPhones in normal mode and e.g. turned off location tracking will now be better protected.


Yes, and like in toxicology it matters very little if instead of injecting a spoonful of botulism you instead inject a spoonful of less dangerous anthrax. Matters of degree still care about orders of magnitude and bright lines defining fitness for purpose.

Lockdown Mode is being advertised as protecting against state-sponsored actors: “Lockdown Mode offers an extreme, optional level of security for the very few users who, because of who they are or what they do, may be personally targeted by some of the most sophisticated digital threats, such as those from NSO Group”. They are attempting to convince people who would otherwise air gap to avoid being killed that their systems are perfectly adequate. Their systems are on the order of 100x worse than what is necessary to protect against state-sponsored actors. It is not acceptable to attempt to conflate the two just because everything is a shade of gray; one is off-white and the other is off-black; they are not even remotely similar.

Apple’s advertising of Lockdown Mode is unequivocally worse for the stated use case than not having it at all, since then at the very least people at risk would not be misled into thinking Apple can protect them. If they want to change their advertising to clearly indicate that it should not be used if you are at risk of state-sponsored attacks and that there is no independent verification for any of their claims, then I would agree with you, but they are not doing that. Until they do, they should be censured for making such irresponsible and reckless claims that mislead at-risk individuals into forgoing proper precautions.


Especially as Apple is often the "well-funded attacker".


Citation, please?


I am so excited about this news. I understand that some people are pessimistic, and view it as a "giving up" on complete security against nation-states. I think that's the wrong way to analyze the situation.

The dream I have is someone making a phone that is purpose-built to be secure against state actors. Unfortunately, this makes very little economic sense, and probably won't happen (maybe if some rich person started a foundation or something?). The phone would need to have pretty restricted functionality and would not be generally appealing to mass market consumers.

As it stands, securing a mass market modern smartphone, even from just remote attacks, is just intractable. We should not bury our heads in the sand and wishfully think that if they just spend a little more money, close a few more bugs, and make the sandboxing a little better, somehow iOS 16 or Android 13 will finally be completely secure against state actors. The set of features being shipped will grow fast enough that security mitigations will not someday 'catch up'.

This is the next best thing! The more we can give users the freedom to lock down their devices, the more the vision of an actual solution comes into view. This is the first step towards perhaps our only hope of solving this someday - applying formal methods and lots of public scrutiny to a small 'trusted code base', and finally telling NSO group to fuck off.

Even this dream may not pan out, but at least we can have hope.


I would suspect any phone designed to resist a state-level actor, that is made available to me (a regular citizen) would 100% be a honeypot for a state level actor.


In fact, several phones which have been advertised as such have been honeypots run by state-level actors.


Which ones? Not challenging you, just curious.



That's crazy! Straight out of the Wire.


Australian Federal Police did it as well: https://www.theguardian.com/australia-news/2021/sep/11/insid...


Anyone big, like Samsung, LG, or Apple? I'd love to see those articles and teardowns.


Security as a service is going to be a honeypot 100% of the time.


This comment feels disingenuous to me, but maybe I'm misinterpreting. Security features are always a service, but there are real apps that provide real security. Signal and Matrix provide real encryption for communication. There are even mainstream products that do, like iMessage or Gmail, though these tend to be more selective about what is secure and what isn't (typically through walled gardens). Apple and Google both use federated learning, which is at least a step better than your typical data "anonymization." I agree that there's not enough push for serious security, especially as a default, but I'm also not pessimistic on the subject.


Signal wants your PSTN ID (= real-world ID), wants contacts from your phonebook (which on Google phones generally means already cloudified), and is itself distributed through Google Play. Further, IIRC it's US-based, so subject to acts of intervention from on high. I would be strongly suspicious of any metadata security claims, even if it nominally provides message- or session-level encryption. Metadata is bad news.


> IIRC it's US-based so subject to acts of intervention from on high.

Sure, and they have been open about what information they give[0]. If you're talking about being forced to introduce compromised code, well, I'm not aware of the US government being able to force a company to do that. Signal has said before that they'll shut down and then move if this becomes a requirement[1], and on top of that, the code is open source and constantly scrutinized by the security community. So it sounds like a pretty difficult thing to pull off.

I don't think handing your phone number to Signal is as big of a security issue as you're making it out to be.

[0] https://signal.org/bigbrother/

[1] https://www.wired.com/story/signal-earn-it-ransomware-securi...


I have a ton of concerns with Signal. They started collecting and storing user data in the cloud while being deceptive/unclear about it in their communications, leading to a ton of confusion among users. In fact, they're now storing exactly the same data that they once bragged about not being able to turn over, since at that time they weren't keeping it. Pretty much as soon as it was clear Signal was going to start keeping user data, users raised objections, asked for a way to opt out of the data collection, and brought up security concerns, but those objections were ignored.

To this day they're violating their own privacy policy because after they started storing user data in the cloud they never bothered to update the policy.

Currently it states: "Signal is designed to never collect or store any sensitive information." while in practice they store your name, your photo, your phone number, and a list of everyone you're in contact with which is pretty damn sensitive, especially if you're an activist or a whistleblower.

I've stopped using/recommending it. To this day I run into posts where people think Signal isn't collecting any user data. I hope every user who has to learn what signal is really collecting from some random internet comment thinks long and hard about what that says about how transparent and trustworthy signal is.


I recommend session now.

https://getsession.org/

It doesn't require creating an account and giving up your phone number.

They use the same Signal protocol with different trade-offs in terms of security and privacy.[0]

My only concern is they are based in Australia.

[0] https://getsession.org/session-protocol-technical-informatio...


I'll give Session a look! Right now I'm using silence for unsecured texting and Jami for secure communication, but both lack polish and going from signal to silence was rough. It really needs a search function.


> They started collecting and storing user data in the cloud

> they're now storing exactly the same data that they've bragged about not being able to turn over

Can you provide me a source on this? This is the first time I've heard of this.


> This is the first time I've heard of this.

Doesn't surprise me. You're my new example of folks still unaware.

My old one was here (none of the answers this guy got tell the truth of the situation): https://old.reddit.com/r/signal/comments/q5tlg1/what_info_do...

Here's an early discussion on the user forum: https://community.signalusers.org/t/proper-secure-value-secu...

It was a total mess with tons of posts there and on the subreddit too. Here's an example: https://old.reddit.com/r/signal/comments/htmzrr/psa_disablin...

Anyone not following all the drama at the time wouldn't have a clue, and a bunch of people who did still came away with incorrect information anyway because Signal didn't make it clear at all what they were doing and they've gone out of their way to avoid answering direct questions in a clear way ever since, instead keeping the myth that they don't collect user data alive.

There's no reason they couldn't have provided a simple opt out for the data collection and avoided the issue entirely and the fact that they wouldn't do that was red flag enough, but the mess of confusion their communications caused and their refusal to update their privacy policy should be all the evidence we need that they're not to be trusted. To be fair to the folks at Signal, they may actually be trying to communicate that very message to their users as loudly as they're legally able to.

Additional links you might not enjoy:

https://community.signalusers.org/t/dont-want-pin-dont-want-...

https://community.signalusers.org/t/can-signal-please-update...

https://community.signalusers.org/t/wiki-faq-signal-pin-svr-...

https://community.signalusers.org/t/sgx-cacheout-sgaxe-attac...


The whole cloud data collection, and the fact that their privacy policy has been verifiably incorrect for over 2 years now, certainly makes it plausible there's more they're keeping from us.


Sure. Aside from the Google phones upload contacts to cloud issue, and the encouraging contacts to be added thing, there are two clear problems: both metadata.

(1) It's the network of phone numbers - who knows who, when they added, that starts to draw a picture.

(2) If they have any infrastructure at all (update checks, contact additions, whatever) that phones home, is polled, or is otherwise contacted, particularly anything that can facilitate a network response (generating network traffic when an ID is added), then the app effectively acts as an element that can be used for identity verification even if all traffic is encrypted. This is not a small issue.

These issues are not unique to Signal, but they should not be swept under the rug. FWIW I do not claim to have read or audited their code; I just feel the use of PSTN IDs (i.e., a highly available link to personal identification) is a total farce which introduces huge risk for nearly no benefit to users, and is fundamentally incompatible with their nominal publicly stated goals (again, haven't read the official text) of end-user security, if that security is supposed to be best-effort.
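
A toy sketch of the traffic-confirmation idea in (2), with invented function names and numbers purely for illustration: the attacker repeatedly pokes an ID (e.g. adds it as a contact), watches one suspect IP, and checks whether traffic appears within a short window of every probe. Timing correlation alone links the ID to the device, even when every packet is encrypted:

```python
import random

def observe_traffic(device_online, trigger_time, jitter=0.5):
    """Simulated passive observation: a device emits a packet shortly
    after its ID is poked (e.g. a 'contact added' push notification)."""
    if not device_online:
        return []
    return [trigger_time + random.uniform(0.0, jitter)]

def confirm_identity(suspect_ip_traffic, probe_times, window=1.0):
    """Correlation check: did the suspect IP emit traffic within `window`
    seconds of every probe we injected? Repeated hits confirm the link."""
    hits = 0
    for t in probe_times:
        if any(t <= pkt <= t + window for pkt in suspect_ip_traffic):
            hits += 1
    return hits == len(probe_times)

# Attacker pokes the target ID three times and watches one IP.
probes = [10.0, 20.0, 30.0]
traffic = []
for t in probes:
    traffic += observe_traffic(device_online=True, trigger_time=t)

print(confirm_identity(traffic, probes))  # True: timing alone links ID to IP
```

Real attacks have to handle background noise and retries, but the principle is the same: any ID-triggered network response acts as an identity oracle.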


> Sure. Aside from the Google phones upload contacts to cloud issue

You can add contacts through Signal that aren't synced with Google. I've just understood this process as a way to initiate the social graph. You can just not give Signal access and start from scratch, but I don't think that accomplishes much.

Also, as far as I'm aware, Signal doesn't actually know your phone number.


The thing is, some percentage of your contacts will accidentally or knowingly grant permission for their contacts to go to Google. So by linking to that infrastructure Signal is making this problem worse, whether or not they actually facilitate the spying themselves.


I assume you're an FBI agent trying to encourage people to install your real cooler encrypted app that's not on the store and only available via sideloading.

https://nymag.com/intelligencer/2021/06/fbi-snooped-on-crimi...


Heh, nice one. Not that it's my area, but in case the above was not decodable as sarcasm to other readers, following the evidence-based / defense-in-depth strategies I'd personally recommend not using phones at all (far too little control in general) and instead recommend seeking out auditable (open source) software on actual machines you have a hope to control for secure communications. It's a deep rabbit hole with diminishing returns, though.


It's definitely tin-foil-hat level. Obviously if you're a spy you're gonna have to have next-level stuff, but most of us aren't Jason Bourne, even if we'd like to think we are.


There are a lot of bad actors in the security space. DDG, for example. Companies like Perimeter 81 I don't trust, based solely on the fact that Israel regularly and frequently acts nefariously. BitLocker replaces good drive encryption you control with something that can be unlocked by authorities. Plenty of PRISM-compromised companies offer security...


SMS and email are insecure-by-default protocols. Gmail/iMessage extend them, which necessarily creates vendor lock-in when the extension relies on some centralized service, the extensions are private, and the implementations are closed source.

Matrix fixes this, but only in the sense that it replaces the whole protocol without backward compatibility.


This comment is especially true for the majority of the VPN companies plaguing YouTube ads/sponsorships right now. It's interesting they've all pivoted more towards "get netflix content from any country" than security, and also interesting that none of the streaming services have gone after them for doing so.


Gotta trust somebody at some point? Otherwise you have to live off the grid in the woods eating squirrels and mushrooms



And yet we got Tor because it was required for national security.


TOR is no magic bullet


No, but it was a layer of security required by DoD so it was created and continues to exist.

The same need for modern communications (phones) exists.


IMO Bunnie has the technical skills and the reputation to pull it off though.

I think it has about zero chance of withstanding physical attacks, which is important to me in a phone, but it's a nice effort.


Most of the people in charge only care about which state the "bad"/"good" actors are from, so preferably "our guys" should be able to do everything, and "theirs" nothing.


Bunnie Huang is working on Betrusted [1], a communications device that is designed to be secure from state actors. The first step is Precursor (about: [2], purchase: [3]), the hardware and OS that will be the platform for the communications device.

It's designed to be secure even though it communicates via insecure wifi, for instance via tethering or at home. The CPU and most peripherals are in an FPGA with an auditable bitstream to program the device to ensure there are no back doors. Hardware and software are all open source. It has anti-tamper capability.

It looks well-thought-out.

1. https://betrusted.io/

2. https://www.bunniestudios.com/blog/?p=5921

3. https://www.crowdsupply.com/sutajio-kosagi/precursor


Unless you design the FPGA in-house and make it in your own fab, how would you know it's secure? Taiwan and Korea owe the US a lot of favors...


It's not rigorously provable, but to a large extent a "backdoored FPGA" is complete nonsense and not even worth considering.

The manufacturer/adversary knows nothing about your core design or where you'll place logic. Synthesis tools literally randomize routing and placement on each run as a natural consequence of routing being NP-hard. Further, once you add in the fact that FPGAs are often fairly high-volume goods, since the same chip is sold to thousands of different companies, it makes even less sense: now you need a backdoor that activates only on specific random designs but not on any other design in regular industry use, since an activation would lead to incorrect circuit behavior there. You'd also need this behavior to not show up under automated verification (you're running a verification suite against your chips, right??), which is nearing on science fiction. While I guess you could do something like this, it'd be wildly impractical in every sense of the word.
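
A back-of-the-envelope model of why a fixed-site implant fails against randomized place-and-route. Everything here (LUT count, site numbers, net names) is invented for illustration and this is a toy model, not real EDA tooling: the sensitive net has to land on the adversary's modified site for the backdoor to connect to anything useful, and under per-run randomization it essentially never does:

```python
import random

LUTS = 100_000            # physical LUT sites on the (toy) FPGA
BACKDOOR_SITE = 4242      # site the adversary secretly modified
TARGET_NET = "key_reg"    # net whose placement would have to land there

def place_design(nets, seed):
    """Toy place-and-route: each synthesis run maps nets to distinct LUT
    sites pseudo-randomly, as real tools do when exploring the solution
    space with different seeds."""
    rng = random.Random(seed)
    sites = rng.sample(range(LUTS), len(nets))
    return dict(zip(nets, sites))

nets = [f"net_{i}" for i in range(200)] + [TARGET_NET]
hits = sum(
    place_design(nets, seed)[TARGET_NET] == BACKDOOR_SITE
    for seed in range(1000)
)
print(hits)  # expected value is 0.01 over 1000 runs, i.e. essentially always 0
```

With 100,000 sites and 1,000 independent runs, the implant tied to one physical location is overwhelmingly likely to touch some unrelated net instead, which would just break the design and show up in verification.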


FPGAs just have a much lower essential complexity.

Adding one undocumented latch is enough to undermine an ASIC CPU. To do that to an FPGA, you'd have to know where the layout engine is putting the circuit you intend to pwn, and good luck with that staying still under any revision.

If this did become a problem, a technique analogous to memory randomization could be employed to make any given kernel unique from the hardware's perspective.


You can’t of course know, but modifying the mask of a modern chip (millions of dollars by itself), slipping those masks (you need many, one per layer of material) into production to target a subset of devices, in a way that lets you inject faults and lets you own the design the FPGA is emulating, is a nation-state level of effort. And I would imagine they would not risk it very often, if at all, due to the fallout it could cause.

A microcontroller on 130nm? Different story probably. Still crazy hard


I remember this talk at CCC two years ago. Has this device move forward? I haven't heard anything since that CCC talk.


I want deniability. After watching the videos from Ukraine of Russians pulling citizens out of cars and forcing them to unlock their phones with guns to their heads -- I want a way to hand someone a phone, unlock it, and STILL be protected. I want my private things in a volume with deniability. TrueCrypt was close.


I would pay a good premium for an iPhone with a distress code that unlocks the phone into an environment with some fake but plausible contents. Bonus points if it optionally wipes the real user partition upon entering the code.


That sort of exists, but only sort of. If you press the lock button on the side of the iPhone five consecutive times, it will then require your passcode to unlock (hopefully a high entropy passphrase), and will disable biometric authentication until unlocked with a passcode. You can set the phone to wipe after 10 failed attempts to unlock.

You can also say "Hey Siri, whose phone is this?" and your phone will lock down the same way as described above.

Of course, this doesn't protect from the $5 wrench attack, but plausible deniability only goes so far as well in a targeted attack. At least, depending on your local laws, law enforcement may not be able to compel you to provide your passphrase, but they can easily force you to use your biometric data, so this protects against that.
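
A toy sketch of the duress-code idea being asked for above (hypothetical logic, not how iOS actually works; today iOS only offers the erase-after-10-failed-attempts option): one code opens the real profile, a second "distress" code opens a decoy profile while silently destroying the real data, and repeated failures trigger a wipe:

```python
REAL_CODE = "483915"      # invented example codes
DURESS_CODE = "271828"
MAX_FAILURES = 10

class Phone:
    def __init__(self):
        self.failures = 0
        self.wiped = False

    def unlock(self, code):
        if self.wiped:
            return "wiped"
        if code == REAL_CODE:
            self.failures = 0
            return "real-profile"
        if code == DURESS_CODE:
            self.wiped = True          # silently destroy the real partition
            return "decoy-profile"     # ...while showing plausible contents
        self.failures += 1
        if self.failures >= MAX_FAILURES:
            self.wiped = True          # mirrors iOS's erase-after-10 option
        return "denied"

p = Phone()
print(p.unlock("271828"))  # decoy-profile: the attacker sees fake data
print(p.unlock("483915"))  # wiped: the real data is already gone
```

The hard part in practice is not this logic but making the decoy volume forensically indistinguishable from a real one, which is where schemes like TrueCrypt's hidden volumes aimed and where most deniability claims break down.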


Buy a second phone, that's what some do in China due to an "anti-fraud" app with slightly softer enforcement


>>The dream I have is someone making a phone that is purpose-built to be secure against state actors

I just don't see how anyone could build such a thing. State-level actors have the tools necessary to force you or your company to build in any backdoor they want, and to prevent you from ever talking about it to anyone. The US certainly does, and could just force Apple to add a backdoor to this lockdown mode, and Apple could never even hint at its existence under legal threat.


Not just the US; so do the EU, any Five Eyes country, China, Korea, Taiwan. The US doesn't have a monopoly on backdoors, so let's always remember that and not exclude others or act like it's an island of corruption in a world of benevolent state actors.


I don't think Korea or Australia have the power to force Apple to build backdoors into their products. Maybe they'd get to use the US one if they asked nicely.


Australian law requires that Apple enable a "backdoor" when issued a Technical Capability Notice.

I don't know if one has been issued. But Apple still sells devices in Australia, so I'm assuming it has complied when asked.

See https://www.techtarget.com/searchsecurity/definition/Austral... for an overview.


TCNs can only be issued to CSPs, and Apple isn't one.

There are enough issues with the spying bills introduced in the last few years without creating ones that don't exist.


Unless it was some kind of false flag to encourage trust, the US government asked less than nicely via the FBI and Apple told them to pound sand.


Or they could just add an implant at the factory.

Why anyone allows their devices to be manufactured overseas is beyond me.


Looking forward to when Apple manufactures all iPhones in Sweden. Or did you mean the US, which remains stubbornly overseas and scary to the majority of the world’s population?


I meant "not abroad".


I don't recall getting a vote. Do you even know of a single device made in a relatively "benevolent" state actor country? I would love to know. I would love it if there was a provably secure device manufactured in some remote Pacific island that has never projected itself as a malevolent international threat like 100% of the first world countries have.


We recently discovered one of our biggest geo-political enemies manufactures all our medicines. So that's crazy.


That's because you are unwilling to buy a $1500 phone when there is the same phone for $800.


Compare the price of the Librem 5 ($1,299) vs. the Librem 5 USA ($1,999).

The former is assembled in China, the latter in the US.


Might want to update those prices. Highest priced iPhone is $1,600.


I don't think the intent was to capture the price of the most-expensive model.


>Why anyone allows their devices to be manufactured overseas is beyond me

$$$$


Realistically you cannot win against a resourceful adversary every time. But merely painting the situation through the lens of premature surrender is also a disservice.

It will be interesting to see what third-party researchers discover about these new protections. Some might remember Apple rewriting the iMessage format parsers in a memory-safe language with sandboxing (BlastDoor), only for researchers to discover there was still plenty of attack surface in the unprotected parsers.


It might just be better to not rely on a phone, rather than rely on something achieving perfect security against the most malicious and capable of actors.

If I was really concerned about targeted cyber attacks against me, I think that I would exclusively use computers that I would buy from random people on Craigslist, take the hard drives out and only boot with live CDs using ram disks, and only connect via random public Wi-Fi locations.


> If I was really concerned about targeted cyber attacks against me, I think that I would exclusively use computers that I would buy from random people on Craigslist, take the hard drives out and only boot with live CDs using ram disks, and only connect via random public Wi-Fi locations.

Excellent precautions if you live and work in average middle-class suburbia and never go anywhere or do anything dangerous, controversial, or politically unpopular.

Lockdown Mode is not for you. It's for other people with different lives.


My point is lockdown mode won't be good enough. Which is why there is still a big bounty for it. And those wouldn't be excellent precautions if you weren't doing anything dangerous, because they would be a huge burden over just operating normally above board.

How exactly does this method stop working in cities? You could have provided some content instead of a weirdly vitriolic dismissal.


The parent was simply explaining that lockdown is not intended for a person who buys computers from Craigslist in order to enforce security.

Your mitigation is not a mitigation against being singly targeted. There are so many attack vectors in a computer outside of the boot disk. The computers sold on Craigslist should not be considered secure, since there is no level of trust in the supply chain or the state of the hardware.

For ex: If you are being directly targeted, a nation-state can purchase the computers from your local Craigslist, rewrite their bios, and list them for you to purchase. Then flood Craigslist with 100 other compromised machines.


Sure, they can do that, if they know what you're actually doing, and if you just do the same thing stupidly on repeat in the same area.

All of that certainly sounds much more involved than sending a zero-day zero-click iMessage to the well known phone number of a dissident.


I was explaining why your use case of purchasing computers from craigslist does not secure against nation-state targeted attacks. Now you are changing the conversation and saying there are other ways to attack. Of course there are many other attack vectors. I mentioned that, however the conversation was about the true level of security provided by your mitigation.


I'm not changing the conversation, I'm pointing out the simple, currently-used-against-dissident attacks that are not possible if there isn't a clear connection between dissident and device. It certainly provides pretty good protection compared to having an always connected device with a unique ID carried on you at all times. Security is oftentimes about making reasonable tradeoffs based on your risk levels.

And I think you may be overestimating even the resources and capabilities of nations.

Let's say you lived in Philadelphia. You could drive down to Baltimore or up to NYC in 90 minutes. Within that range, there are literally over 10,000 individuals selling 1 or more laptops on craigslist and other sites that I did a cursory search over. And that's not even counting all of the small mom and pop shops that are selling laptops, as well as the big box stores.

How should the adversary state figure out which of those people you're going to purchase from? Should they purchase literally every laptop in the region? Okay then...what about when people start selling more laptops they had in storage because the market is red hot?

What do they even do when they have the laptops? Do they have exploits for every BIOS for every type of laptop for the past 15 years? How do they sell the laptop to me? Do they have their agents sell them? Do they have hundreds of agents who are deep undercover in America, who could lure me in?

I just don't see "buy every laptop in a region, exploit it, and resell it, hope your target picks one up" as a viable strategy, even for the wealthiest of nations, assuming you need to do it discreetly.


This is a fantasy that could only come from someone who doesn't actually need it. The people who actually need Lockdown Mode-- dissidents, organizers, journalists, etc.-- also actually need to communicate with normal people, and that means having a phone. If you're so unimportant that you can get away with your proposed computing scheme, you're not going to be the recipient of targeted cyber-attacks.


Well, I don't need it, but the people who do need it usually don't have much of a clue about infosec or cyber security.

What means of communication are available to you via a phone but not via an internet connected computer?

There isn't even anything intrinsically wrong with a cell phone, other than the fact that it encourages you to carry it everywhere and merge all communications with everyone onto a single device that is default connected to the internet.


>>"...a "giving up" on complete security against nation-states...

DEFINE:

State Actors: [0]

As one who is acting on "behalf" of a government...

What if said government was actually an arm of the corporate entities as the state ACTING at their behest?

Crazy, I know.

[0] https://en.wikipedia.org/wiki/State_actor


The dream I have is that they do not sack us with taxes that later on they use to violate our rights.

First thing is to remove a lot of the economic and legislative power states have; hardening security in devices is also good news. But the problem is that they have so much money and power that they can misspend it to target people and violate their rights just because they can.


The potential a phone like that would have, if you explained to people how states can and do put their noses into their lives, is quite big IMHO. It is just that people have no idea how much can be taken from them through a phone.


In general, I'm much more concerned with private actors than state actors. I'm aware of multiple ways in which companies use information to try to extract money from me, and they actively make my life worse in the attempt.

I have a much harder time thinking about how giving states access to my information has been harmful for me. I can think of potential harms, if the state started doing religious or ethnic persecution(not trying to diminish the chance of this, but not a problem today) so I'm aware of potential threats. But other than that... What exactly should I be worried about?


The problem is what you say: political and religious persecution. Not today, but hey, who knows when, given the right situation. So it is better that they cannot have our data, right? I mean, that is the safe side of the fence.

I am concerned about any actor, as well, but take into account that a state has a huge amount of resources and if they are motivated enough can make your life worse than almost any private actor. It has happened in history, this is not something fictional.

The reach at which a private actor can harm you is much more limited in the general case IMHO.


The problem in 90% of cases is the user themselves. Advanced attacks such as spyware-for-hire with zero-days only affect a minority of users. For the vast majority, the vulnerabilities are much simpler: password reuse/carelessness, malware on other devices (laptop, etc.) that also have access to their data, willingly sharing too much information, etc.

You don't need a special phone or hardened OS to defend against that, and users vulnerable to this will remain just as vulnerable regardless of how much hardening there is.


Most people couldn’t grasp the important ramifications even if you walked them through it from first principles. I’m not sure I can despite being very interested in information entropy my whole life.

A lot of people really don’t understand much at all about anything that they don’t constantly see and touch their whole lives. A lot of people truly just live in the moment constantly and use their higher order thinking for social navigation and sex.


>The dream I have is someone making a phone that is purpose-built to be secure against state actors

Here you go: https://puri.sm/products/librem-5.

FAQ: https://source.puri.sm/Librem5/community-wiki/-/wikis/Freque....


I feel like the closest you can come to the dream of a phone that is secure against state actors today would be a google pixel phone running graphene os.


With this announcement, Apple is saying "we will protect you from state actors", which is a role usually performed by states. In effect, Apple is saying: "we operate at the same level as nation states; we are a nation-state-level entity operating in the digital world." It's a flag-raise.

It's the first such flag-raise I've seen. Security researchers talk about protections from state actors all the time, and there are tools which support that... but this is the first public announcement, and tool, from a corporation with more spare, unrestricted capital than many countries. It comes at a time when multiple nation states are competing for energy and food security; and Apple are throwing up a flag for a security-security fight (or maybe data-security). This is not just handy tech, it's full-on cultural zeitgeist stuff. Amazing.


There's a bit of a journey from "protecting you against government hackers and spooks" to full-on sovereign states; and there's a lot of things that a country's government funds that Apple couldn't even begin to take on[0]. Physical security and military operations are a hell of a different field from that of locking down computers.

Furthermore this isn't the first of its kind; Google has been alerting high-risk Gmail users about state-sponsored hacking for about a decade now. Microsoft probably does something similar. Apple is comparatively late to the party on this. On the offensive side you have the zero-day vendors that broker exploits between hackers and the government.

A better explanation is that Apple isn't supplanting the US government. It's supplanting Halliburton. As more and more people and things go online, hacking and doxxing them is becoming more militarily valuable than just arresting someone or firing a missile. After all, physical attacks risk counterattacks and escalation, but Internet attacks are relatively cheap, not really treated as an attack by many sovereign states, and, most importantly, difficult to attribute.

[0] Call me when Apple black-bags Louis Rossman for illegally repairing MacBooks, or threatens literal nuclear war - like, with uranium bombs and radioactive fallout - on the EU for breaking the App Store business model.


> Furthermore this isn't the first of its kind; Google has been alerting high-risk Gmail users about state-sponsored hacking for about a decade now. Microsoft probably does something similar.

It’s great that Google alerted Gmail users, but then what?

“We believe you may be a target of a state-sponsored attacker; have a nice day.”

Beyond just telling you, Apple is providing some tools to do something about it.


I'm not a big supporter of Google in general, but they don't just notify you. They offer to enrol you in their Advanced Protection Program: https://support.google.com/a/answer/9378686?hl=en


Google advanced protection mode has been available for a while.

The threat models are different because the companies provide different services (spear phishing defenses from the web services company, hardware defences from the hardware provider), but still.


Apple doesn’t have to literally have an army and a bureaucracy to rival a government. They just need enough flex. And they do!


I've always thought that the companies coded the "zero day exploits" in, and then sold them for profit.


I'm not saying it never happens, and I don't want to assume anything about your background, but I think most people who work in software would agree there's no need. Plenty of problems get in on their own.


Yep, if that were your goal it would be way more cost-effective to get a zero-day by just not trying that hard with security practices: not having any security knowledge on the team, not patching/upgrading dependencies with security bugs.


And then you have plausible deniability! I think we're hitting on a new business model here...


RSA weaker key set to default perhaps?


It doesn't make sense from a numbers perspective; there's simply not that much potential for profit there. In general, the sale price of a zero-day or ten in some popular product is tiny compared to, for example, the marketing budget of that product.

That money is significant from the perspective of a particular employee (i.e. if they personally would get the money) or for a specialized consulting company, but it's a drop in the ocean for the large companies actually making the products. So we should expect some backdoors intentionally placed by rogue employees (either for financial motivation or at the behest of some government) but not knowingly placed by the organizations - unless in cooperation with their host government, not for financial reasons.


Unlikely.

I do suspect the number of 0days which were deliberately added by plants from Five Eyes or elsewhere is not zero.


>Apple is saying "we operate at the same level as nation states; we are a nation-state level entity operating in the "digital world"

Making mountains out of molehills.

I'm pretty sure they are saying that they will "offer specialized additional protection to users who may be at risk of highly targeted cyberattacks from private companies developing state-sponsored mercenary spyware".

There is a looooong list of things which nation states can do which Apple cannot, some examples of that are in other comments in this thread.

>but this is the first public announcement, and tool, from a corporation with more spare, unrestricted capital than many countries.

Google & Microsoft have both had fairly long-standing tools and procedures (which were publicly announced) to both alert users and aid users against nation state attacks.


Google's Advanced Protection program is the same: https://landing.google.com/advancedprotection/


Apple also started alerting people being targeted by state actors last year [1].

[1]: “About Apple threat notifications and protecting against state-sponsored attacks” https://support.apple.com/en-us/HT212960


The NSO Group, whom Apple specifically cites as an opponent that inspired this work, is a private corporation. They sell to governments, but so does Apple.

The relationship between state and private industry has never been binary and has always had features like this. I don't think this is a "Jennifer Government" type scenario.


NSO is pseudo-private, kind of like Raytheon.


Raytheon (NYSE:RTX) is a publicly traded corporation.


My point is not public vs private structure, it’s that Raytheon is essentially a government entity at this point


At the same time, if that state actor happens to be China, Apple will just give the government access to your iCloud data. Not all state actors are equally within Apple's striking range.


It is worth mentioning that things like National Security Letters exist in the US. It is also the US who made Apple back off of encrypting iCloud backups E2E.

I wish we were more willing to cite our own government(s) as the bad actors here, rather than pretending that we have to reach for China/Russia/North Korea to find the kind of behavior Apple is attempting to protect its users against here.


Not to mention the CLOUD (Clarifying Lawful Overseas Use of Data) Act, which was enacted following a case in 2014 where Microsoft refused to hand over emails stored in the EU (an Irish data centre, in that case) on foot of a domestic US warrant.

The CLOUD Act expressly brings data stored by US-based companies anywhere in the world under the purview of US warrants and subpoenas.

https://en.wikipedia.org/wiki/CLOUD_Act


This has always been the law. Common law courts have been issuing court orders that require you to take actions in foreign countries, even in violation of foreign law, for as long as it's been a legal question. The CLOUD Act actually introduced some additional safeguards and allows judges to consider the seriousness of the foreign law violation and weigh it against the importance of the court getting access to the foreign-stored data.

You unfortunately need something like this because otherwise people will just hide documents, money, stolen property, etc. in foreign countries out of reach of US courts, even if they are US persons and corporations.

It isn't just pro-government. Imagine you are a criminal defendant and there is evidence proving your innocence in a foreign server controlled by an American person or company. This rule makes sure you can legally compel that entity to go get the data, the laws of that other country be damned, so you can present your defense.


While extra-territoriality is not a new concept, it’s absolutely false to say that the CLOUD Act didn’t grant sweeping new powers to US courts. That’s a truly absurd claim that makes me question whether you’re commenting in good faith.


It was passed because in the Microsoft v. US case, the Supreme Court was expected to affirm the long-standing law on this: that in response to a U.S. court order, Microsoft had to hand over user data from Irish servers, Irish law be damned.

Such a blunt rule was considered a little too harsh, and a potential source of international problems, so Congress passed a law softening the rule and allowing judges more discretion in considering the burdens of complying with the order. The law had the effect of making the Supreme Court case moot.

Sorry that the truth is more nuanced than you’d like it to be.


There is nuance, but in the opposite direction. Microsoft did not adhere to the original court order, and fought it to the supreme court, where it was undecided when the CLOUD Act came into force and a new warrant was issued for the data held in Ireland.

It is unambiguously an expansion of Government powers. You're the first and only person I've ever come across who has argued the opposite. It's such a ridiculous thing to write that I am wondering if you're trolling me?


>There is nuance, but in the opposite direction. Microsoft did not adhere to the original court order, and fought it to the supreme court, where it was undecided when the CLOUD Act came into force and a new warrant was issued for the data held in Ireland.

What part of this do you think is incompatible with the fact that almost everyone expected Microsoft to lose the case?

And in fact, Microsoft, Apple, and Google lobbied for the CLOUD Act.

So maybe instead of accusing people of bad faith, you should have a little humility and open-mindedness to improving your understanding of the world. Believe it or not, techie discussion forums and Wired are not reliable sources of legal information, so that would explain why you're so misinformed.


> you should have a little humility and open-mindedness to improving your understanding of the world

If this is trolling, I applaud your creativity. If not, I'm in awe of the irony.


How well does this play out with things like GDPR? I can only find one sentence about it but this seems like a direct conflict.

Who wins? The USA, the EU, no one, everyone?


It's part of the reason that Privacy Shield collapsed and why the US isn't considered to offer adequate protection to EU residents. It's currently being both litigated (as more and more EU country data protection agencies make individual rulings that specific instances of transfers of personal data to US companies are unlawful) and the subject of intense political negotiation between the EU and US.

Most companies affected are currently awaiting the results of these processes, because following the current precedent to its logical conclusion, it appears unlawful to transfer any personal data of an EU resident to a US-based company (even if that data remains physically in the EU or another adequate country). That would obviously have catastrophic consequences for the current status quo, so it's hard to believe that a compromise won't be found to avoid it.

However, it's also hard to see a compromise unless the United States exempts EU data subjects from the CLOUD Act, which seem unlikely. Hard to know where it'll go.


> However, it's also hard to see a compromise unless the United States exempts EU data subjects from the CLOUD Act, which seem unlikely. Hard to know where it'll go.

Bureaucrats are capable of breathtaking sophistry when it makes their jobs easier. If red was illegal but convenient they’d make a policy that red was actually green and argue it was until they were blue in the face.


It's not entirely clear yet who wins, but the current issues with Google Analytics in the EU seem to be partially related. Some countries have come to the conclusion that GA can't be legal if Google US has access to the data.


USA cloud services are not GDPR compliant:

https://nextcloud.com/blog/the-new-transatlantic-data-privac...


> It is also the US who made Apple back off of encrypting iCloud backups E2E.

I think it's maths preventing e2e backup.

E2E supports sending messages to known devices.

Backups need to support unknown devices in order to restore to your new device when all your existing devices are lost or broken.


> I think it's maths preventing e2e backup.

Maths and common sense. If you back up encrypted data and don’t back up the keys it’s not much of a backup.


If you opt-in to iCloud, you're opting in to a lot of state-level security risk in any country (and this is true of any commercial cloud).


Nothing stops Apple from offering e2ee backups, and in fact they do this for certain data backed up to iCloud (health data for example.)

But your iMessage data...well, there your ass is hanging out in the breeze. In fact, I'm not sure it's possible to log into an iPhone with your Apple ID and not have an iCloud backup immediately fire off, which means your private encryption keys hit iCloud and stay there until purged according to their data retention policies. And we have no idea what those policies actually are; those keys may end up stored forever.


> Nothing stops Apple from offering e2ee backups

The US Government pressured them to drop a plan for fully encrypted cloud backups.

>Apple dropped plan for encrypting backups after the FBI complained

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

If you want a fully encrypted backup of your device, you have to make it to your local Mac or Windows computer.


> Nothing stops Apple from offering e2ee backups, and in fact they do this for certain data backed up to iCloud (health data for example.)

Almost all users can't handle this; to support people, you need to be able to recover their account when they've lost every single password and proof of identity they possibly can. It's not a backup if you can't restore it.


> I'm not sure it's possible to log into an iPhone with your Apple ID and not have an iCloud backup immediately fire off

Yes, it absolutely is possible. I have never turned on iCloud backup so I have no cloud backups of any of my phones or other devices.


> In fact, I'm not sure it's possible to log into an iPhone with your Apple ID and not have an iCloud backup immediately fire off

You are correct that there’s a bit of a dark pattern going on here, but it is possible (to the extent the code does what it says, of course). To be extra sure, I have a custom lockdown MDM profile that disallows iCloud backups, as well as a number of other nefarious things like analytics. Whenever I get a new device, I first DFU restore it to the latest iOS image to ensure the software (post bootrom) isn’t tampered with, then activate it and install the MDM profile via a Mac, and only then do I interact with the device and go through setup.
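For the curious, a restrictions profile like that is just a plist. A rough sketch generating one with Python's plistlib: `allowCloudBackup` and `allowDiagnosticSubmission` are documented MDM restriction keys (some require a supervised device), while the identifiers, UUIDs, and display name below are placeholders:

```python
import plistlib

# Restrictions payload (PayloadType com.apple.applicationaccess).
# allowCloudBackup / allowDiagnosticSubmission are documented MDM
# restriction keys; some restrictions only apply on supervised devices.
restrictions = {
    "PayloadType": "com.apple.applicationaccess",
    "PayloadIdentifier": "example.restrictions",  # placeholder
    "PayloadUUID": "00000000-0000-0000-0000-000000000001",  # placeholder
    "PayloadVersion": 1,
    "allowCloudBackup": False,           # disallow iCloud backups
    "allowDiagnosticSubmission": False,  # disallow analytics upload
}

profile = {
    "PayloadType": "Configuration",
    "PayloadIdentifier": "example.lockdown-profile",  # placeholder
    "PayloadUUID": "00000000-0000-0000-0000-000000000002",  # placeholder
    "PayloadVersion": 1,
    "PayloadDisplayName": "Custom lockdown",  # placeholder
    "PayloadContent": [restrictions],
}

# Write the profile; install it via a Mac or an MDM server.
with open("lockdown.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)
```

A real profile would typically also be signed before deployment; this sketch just shows the payload structure.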


We have seen reports that apple can remotely enable icloud backups and then trigger a backup.


This doesn't sound plausible in the slightest.

The only persistent connection Apple has that I can think of to implement such a concept is for push notifications. It would be a massive security hole if an HTTP response to that daemon were capable of bypassing the lock screen, secure enclave, etc.

And the logical question is if they had such a system why would they bother triggering an iCloud Backup when they could ask the device to specifically hand over certain information e.g. Messages. Which at least could be done quietly over Cellular.


> Which would be a massive security hole if a HTTP response to that daemon was capable of bypassing the lock screen, secure enclave etc.

I mean, Apple has killswitches for every iPhone they ship. I wouldn't be the least bit surprised if that suite of tools also included settings management (MacOS has such a thing built-in, fwiw).


Do you have more info about this?


Source? iCloud backups can only be triggered via your passcode which is secured against the secure enclave.


> Apple will just give the government access to your iCloud data

"You" only means you if you're a Chinese citizen.


resident


Yes, this is Apple protecting you against extralegal state actor threats. There's not really much Apple can do to protect you against the laws of your own country.


I mean, since your phone was made there by a Chinese company, what's to stop the government from just forcing a backdoor in at the factory?


and if the state actor happens to be the US? which of these tech companies do you expect to look after you then?


What makes you think so?


Because they are complying with Chinese laws regarding data localization in the country and have been known to work with China (recently YMTC chip deal, previously in a major unreported deal that was unearthed a little while ago) in order to get market access.

https://www.reuters.com/article/us-china-apple-icloud-insigh...

https://www.forbes.com/sites/roslynlayton/2022/06/08/silicon...

https://www.theinformation.com/articles/facing-hostile-chine...


How is this different than Microsoft Azure?

Microsoft handed over control of Azure in China to a Chinese company years ago.


"Apple is moving some of the personal data of Chinese customers to a data center in Guiyang that is owned and operated by the Chinese government. State employees physically manage the facility and servers and have direct access to the data stored there; Apple has already abandoned encryption in China due to state limitations that render it ineffective."

https://www.cpomagazine.com/data-privacy/icloud-data-turned-...


I really dislike that there is so much social control :( In theory it is to protect you. In practice it can be and is misused in so many ways that it should not even be allowed without a judge's authorization.


You're kind of missing the point. The Chinese government has unlimited social control. Even if there was some sort of written law in China requiring judicial oversight, that wouldn't limit social control because the judiciary is just a rubber stamp.


Apple has abandoned encryption for everyone in iCloud. You cannot encrypt anything except a limited subset of your device's data (Apple Health data, mostly.)


In Apple's defense E2E encryption also makes it a lot easier to get locked out of your photos and device backups.

IMHO it should still be an option but only as part of Lockdown Mode, with the explicit caveat that turning it on risks losing data.


That may be true, but Reuters reported that Apple had a plan for it (which means they felt it was workable) and dropped it due to pressure from FBI/DOJ.

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

Also, there are many users who would benefit from e2ee iCloud backups who are not targets of NSO Group-type attacks, so I don't think it makes sense to make it only available in "Lockdown Mode".


I was all prepared to answer this with "so Reuters reporting something makes it true?", only to discover that, in fact, Reuters reported no such thing.

Reuters makes two claims:

1) The FBI talked to Apple (duh)

2) An unannounced plan to implement fully E2EE backups was no longer discussed with the FBI at their next meeting

Both of those things might be true! Reuters isn't known for just making stuff like this up, like, say Bloomberg, but the article specifically says:

"When Apple spoke privately to the FBI about its work on phone security the following year, the end-to-end encryption plan had been dropped, according to the six sources. Reuters could not determine why exactly Apple dropped the plan."

So we've got an unannounced product, which the FBI didn't like, which Apple stopped talking to the FBI about (according to some leakers at the FBI).

This does not add up to "Apple dropped plans due to pressure from [the] FBI/DOJ". It adds up to "secretive company discusses plans with secretive agency, and some stuff about that conversation leaked".


I would suggest that if you're doing anything illegal in the country you're staying in, turn off iCloud sync at the least; better still, don't use an iPhone at all, but an Android phone running an open-source operating system like GrapheneOS.


> In Apple's defense E2E encryption also makes it a lot easier to get locked out of your photos and device backups.

This is likely the real reason E2E hasn't been done yet. I would wager Apple deals with orders of magnitude more people who are locked out of their phones than the number impacted by the lack of E2E backups. Trusted recovery contact added in the last iOS version is a step in a direction of providing some way to implement E2E, and still give people a way to recover.



Definitely very interesting. I know Google has their “Advanced Protection Program”[0] with a Titan security key, which is similar. It is interesting considering that Google’s protections target the user as the weak link, since your data lives on their hardware, while Apple is obviously targeting both the user and the hardware the user holds. I’m curious what security researchers will think of this: whether it’s more theater or an innovative attempt at giving advanced privacy to people who need it. Despite their past stumbles (e.g., CSAM), it seems like Apple is genuinely in the privacy fight, even if it is just for their bottom line.

[0]: https://landing.google.com/advancedprotection/faq/


"About Apple threat notifications and protecting against state-sponsored attacks": https://support.apple.com/en-us/HT212960


Microsoft has a “Democracy Forward” team (previously called “Defending Democracy”) that aims to protect government officials and systems from adversarial state actors. It’s been ongoing for a few years now.

https://www.microsoft.com/en-us/corporate-responsibility/dem...


Given their track record, I'd trust Microsoft approximately 0% to secure my critical/sensitive systems. The funny thing is that the U.S. government does, in fact, trust them.


I think you're letting the reality distortion field get to your head. They're creating a safe mode for iPhones because a lot of features are complex/intricate enough that they are perennial sources of vulnerabilities (and/or UX flaws that lead users to make unsafe decisions).

That is, they're turning features off for security. Something every IT department has been doing for decades. Windows supports this. Mac OS supports this. In fact, iOS was kind of notable in being so unconfigurable. The settings available in their MDM implementation were pitiful and didn't let admins disable many of these features.


One difference is that Apple is actually quite good at understanding which features get exploited in zero-click attacks and IT departments are not.


> Apple is saying "we operate at the same level as nation states; we are a nation-state level entity operating in the "digital world"

Apple's profits are bigger than my country's (Slovenia) whole GDP. You bet your butt they're a state level actor in the digital world. They have more resources than many countries.

If Apple was a country, their $365bn in revenue would make them the 43rd richest country in the world right after Hong Kong.

https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nomi...


This also points out how the increasing costs of technology and economies of scale mean that small countries like Slovenia are no longer viable on their own. The only way they will be able to survive the next few decades and avoid turning into failed states is to surrender most of their sovereignty to larger regional alliances.


And if you computed the per-capita GDP?


Hard to compute because contractors don't count towards Apple's official headcount. It comes out to about $2.5M/employee using Wikipedia numbers.

GDP per capita for Slovenia is $25,179 in comparison. 100x less.

For Hong kong, which makes a bit more GDP than Apple does revenue, the per capita number is $46,323. 50x less than Apple.
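The back-of-the-envelope comparison above is easy to check. All the inputs below are the rough figures quoted in this thread, not audited data, and the ~147,000 headcount is an assumption implied by the $2.5M-per-employee figure:

```python
# Rough revenue-per-head comparison. Every input is an approximate
# figure from the thread above; the headcount is an assumption.
apple_revenue = 365e9        # Apple annual revenue, USD
apple_employees = 147_000    # rough headcount, excluding contractors

revenue_per_employee = apple_revenue / apple_employees
print(f"Apple revenue per employee: ${revenue_per_employee:,.0f}")

slovenia_gdp_per_capita = 25_179   # USD, per Wikipedia
hong_kong_gdp_per_capita = 46_323  # USD, per Wikipedia

print(f"vs. Slovenia:  {revenue_per_employee / slovenia_gdp_per_capita:.0f}x")
print(f"vs. Hong Kong: {revenue_per_employee / hong_kong_gdp_per_capita:.0f}x")
```

Which lands close to the ~100x and ~50x ratios quoted above (with the usual caveat that revenue per employee and GDP per capita measure different things).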


Also silly to compare because a proper nation-state does more than develop products and services for profit. Social contract and all that.


My understanding is that the "social contract" inside many of these large companies is quite cushy. Especially in USA where being employed comes with services traditionally provided by the state like health care, child care, free or subsidized food, retirement benefits, etc.


It's not especially comparable to what an actual government has to deal with though. It's superficially similar I guess.


It also doesn’t compare to obligations of countries that fund retirement, welfare, public health, or similar


> Apple are saying "we will protect you from state actors", which is a role usually performed by states

Not to sound flippant, but defense attorneys do this, too. I don't think it's as big a zeitgeist as you think


Good to know.

I think Apple's announcement (and, as I've learned from this thread, MS's and Google's similar programmes) represents a significant step-change. A single defense attorney performs this action on a case-by-case basis, and they earn "single human" levels of income from it. They (to some degree) use that money to make themselves comfortable and perhaps share it with charities and make investments. All the defense attorneys in the world combined still, probably, have access to a fraction of Apple's budget, and a fraction of Apple's audience. Defense attorneys don't always win all their cases.

Apple have the kind of money that makes 1000s of attorneys envious. Apple use that money to make infrastructure and client devices and then sell/share that technology with billions of people. Most of Apple's customers buy their phones on loan agreements over some contract time-frame. It's "cheap", and the protections are automated.

I'm tired and getting rambly about this now, but I intuitively feel like the combination of state-level power (albeit exercised with a very narrow focus), and the way so many people live their (digital) lives interacting with a "noosphere" that crosses international borders, are facets of a complex phenomenon we have not witnessed before, and which will merge with other related facets and then emerge as something really different. I accept I'm getting very fuzzy in my thinking here. I'm leaning into my inner sci-fi author (who hasn't come out for 30 years and wasn't too talented when he did).


Excellent points.

I don’t think it has sunk in for most HN readers that every iPhone since iPhone 8, released in 2017, can upgrade to iOS 16 and get Lockdown Mode.

Apple is providing some protection against state-level actors at scale--hundreds of millions of devices. That is a step change!


Google has been dealing with nation state actors targeting its users (Gmail specifically) for a decade now. They have Advanced Protection program. We actually regularly used to hear about how human rights activists were targeted in spear phishing campaigns and then arrested.

https://landing.google.com/advancedprotection/


Apple blocking a few features means it's now operating as a nation state.

Tell me it's a Hacker News comment without telling me it's a Hacker News comment.


agreed, the rise of the corporation as the most powerful institution (above the nation-state) in this new budding global civilization is a long time coming.

on the other hand, this is how democracy dies. what structures (systems) exist to prevent apple (and other comparable corporations) from being an oppressive force against human persons? moreover, what incentives do they have?


I can think of a few, at least applicable in the USA:

Apple doesn't have a military or police force with jurisdiction over me. They don't have the legal power to arrest me or throw me into prisons, which they also don't have. I don't have to pay taxes to Apple. I don't have to do business with them or interact with them in any way if I don't want to. I don't need Apple's permission to do anything unrelated to their product lines.

Same is true for any megacorporation. It's a big stretch to say they are even remotely as powerful as nation-states, let alone more powerful.


Yes, the state's monopoly on force is to me what truly differentiates them into a different category of power than a corporation. Also international recognition for nation states and being able to have treaties and the like, but really its the monopoly on use of force. That said, I think the rise of charter cities (think of an SEZ on steroids run by a private corporation) will blur the lines further, although most proposals I've seen for charter cities leave policing to the locality they're residing in.


Mandatory taxes, interest rates, printing money… nation states have a lot of power.


> interest rates, printing money

Many nation states don't have control over interest rates (because their central banks are run independently of the government) or even the ability to print money, if they have adopted another currency.[0]

> Mandatory taxes

States typically tax transactions which happen on their territory (e.g. wages and sales), and in the case of Apple, their devices are their territory, like feudally controlled tracts of land in cyberspace. Taking a cut of all app sales and in-app purchases seems very much like a tax under this analogy.

[0] https://en.wikipedia.org/wiki/Currency_substitution


>Many nation states don't have control over interest rates

And many others do. The State can abdicate such power and it usually does in stable economies where markets can self regulate. Given a big enough crisis, however, and the State will usually take that power back.

>or even the ability to print money, if they have adopted another currency.

Usually in cases of near total State bankruptcy

>Taking a cut of all app sales and in-app purchases seems very much like a tax under this analogy.

That's an interesting take.


> I don't have to do business with them or interact with them in any way if I don't want to. I don't need Apple's permission to do anything unrelated to their product lines... Same is true for any megacorporation

Nope. You can avoid buying an iPhone, but you cannot escape Google. I'm often forced to "do business" with Google. I've seen several government websites that require code hosted on Google's servers. I need Google's permission to do all kinds of things unrelated to their services (reCAPTCHA), and Google will track everywhere you go online even if you never use any of their services. Facebook also doesn't give you any option. They'll create a profile for you and start collecting data on you even if you've never created an account. You could argue that you pay these companies taxes in the form of your data rather than money, or that the fees they charge developers drive up consumer prices (acting as a tax on the purchases), and I suspect that should Apple Pay/Google Pay become more commonplace they will start charging a fee (tax) for that as well. Nothing stops them from doing it.

Some corporations even have their own literal armies (Blackwater/Xe/Academi), but others don't bother because they have the ability to command the police and military wherever they are. The RIAA have their own "swat" team. They participate directly in raids breaking down doors and handling evidence.

Companies like Apple and Google are far more invasive than police watching everything you do, listening to everything you say, recording every person you're in contact with. They censor and ban with impunity. If they really wanted to, they could plant data on your devices that would get you arrested and thrown in prison in any country around the globe.

corporations might not yet be as powerful as a nation state, but they're a lot closer than you give them credit for, and they likely have more direct influence on your day to day life and what happens to you.


No, they're nowhere close to being a nation state. Those spheres of power are nothing compared to something like the British East India Company, which had a currency, an army, and forcefully controlled almost 2 million sq. km. of Asia.

Captchas are definitely worthy of criticism, but they are not remotely on the same level as forcefully controlling the land under someone's feet.


To be fair, banks have been more powerful than a lot of nation-states for awhile, and religious entities before that.


The religious entities I get the argument but what banks have been more powerful than nation states?


The Knights Templar were a religious organisation, but also a quasi-banking institution in Europe; they took and protected deposits of gold, and issued 'cheques' allowing, for example, travellers to deposit gold in London and spend the money in Southern Europe. They were dissolved because they were beginning to rival the Papacy and nations in power due to their immense wealth.

Also, few know this, but many African slaves who were victims of the slave trade became slaves due to debt-slavery (though this didn't involve formal banks). I've seen estimates of up to 25% of slaves back then having been debt-slaves.


Yes! I had heard a bit about the Knights Templar, I guess I would have categorized them as religious first, financial/governance functions second. But also the Order of Malta had quite a lot of power, to the point I believe that it is still recognized by the UN!

I hadn't realized that about African slaves; debt for what?

https://www.un.int/orderofmalta/about#:~:text=in%20your%20br....


The ones that only service other banks, hence only people working in higher-level banking are likely to have heard of them--e.g., the Bank for International Settlements.

I only found out about this bank because the former president of the Mexican central bank--Mr. Carstens--left the central banking gig to go to that bank.


From quickly reading their Wikipedia page, it sounds like the BIS has a similar function to, say, the IMF when it comes to financial-system stability. I do agree these sorts of organizations exert huge amounts of influence, especially over smaller countries that are dependent on loans and outside financing, but I'm not sure I agree they are more powerful than a nation itself. A nation can (theoretically) decide to opt out of these systems and operate independently, or can play different parties funded by nations (because in the end they are all working for someone's agenda) off of one another, as many countries did during the Cold War between the U.S. and the Soviet Union. But if a nation reneges on its debt, the BIS, IMF, etc. isn't going to invade your country--one of its creditor nations might, but not them.


The BIS is just a counterparty to facilitate payments between nations. It doesn't exert influence in international affairs (except really via the BCBS [1] which sets the Basel capital accords defining how much capital banks have to hold and therefore does have a lot of influence behind the scenes on how banks operate anyway). When the US says it's going to give $100m in aid to some country or one country pays back a debt to another country, there needs to be someone to process the payment, and that someone is the BIS.

Source: friend used to work in the BIS and I've also been involved in banking off and on for a long time, including dealing with various international banking regulators.

Some fun BIS facts:

1) They process payments via regular SWIFT[2] messages. So the $100m in aid comes as a message just the same as if you transfer $5 from one bank account to another. The message carries an IBAN just like a regular bank account transfer, so if you changed that to your own account details and the message was processed, suddenly $100m would appear in your checking account instead of going to fund an aid programme for some government in Africa or whatnot.

2) The number of payments they process is very low (fewer than 100 per day at most, and usually in the low tens of messages), so every payment message is checked by hand by several independent people as well as by automated checks--partly to avoid the risk of funds getting sent to the wrong places, etc.

3) My friend worked there in the 90s and said that even back then they had extremely strong security with multifactor biometrics on every entry to the premises. You got in via an entrance where you had to step into a cylinder which would only unlock after it had taken multiple photos, including an iris scan.

[1] https://www.bis.org/bcbs/

[2] https://www.swift.com/about-us/discover-swift/messaging-and-...
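For the curious, fact 1 above refers to ordinary SWIFT MT103-style payment messages. A rough sketch of the key fields (every identifier and value below is invented for illustration; real messages carry more mandatory fields plus header blocks):

```text
:20:GOVAID20220706001          sender's transaction reference
:32A:220706USD100000000,00     value date / currency / amount
:50K:/0000000000               ordering customer (account + name)
     EXAMPLE GOVERNMENT TREASURY
:59:/CH0000000000000000000     beneficiary account (IBAN) + name
     EXAMPLE RECIPIENT MINISTRY
:70:AID PROGRAMME DISBURSEMENT remittance information
:71A:OUR                       who pays the charges
```

Swap out the `:59:` beneficiary details and, as the comment says, the money goes somewhere else entirely--which is presumably why the BIS double-checks these messages by hand.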


Based on their history of using their control over the App Store to "protect people" from such harmful content as content about how smartphones are made in sweatshops and tools (such as VPN clients, but also for a long time cryptocurrency wallets) that allow people to bypass restrictions put in place by these nation states that Apple works with, I'd claim these incentives are pretty shit :(.

https://www.youtube.com/watch?v=vsazo-Gs7ms


If you try to get into cryptocurrency your phone should automatically deliver electric shocks until you stop.


Corporations definitely have a lot of power today, but nothing more than they've had in the past.

https://en.wikipedia.org/wiki/Company_rule_in_India


Apple is a public corporation and votes on its corporate direction are freely available on the open market for anyone to purchase. Based on my share ownership Apple is much more subject to my whims than my actual elected politicians are on a % basis.


"It's the first such flag-raise I've seen."

I was dripping with disdain and sarcasm as I clicked "reply" but I actually want to engage you and have you seriously consider the history of oil and gas exploration and extraction.

This may, in fact, be a first for a US tech company ... but not in any way whatsoever a first for a business interest or corporation, etc.

This is also a very tame, roundabout and implied flag-raise - as opposed to "... summary execution, crimes against humanity, torture, inhumane treatment and arbitrary arrest and detention ...":

https://en.wikipedia.org/wiki/Ken_Saro-Wiwa#Family_lawsuits_...


Good points. I had not thought about oil at all. I’ve become aware through light skimming that wars, coups and similar incursions have occurred around other resources.

This feeds into the point I was aiming at. The tech megacorps now have tooling to protect a narrow aspect of their customers' lives from state incursion, regardless of which country that customer lives in. I've not read the link you shared yet, so I don't know the angle it takes or the angle you want to dig into my ideas from.

Thanks for sparing me the sarcasm.


It's marketing, and you swallowed it hook, line, and sinker.


Counterpoint - the EU has been passing laws that force Apple to be more fair in their markets, and this "we're protecting you from bad guys" stuff is Apple trying to figure out deniable methods to protest or sue against the EU passing laws to restrict Apple's ability to lock other developers out.

Throw together a basic set of options that should have been available long ago, and now Apple is protecting you, don't strip Apple of the ability to protect you, etc.


> from a corporation with more spare, unrestricted capital than many countries

... than most countries. There are only 7 countries with a higher GDP than Apple's market cap.

I have been concerned for some time about these mega corporations being as powerful if not more powerful than governments. They wield tremendous economic and political power. Corporations have very little allegiance to countries and have little to check them. It is a major concern of mine. Democracy in the U.S. is already being sold to the highest bidders.

These corporations are feudal lords but much, much more powerful because there is not a single person who can be brought down. Corporations are a collective who are treated as people when it's convenient and as something else when it's not.

It's bothersome to me, because these corporations are tax sinks. They get absolutely massive tax breaks on everything they do and pay as little as possible income taxes, comparatively speaking, all the while keeping billions offshore.

Billionaires and mega-corporations are national security threats to the countries that house them.


Market cap is a measurement of value. GDP is a measurement of output. Comparing the two doesn't really make sense.


Pick whatever comparison you'd like, and the rest of my comment still stands. What you said may be true, although I'm not sure it's as simple as you state. But debating the semantics of the exact comparison used isn't really important to the sentiment I espoused.


Apple is following the lead of Microsoft in this regard. Microsoft has been acting as an international cyber defense agency for a few years. On the effectiveness of Ukraine's cyber defense: "Microsoft in particular has been hard at work" 21:45

Assessing Russia’s War in Ukraine

https://youtu.be/CzbsPOaCrLw?t=1305


> It's the first such flag-raise I've seen.

After the Snowden leaks that showed even in-country citizen-to-citizen communication was being scooped up by the NSA without a warrant through fiber taps (if I remember that right) when Google replicated the data to out-of-country data centers, Google announced encryption of those links:

    Google encrypts data amid backlash against NSA spying
https://www.washingtonpost.com/business/technology/google-en...


This is good news IMHO because it encourages companies to compete for the best offer in that space as they go.

In some way it reminds me (with all the differences!) of how things like cryptocurrencies could remove the state from a monopoly.

Good news for me this announcement!


What they are doing is giving users an easy-to-use option to sacrifice part of the default user experience to enhance security by disabling features that are common vectors (which happen to be used by, as they phrase it multiple times in the announcement, "private companies developing state-sponsored mercenary spyware").

IMHO, whatever the reason why they are doing it, it's a good addition to their value proposition; but I don't think it's the same as what appears to be your understanding ("they will protect users from state actors"), at all.


A nation state has more than one way of extracting information from enemies of said state. There's the civilized way we now call hacking, and then there's the traditional way, which may or may not involve technology.


I dislike big tech as much as the next hacker, but this seems like quite a leap. Protecting from nation-state actors digitally can be a job for digital powerhouses. In this case, the hackers are just very determined hackers with a lot of resources. Apple is a very motivated company with a lot of resources. Slightly to your point though, they have higher income than 96% of the countries on the planet. So they have the wealth to establish an Appletopia.


> Apple is saying "we operate at the same level as nation states; we are a nation-state level entity operating in the "digital world": It's a flag-raise

Maybe. But these security “features” feel like things that should have been there from the beginning. Windows 11 already has a much wider and deeper array of security options. Sure, it’s not mobile, but many of those security options would be unlikely to be needed against unsophisticated attacks.

Flag-raise or marketing gimmick? You be the judge I guess.


This feels like an argument the government would make against strong encryption, as in the case a few years ago where the government tried to force Apple to unlock an iPhone and Apple refused, claiming it wasn't possible.

Apple are basically saying that they're going to do their best in terms of security measures to thwart even state actors, which is only as much of a nation-state level thing as "military grade encryption" is a thing only applicable to militaries.


> It's the first such flag-raise I've seen

You haven't been paying attention. Many tech companies have been protecting accounts from state attackers for many years, and explicitly calling out state sponsored attacks. Google introduced state-sponsored attack warnings in 2012 [1] and the Advanced Protection program explicitly protects from state sponsored attacks [2].

[1] https://security.googleblog.com/2012/06/security-warnings-fo...

[2] https://blog.google/threat-analysis-group/protecting-users-g...


> Many tech companies have been protecting accounts from state attackers for many years…

How many people have Microsoft and Google actually helped?

In case you didn’t notice, Apple is in the process of giving a few hundred million iPhone owners--every iPhone since the 2017 iPhone 8--protection from state-level actors, for free, in the next operating system update due this fall.

It totally dwarfs anything that any other company has done in this area. So there’s that.


Google sent more than 50,000 state sponsored attack warnings in 2021. And those warnings started in 2012. So a lot of people have been helped. Meanwhile Apple didn't start doing similar warnings until less than a year ago.

> Apple is in the process of giving a few hundred million iPhone owners

Um, no? Lockdown mode is explicitly for "very few users". There's no way a hundred million iPhone users would benefit. Google's Advanced Protection offers protection from state-level actors to anyone with a Google account, so if you want to count by the number of people offered optional protection, Google wins by a landslide.

> for free

Haha, no, you have to buy an iPhone from Apple first. Google offers protection to anyone actually for free. All you need is a free Google account and a security key which doesn't have to be purchased from Google.


Since we're being pedantic…

The point is the several hundreds of millions of existing Apple customers who own an iPhone 8 or newer are going to get Lockdown Mode in the next version of iOS for those "who may be at risk of highly targeted cyberattacks from private companies developing state-sponsored mercenary spyware" at no cost.

While it's true that very few iPhone users should ever need to activate this feature for the described use case, Apple has already indicated there will be more features added in the future where this could change.

There are likely additional use cases where an iPhone user may want to activate Lockdown Mode, such as traveling to an authoritarian country.

This article makes the argument that Lockdown Mode could benefit iPhone users who never activate it. [1]

[1]: "iPhone Lockdown Mode could benefit those of us who will never use it"—https://9to5mac.com/2022/07/07/iphone-lockdown-mode/


"state actors" doesn't mean the US government in its full force or any other government Apple is in bed with to make money (like China).

It means in the best case shady agencies, foreign services, small governments, and in the likelier case just unhinged people with some access to state facilities (tax employees, unofficial police investigations, lawyers...)


No, they aren't, any more than an OS claiming "military grade encrypted boot drive" means they have a military.


Since the software is still proprietary, considering these statements as guarantees is just an exercise of faith.


They don't "operate at the same level as nation states"; protecting against state actors isn't the only thing at that level, unless you mean cyber-security only. Abstracting this to any "nation-state level entity" is the crux of your argument.


> It's the first such flag-raise I've seen.

“Flag-raise” seems a bit hyperbolic but at any rate I think the BSA asserted such reach and power, long ago. Both have to act within the oversight of actual nation states.

Beyond that, a secure phone is necessary but not sufficient to defend oneself against a nation state.


Nothing new. When states requested access to the COVID contact-tracing database, Apple and Google refused access based on what happened in the Netherlands in WW2.

I must admit that on one hand it's anti-democratic; on the other hand, western democracies have a rather poor track record of safeguarding this kind of info.


I don't know if you've been paying attention to Apple's strategy over the last year, but it's basically been "granting user privacy also happens to grant us an advertising/data monopoly"

I don't think the aim here is to block state actors but to basically continue to close all security holes that can be exploited by any other company, while continually proving to users that Apple cares about privacy.

The thing is, I really like Apple even more now since they have realized that my privacy interests can be tightly aligned with their own economic interests. I never trust companies to be good or to look out for my interests even when I pay them to, but when my privacy ultimately gives them a very strong competitive edge, I'm much more trusting.

Apple has realized they can become to privacy what Google has been to ubiquitous search, and doing so can reap even larger and more secure rewards.

They started with a walled garden and are now extending it to a fortress surrounding the garden.


> advertising/data monopoly

not to be glib, but 'citation please?'

Other than running ads inside the App Store, do you have any knowledge or evidence of Apple collecting personal information for advertising or any other use?


> It's the first such flag-raise I've seen

Zuckerberg, 5 years ago: https://www.youtube.com/watch?v=mFPAe8Tc2NE


Perhaps "first credible" is the correct description.


I'm not so sure about that; I'm not that impressed by that list of features.


Apparently that protection does not include protection from the US government.

iMessage offers excellent privacy of message content, but no 'pen register' protection.

Phone device security is very strong, but it's made largely moot if you turn on iCloud backups (which is the default behavior if you provide an Apple ID. I'm not sure there's even a way to stop the initial backup from happening?)

Apple reportedly doesn't offer E2EE on iCloud, or even encrypted device backups, as a compromise with the federal government...specifically the FBI, CIA, and NSA.

Why might people care about this? The criminalization of abortion and miscarriages...and what looks like, at the very least, a de-recognition, and possibly criminalization, of LGBTQ relationships.


True, Apple could stop nagging about backing up into iCloud.

Apple should offer other sorts of backups, and offline iCloud systems.


They do offer other sorts of backups.

You can backup to a Mac or PC. And it's offline and encrypted.


When Apple says "state actor threats" they're not talking about future-state theoretical breaches of domestic privacy by your own government. Apple is always going to follow the law. They're talking about the types of situations where data from people's phones is used to commit international criminal activity, espionage, assassinations, etc.


I know it is a very polemic topic but I genuinely think abortion is a crime, not a right.

Not anything against LGBTQ or the like, though.


I think you need to put away the pipe, this is Apple saying "we can't make JIT work safely so here's an option to turn it off".


> Apple saying "we can't make JIT work safely so here's an option to turn it off"

To be fair has anyone made it work safely ?


This is more like “there are always going to be zero-day exploits out there and until we can fix them, this is the next best thing."


Do you also believe the earth is flat?


This is great, but also clever.

By offering users a more locked down option with clear tradeoffs, (a) users can make a choice between security and convenience, and (b) given user agency, negative press around hacks of not locked-down devices loses potency.

Meanwhile, the choice seems straightforward on most of these...

Lockdown Mode includes the following protections:

- Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.

GREAT!

- Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.

GREAT!

- Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.

GREAT!

- Wired connections with a computer or accessory are blocked when iPhone is locked.

GREAT! (Used to have to do this yourself with Configurator if you wanted to be hostile border-crossing proof.)

- Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.

HMM ... there are hardening settings only available through Configurator or MDM profiles. Will those be defaulted on as well?


>> - Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.

> HMM ... there are hardening settings only available through Configurator or MDM profiles. Will those be defaulted on as well?

Reading between the lines here--in Lockdown Mode, you can't install a profile or enroll in MDM. What it doesn't say is that you can't enable Lockdown Mode with a profile installed, or if enrolled in MDM.

I take this to mean, with lockdown turned on, I can't install profiles or enroll in MDM (but presumably could uninstall profiles or unenroll from MDM).


I also take it to mean that most of these hardening features will be able to be enabled by configuration profiles / MDM anyway. Lockdown Mode is essentially "Apple being the MDM for individuals who don't otherwise have an MDM."


Correct. Existing MDM profiles will be unaffected.


>- Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.

>HMM ... there are hardening settings only available through Configurator or MDM profiles. Will those be defaulted on as well?

Yes, that one leapt out at me as well as kind of an awkward one with more compromises, painting with a very broad brush. It's obvious that some of the very powerful config profile/MDM capabilities could be used for a lot of mischief, but some of them are also exactly what I'd want to be running myself if I were at a lot of risk, and some are both. E.g., continuing to run one's own offline CA with proper Name Constraints could be handy for a group of people who want to better secure and keep private their own internal network services from anything short of a government physical assault, but if an attacker can slip on a profile with an unconstrained CA, your goose is cooked.

Perhaps Apple simply doesn't have the capability for fine grained control of those capabilities yet, which wouldn't be surprising given their path up until now. I'll be interested to see if over time Apple leaves this mostly untouched or invests in seriously improving it. Like it'd be interesting if you could boot into a special mode ala DFU though requiring password and with graphics up and have a bunch of toggles for various capabilities that would then be enforced in normal usage. Analogous to the Recovery Mode on Macs.
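As a concrete sketch of the name-constrained offline CA idea (all names here are hypothetical), an OpenSSL extensions section can mark the constraint critical, so a conforming client rejects any certificate the CA issues for names outside the permitted subtree:

```ini
# Hypothetical extensions section in openssl.cnf for an internal root CA.
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage         = critical, keyCertSign, cRLSign
# Critical name constraint: certs chaining to this root are only
# acceptable for hosts under this DNS subtree.
nameConstraints  = critical, permitted;DNS:.internal.example.net
```

You'd then generate the root with something like `openssl req -x509 -new -key ca.key -config openssl.cnf -extensions v3_constrained_ca -out ca.crt`. One caveat: enforcement of name constraints on trust anchors has historically varied between TLS stacks, so it's worth verifying behavior on each platform you care about.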


You can simply enable those MDM profiles then enable Lockdown mode; they will stay on. You just can't enable new ones while Lockdown mode is enabled.


> Perhaps Apple simply doesn't have the capability for fine grained control of those capabilities yet, which wouldn't be surprising given their path up until now.

I have to believe they’re working on exposing some of this via MDM. Certain organizations may never want the JIT turned on, for example, or may never want to allow attachments in iMessage.

I expect we’ll hear more about more capabilities this summer and fall.


Do you really trust your average IT department to make an informed decision about whether WebKit JIT is currently secure or not? I don't see Apple putting these in MDM Configuration Profiles. If they do, it will only be for Supervised Devices (i.e. devices owned by your employer, must be wiped to enroll).


> Do you really trust your average IT department to make an informed decision about whether WebKit JIT is currently secure or not?

In general, no.

For specific website or web apps, yes.


I'm worried about that last point simply because I assume that most corporate CEOs would need MDM enabled to access their corporate email, which means they won't be able to use this feature despite being prime targets.

Still a really good feature for those that qualify, though.


> I'm worried about that last point simply because I assume that most corporate CEOs would need MDM enabled to access their corporate email…

As long as the MDM profile is installed before using Lockdown Mode, they’ll be fine. They just can’t install an MDM profile once the phone is locked down, which makes sense.


Oh, well that's way better. Thanks for clarifying!


I believe you can still have it, you just have to set it up before you put the device in lockdown mode.


Ideal approach would be to be able to manage all those features individually via MDM. This way corp admins would be able to lock down managed phones while bringing necessary configuration to access corp services.


Last year I wrote: "In the world I inhabit, I’m hoping that Ivan Krstić wakes up tomorrow and tells his bosses he wants to put NSO out of business. And I’m hoping that his bosses say 'great: here’s a blank check.' Maybe they’ll succeed and maybe they’ll fail, but I’ll bet they can at least make NSO’s life interesting." [1]

Maybe this is the blank check :)

[1] https://news.ycombinator.com/item?id=27897975


I hope Apple expands this quickly through minor updates to the OS rather than waiting for a next major release. This needs faster iteration than anything else.

Quoting what’s in the first release:

> At launch, Lockdown Mode includes the following protections:

> Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.

> Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.

> Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.

> Wired connections with a computer or accessory are blocked when iPhone is locked.

> Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.

I’m not a target (I think, and hopefully don’t get to be one), but nevertheless I’d feel safer with this turned on (I very rarely use FaceTime, so not accepting it is not a big deal).

I’d also love more protections. Not allowing specific apps to connect to any network (WiFi included), Apple handling issue reports on apps with urgency (right now they seem to be ignored even when policy violations which are against the user’s interests are reported), etc.


I think it's reasonable to think Apple will iterate quickly on this.

Why? The iOS 15.x update history.

https://en.wikipedia.org/wiki/IOS_15

Lots and lots of privacy stuff in the point releases. (And accessibility stuff, they’ve been on a tear there.) They’re still in a monolithic mindset when it comes to the “big” apps, but they’re iterating faster on these sorts of things as the release cycle goes along.


You might have missed that Apple announced realtime security updates at WWDC [1].

[1]: https://techcrunch.com/2022/06/07/apple-introduces-real-time...


That includes fast, no-reboot, and invisible-to-the-user security patches, not improvements in features like Lockdown Mode.


Yup, I sure did.

That…is seemingly a thing they should have done a long time ago…but it’s still smart, and I’m glad they’re doing it. Now they don’t have to rush the QA of a point release to vanquish yet another PDF parsing security threat.


> I’m not a target (I think, and hopefully don’t get to be one), but nevertheless I’d feel safer with this turned on (I very rarely use FaceTime, so not accepting it is not a big deal).

Good. We need people with nothing to hide to turn Lockdown Mode on, so that Lockdown Mode isn't a telltale signal that you have something to hide.


Aside from the JIT change, those all sound like pluses to me!


This is great but too big of a hammer for most use cases. What I really want is a per-application firewall.

For example, say I would like to install a photo editing application. It would need access to my photos. That is fine, so long as it is not allowed to connect to the Internet (or any other network). There is currently no way to ensure this.


> This is great but too big of a hammer for most use cases.

This is not in any way intended for most use-cases, it's very clearly intended for a single, specific, uncommon use-case. The press release says as much more than once.


I guess my point is that instead of making a special mode that is only useful for a minority of users, it would have been really nice to get a feature that everybody should be thinking about and using.


Different people who specialize in different aspects of security can be working on different things at the same time; and contrariwise, experts have comparative advantages and would mostly be wasting their time working outside their niche.

In other words: there's no "instead" here, any more than there's an "instead" between e.g. UI work and backend server work. Different people, different competencies, concurrent capacity.


Every time I have allocated labor on a software project, I was mostly playing a zero-sum game. I am surprised to learn that Apple does not have such problems.

Regardless, I was just lamenting that we don't (yet) have a feature that should be table stakes at this point.


Perhaps that's what it eventually evolves into. Probably easier to get this off the ground by developing it as a separate mode.


Agreed. I wish iOS had a "network access" permission just like Android does. (Though to avoid permission fatigue for the average user, perhaps make it something only users who care can deny.)

That said, I think this is pretty unrelated to protecting yourself from nation-state actors. Mercenary spyware (like NSO) doesn't use a legitimate app store app as its initial infection point. I can think of many reasons for this: difficulty getting the target to install it, app store approvals, leaking their 0-days, leaving more of a paper trail, avoiding scrutiny in general, etc. I'd of course still love this feature for my own data privacy.


It's not exposed in the UI, but if you really care, you can just create yourself a configuration profile that disables various per-app permissions (including network access, per-domain/per-IP/per-certificate) on a fairly fine-grained basis. MDM yourself.
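A sketch of what "MDM yourself" looks like in practice: a hand-written .mobileconfig profile installed through Settings. The payload keys below follow Apple's device-management profile reference but should be treated as assumptions to verify (e.g. the deny-list key has been renamed across iOS versions), and true per-app network control needs a `FilterType` of `Plugin` plus a filter-provider app on a supervised device; this sketch shows the simpler built-in web filter:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Outer profile shell -->
  <key>PayloadType</key><string>Configuration</string>
  <key>PayloadVersion</key><integer>1</integer>
  <key>PayloadIdentifier</key><string>example.selfmdm.profile</string>
  <key>PayloadUUID</key><string>11111111-2222-3333-4444-555555555555</string>
  <key>PayloadDisplayName</key><string>Self-managed hardening</string>
  <key>PayloadContent</key>
  <array>
    <dict>
      <!-- Built-in web content filter payload: deny-list specific hosts -->
      <key>PayloadType</key><string>com.apple.webcontent-filter</string>
      <key>PayloadVersion</key><integer>1</integer>
      <key>PayloadIdentifier</key><string>example.selfmdm.filter</string>
      <key>PayloadUUID</key><string>66666666-7777-8888-9999-000000000000</string>
      <key>FilterType</key><string>BuiltIn</string>
      <key>AutoFilterEnabled</key><false/>
      <key>DenylistURLs</key>
      <array>
        <string>https://tracker.example.com</string>
      </array>
    </dict>
  </array>
</dict>
</plist>
```

The identifiers and domains above are placeholders; Apple Configurator will generate valid UUIDs and validate the payload structure for you.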


Would be interested in this with a simple interface


> (Though to avoid permission fatigue for the average user, perhaps make it something only users that care can deny)

Yeah, I would not want to have to approve every app. What I would like is a machine readable description of the app's capabilities to include Internet access, just as is required for access to the microphone or photos. This would encourage app developers to advertise to users that they don't need such capability and encourage users to realize that privacy and Internet access are mutually exclusive.

There are many small apps I simply will not buy/install (e.g., apps for editing photos or contacts or calendars) because they cannot be trusted. Even if you trust the developer, the developers are often embedding third party analytics libraries that cannot be trusted.


This feature exists in Chinese iPhones because it's required by law there.


Stock Android doesn't allow denying network permissions, it would eat into Google's ad revenue


I'd go a step further, and say per-application virtualization. Every single program running its own (ideally encrypted memory) namespace, with its own assigned memory, etc.


That's what the iOS sandbox provides. Heck, the tools arm64 gives you to isolate VMs are awfully similar to the tools it gives you to isolate processes. VM escapes aren't too different from sandbox escapes.

Encrypted memory isn't part of Arm yet; I was holding out hope with Armv9 "realms", but not so.


I think this is one of the very (very) long term goals of the GrapheneOS project.


So basically Qubes OS on a phone?


I use little snitch for this, but I agree, a big hammer, and likely more hoops for regular developers to jump through. Notarisation, signing, forced developer keys...


I use Little Snitch on macOS, but it is not available on iOS, so far as I know. Normal apps on iOS do not have enough visibility into the system for that.


Android exposes a soft VPN API that firewall apps can use to block network traffic for certain apps in certain scenarios (say, no Google Play updates when on mobile data) with apps like Netguard [1].

Does iOS not expose such functionality? Surely there's some kind of VPN API?

[1]: https://github.com/M66B/NetGuard


> Android exposes a soft VPN API that firewall apps can use to block network traffic for certain apps in certain scenarios (say, no Google Play updates when on mobile data) with apps like Netguard.

I worked on AOSP for longer than I care to admit. This is mostly an illusion. System apps (like Google Play) can pretty much do whatever the heck it is that they want to. NetGuard, sure, "firewalls" it... but it wouldn't even know if a system app bypassed its tunnel. For installed apps, NetGuard is golden (as long as NetGuard itself doesn't leak).

disclosure: I co-develop a FOSS NetGuard alternative (and yes, this alternative has similar limitations).


Interesting, and disappointing. Do you happen to know what mechanism is used to bypass the VPN configuration?

I'm using my VPN as a Pihole tunnel and I don't notice any extra logs or requests when I turn off the VPN, but I may just be lucky. I did purge a lot of preinstalled Facebook crap…


It isn't that System Apps actively bypass the VPN tunnel, but they can if they want to, on-demand [0]. That is, System Apps retain the ability to bind to any network interface. Whether they do so, is anyone's guess.

For installed apps, there's no such respite, iff one enables 'Block connections without VPN' (the VPN lockdown mode) on Android 10+ (but NetGuard doesn't support it). This means in the times when NetGuard crashes or restarts (which it does on network changes, for example, or even on screen-off/screen-on, from what I recall), there's a chance the traffic flows through underlying interfaces rather than the tunnel (because the tunnel simply doesn't exist in the interim).

Datura (ebpf based) on CalyxOS and AfWall+ on any rooted Android can block out everything it pleases, though.

I don't mean to downplay NetGuard, because the codebase has evolved in response to years of addressing flaky networks, flawed apps, buggy Android forks. Marcel, the lead developer, has put his life's work into it and gave it away for free. The app I co-develop is, in fact, inspired from his efforts.

[0] https://github.com/celzero/rethink-app/issues/224


I see, thank you for explaining! Good to know that rooting your phone still has some benefits. I wouldn't have thought that there's such an easy bypass for system apps, but I suppose it makes sense for some modem/carrier apps to specify an interface.

I absolutely love Netguard even though I don't really use a firewall in practice (I was sort of hoping a permanent VPN with some "real" traffic meddling would be enough to block most violations of my privacy). It's the one rootless firewall that actually just works on practically any device you can think of, among a sea of broken/scammy firewalls that fail all kinds of edge cases.


> It's the one rootless firewall that actually just works on practically any device you can think of, among a sea of broken/scammy firewalls that fail all kinds of edge cases.

You should try the one I am building (: Promise, no scams in that one: https://f-droid.org/packages/com.celzero.bravedns/


Android has app system level options in the settings to disable WiFi/mobile data.

I tend to use that, and use Netguard as a fallback because the latter has an off-by-default config in case I forget to disable it for new apps.

Netguard on its own is insufficient because sometimes you'd need to use an actual VPN (which turns off Netguard)


I've had those options on multiple OnePlus phones, but they were not present on multiple Pixels. Since Pixels are usually sold as the "AOSP experience with Google flavor" yet lack this feature, I am not sure whether it comes from AOSP or is only present on OnePlus phones.


I've generally found them on most Android phones, but they're all over the place in the settings. On my current phone they're not in permissions, or connections, or internet setup, or security, but they're in the app details screen.

I've also seen the toggles placed in the data usage graph, the other, older data usage graph you can sometimes find via a workaround, and in a separate app that pretends to be one of those system storage optimizers.

I'm sure Android supports it at the system level but how you get to those settings is anyone's guess, really.


iOS has APIs for VPNs and “content blockers”. But as far as I know, such a filter has no access to know which process/application is trying to make a connection. Little Snitch on macOS has to install code into kernel space. (Or at least it used to; I have not reinstalled in a long time.)
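For what it's worth, the content-blocker side of that API is declarative: an extension hands WebKit a JSON rule list and never observes the traffic itself, which is exactly why it can't do per-process firewalling. A minimal sketch (domains hypothetical):

```json
[
  {
    "trigger": { "url-filter": "^https?://tracker\\.example\\.com/" },
    "action": { "type": "block" }
  },
  {
    "trigger": {
      "url-filter": ".*",
      "resource-type": ["script"],
      "if-domain": ["*untrusted.example.org"]
    },
    "action": { "type": "block" }
  }
]
```

These rules apply only to page loads inside Safari and WKWebView, not to sockets opened by other apps, which is the gap the parent comment is pointing at.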

The Android app you link to seems to have the functionality I think should exist as a built-in. It needs to be built-in so that non-geeks can use it.

Just as users are asked the first time an application attempts to use the microphone and are able to prevent it before it starts, they should be able to limit network access and revoke it at any time.

(I don’t think users should be necessarily be forced to approve Internet access for every app install. Just make it possible to revoke in the global Settings widget and encourage users to think about personal data and Internet access being mutually exclusive.)


Not like that. The idea is antithetical to Apple, who have said during keynotes that they've tried to avoid doing so, because what they really want is a world where the concept of "mobile data" is not limiting.


Little Snitch is great. Apple would never allow it on iOS which is ridiculous.


It's not the same, but have you used App Privacy Report to monitor what your iOS apps are doing?

https://www.wired.com/story/ios-15-app-privacy-report/


Thanks for posting this. I just turned it on and am looking forward to the report.

It's under Settings > Privacy > App Privacy Report.


The App Privacy Report is great, but too late. It shows you what an app did, not what it might do.


None of which is particularly effective, since it's trivial to set up a legal entity that makes one game but signs a bunch of malware (or steals enterprise keys).


That would be a pretty interesting VPN service if you could easily deploy it as a Docker container. Something simple that could give Little Snitch-like whitelisting.

The Charles Proxy iOS app doesn't have the UI to support this, it's clumsy to whitelist domains, but it does provide some visibility into what domains are being accessed.


Edit: apparently I was wrong here? Though I'd swear it had the feature?


It does not ask for internet access, it asks for access to other devices on the LAN. Not the same thing.


You can disable app's cellular data access, but that's it, at least on Western phones. Ironically, phones for the Chinese market actually expand that setting and also allow to block Wi-Fi access.


As a Chinese user, this is the first time I've heard that blocking Wi-Fi access on iOS is China-only. No wonder I was confused reading the comment above yours, given I'm already able to block network access for any iOS app.


Where do you see this in iOS? The Settings app has many permissions for applications, but no "Internet" permission.


You can turn off an app's cellular data access, but that's not quite the whole internet, as Wi-Fi will still work. It only solves half the problem.


I am aware of that option. It is on the screen I just described. That is really just for saving bandwidth where it is expensive. It is in no way intended as a security measure.


"Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode."

Highly interesting, that Apple is doing this. This is a thing. MS and Google are also taking steps to harden Chromium security against JIT compiler issues with JavaScript. https://www.zdnet.com/article/securing-microsoft-edge-switch...
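Chromium already exposes this same per-site JIT tradeoff to administrators. A sketch of a managed-policy file (e.g. dropped in `/etc/opt/chrome/policies/managed/` on Linux); the policy names come from Chrome's enterprise policy list, where `2` means "don't allow any site to use the JIT", but treat the exact names and values as assumptions to verify against the current policy reference:

```json
{
  "DefaultJavaScriptJitSetting": 2,
  "JavaScriptJitAllowedForSites": ["[*.]trusted.example.com"]
}
```

This mirrors Lockdown Mode's model: JIT off globally, with a per-site trusted-origin escape hatch.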


I just don't want most of the programming capabilities on the web, plain old hypertext with a bit of style is enough. There are plenty of other ways to run software on a computer than inside a web browser.


I half-agree with you: we need the web split into two parts, webpages and apps.

I've seen some cool simulations, small apps, and small games that I can just try online without having to install them on my machine. Apple would love it if we all got scared and only used installed apps from their store, but the web is a decent delivery platform.

If we could have a modern subset of HTML and CSS for news websites and blogs, and the rest of JS for web apps, then you'd have the option to turn off the advanced features. Or we could have different browsers that focus on different things: a website-reader browser that doesn't care about super-fast JITed JS and wouldn't support WebGL, camera, or microphone access, focusing instead on text layout and simple forms,

and a web-app browser that focuses on extreme optimization of JS, canvas, and WebGL operations, plus camera and microphone access.


I'm having fun with Gemini exactly because it's so dumbed down that you can't do anything more than publish text

It's still very niche, but it's growing and the protocol is so simple that I'm writing software for it, specifically a multi platform browser (more like a viewer?)


You can already achieve all this. Either turn off JS in your browser, or use extensions such as NoScript.


Actually I browse with JS off by default and whitelist stuff, which is ironic since I am a web dev (or maybe knowing how shit web tech is is exactly why I think documents should be documents; imagine I want to show you my blog but I build an Unreal Engine 5 app because I want some cool effects, I want to learn this shiny tool, and the marketing team wants to do some shitty things too).


You can technically achieve this, but you get a degraded experience. Most sites don't test for JS being turned off, and it's not rare to only get a blank page when viewing a site in that way.

What OP wishes for is rather an experience that decidedly doesn't use JS, similar to Google's AMP or Gemini. A subset of HTML that makes publishing possible, without moving parts.


Most (if not all) browsers allow you to disable JS, so that seems like the perfect preference for you. I know it works on Chrome and Firefox on desktop (I use the NoScript extension myself, that blocks JS by default but allows you to enable it per-site), I can imagine it works the same on smartphones as well.


I /think/ what they're asking for is a world where turning JS off is actually a real option. Currently the web essentially does not work in that case, so while the option to disable JS technically exists, it isn't actually a real option.


So what they want isn't the power to turn off JS in their own browser, but the power to turn it off in other people's browsers (at least the browsers of people developing websites).

More seriously, I guess they might want a way of avoiding sites that don't have a good no-script experience. Perhaps if there were a trustworthy way to vote on that (or detect it automatically), someone could offer an extension which puts scary red boxes around hyperlinks which point to such sites.


just seems more antiweb from Apple. they love to ruin the web and make it harder to avoid their walled app garden....it's a money ploy to fight web apps and make their little devices even less useful.


Too bad that Google does not offer the same “Lockdown Mode” that Apple does.

Instead, they (Google Play Store) removed our ability to see what “app privileges” an app would require BEFORE we do the installation step from the Google Play Store. What we got instead was an obfuscated “Data Security” section that is pretty much always “blank”.

My flashlight app should not require a GAZILLION app privileges, nor hide that fact before I can determine whether I can safely install it. The Apple App Store, by contrast, does the CRUCIAL pre-reveal of any needed app privilege(s) for our leisurely perusal, letting us apply any applicable personalized privacy requirements BEFORE we install the app.


Google removed the install-time permissions dialog because they replaced it with runtime permissions. This makes sense - some users want PayPal or WhatsApp to access their contact list, and others won't. It also fixes "permission blindness", where users blindly accept a long list of permissions because they need the app, or just stop caring because it's too much to comprehend all at once.

Obviously, this isn't perfect, especially since Google removed the internet permission and allowed all apps to access it. Allowing advanced users like us to toggle off internet access in the "App info" permission page would be a good compromise, and I hope the Android team does so to match Apple on their security efforts.


It's taken a decade, but it's pretty much moved back to the permission model that J2ME had, which iOS and Android deliberately removed and sold as better UX. Seems like the original devs of J2ME knew what they were doing; the general public just wasn't ready for permission popups then like they are now. :sigh:


Fixes “permission blindness”? The current Google Play Store practice of showing each app's “Data Security” section as “(blank)” is surely yet another form of “permission blindness”.

Google Play Store being proactive in protecting these end-users from their own form of stupidity (or “permission blindness”, as you have eloquently pointed out) is just opening themselves to potential liability ramifications instead of deferring to end-user’s responsibility of maintaining their own privacy.

I think the term “permission blindness” better describes an app that appears to have zero privileges.

And “App Privileges” should have referred to runtime permissions and should have been displayed in the first place at the Google Play Store instead of install-time privileges.


Your apps have no permissions until you allow them. If you install spyware and it wants all your contacts and files it has to ask. You simply select "no" and then remove it.

Apps would force you to consent to eg contact permissions "in case you want to share something to a contact" and then harvest all your contacts. Apps can no longer use that pretense.


You get prompted for that granularity of privacy AFTER the app gets installed, but you can't preview those app settings beforehand.


Yes. It has no access after being installed and before prompting. What exactly is the issue?


“Permission blindness” still remains at the stage BEFORE app installation.

Perhaps we can call what it is now as “trust me first, then we will let you verify”.

When it should be “trust but verify first”.


You should be able to review the list of required permissions before installing the app anyway.

I find it frustrating when I install a simple app and it asks me for every permission possible. Waste of time.


Google hiding information about apps in the app store is a big problem - but it's not as big a problem as not having a Little Snitch equivalent built into Android. This alone is a reason for real capital to be spent on startups in the alt-Android space.

Imagine a company that lets you use your current Samsung or Google or Sony or ASUS or whatever flagship phone, but with a truly open-source fork of Android with a Little Snitch built in, and security updates guaranteed for as long as you stay current with your subscription, which is like $5/mo. (Maybe that's too low.)

Maybe you could even wipe your device and mail it in to have the software installed if you can't be bothered to do it yourself. Or maybe even a partnership with a phone repair chain. (And if you don't want to pay the fee, you can always install updates yourself manually, from source.)


> Imagine a company that lets you use your current Samsung or Google or Sony or ASUS or whatever flagship phone, but with a truly open-source fork of Android with a Little Snitch built in, and security updates guaranteed

You describe the direction CalyxOS / DivestOS are going. And of course, there's the Pixel phones on GrapheneOS which arguably is more security-focused.


>CalyxOS

I just read their homepage, and they don't have Google Play support. The requirement to run Google Play Services to access and run apps represents a serious anti-trust concern to me (and to the DoJ under any administration, I would imagine). Perhaps more importantly, I see no mention of any facility for network monitoring.

>DivestOS

Hadn't heard of this LineageOS fork, thanks. TBH I can't really tell how it differs from either Calyx or Graphene. None of these tools have the top-line features I mentioned.


> Perhaps more importantly, I see no mention of any facility for network monitoring.

CalyxOS intends to build a comprehensive netmon: https://gitlab.com/CalyxOS/calyxos/-/issues/349

Right now, they've got an ebpf-based firewall: https://calyxos.org/docs/tech/datura-details/

> TBH I can't really tell how it differs from either Calyx or Divest.

The lead developer is pretty active on github and fdroid forums: https://forum.f-droid.org/t/10105


Whilst not quite the same, Google does offer the Advanced Protection Program for accounts.

https://landing.google.com/advancedprotection/


I’ve been using that for years but was wondering whether the documentation is current about Chrome - they offer things like disabling the JIT (nearly half of Chrome’s exploits last year) as a group policy option on Windows, for example, but it doesn’t appear that APP does anything for Chrome users other than mandatory Safe Browsing.


> they (Google Play Store) removed our ability to see what “app privileges” that an app would required

Don't use Google Play Store, then. There are other APK repositories.


If Apple was really serious about this, they would add one more feature to Lockdown mode: To delete and scrub permanently and definitively all your iCloud data.

You can close the proverbial "front door" by enabling "Lockdown Mode", but if that same government sends a subpoena to Apple, they will just give them a copy of all your iCloud private data.


Most iCloud data is end-to-end encrypted; Apple doesn't have direct access to your data. In the end they do own the OS and could potentially backdoor your device, but if you're worried about that... well, Lockdown Mode is moot at that point.

Worth noting Apple previously refused an FBI order to do just that. https://en.wikipedia.org/wiki/FBI–Apple_encryption_dispute


Apple refused an FBI order to decrypt a phone; however they allow the FBI to access iCloud data all the time. And iMessage is not end-to-end encrypted in iCloud at the explicit request of the FBI. https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


Yes but many things on iCloud are E2E encrypted.

https://support.apple.com/en-us/HT202303


Which makes it all the more ridiculous that sensitive things like messages, photos, contacts, and notes aren't, even as an option. Clearly the technical ability is there.


> Most iCloud data is end-to-end encrypted; Apple doesn't have direct access to your data.

Depends what you think of as ‘most’, really; the things without end-to-end encryption include photos, iCloud Drive files, notes and backups.

https://support.apple.com/en-us/HT202303


Secure notes are end to end encrypted [1]

[1] https://support.apple.com/en-gb/guide/security/sec1782bcab1/...


So you have a passphrase protecting the note.

On a mobile device chances are your passphrase is rather weak because it's tedious to enter.


If you care about your privacy don't upload your private data to ANY cloud service.

Even if iCloud was encrypted they still run on third party cloud providers who nobody knows what relationship they have with governments. Many types of encryption are breakable if you have effectively unlimited resources.


I've been using iPhones since the first iPhone. I don't sync any relevant stuff to iCloud. However, during previous iOS updates this sync turned itself back on multiple times (more than three times for sure).

It hasn't happened for a while but whenever there's an iOS update it's advisable to check your iCloud settings immediately afterwards and if they changed, you change them back and pray that your important data hasn't been sent to the cloud in the meantime.


Nobody who is at risk for this is doing iCloud backups. That's something you can already turn off.


Their conversation partners are. iCloud Backup is a backdoor in iMessage's end to end encryption preserved explicitly at the behest of the FBI.


I'd love to see evidence of this.


GP is partially right:

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

According to Reuters sources, Apple abandoned plans to offer iCloud backup encryption, out of fear of government retaliation or even spawning new anti-encryption legislation.

On the other hand, GP is responding to:

> Nobody who is at risk for this is doing iCloud backups. That's something you can already turn off.

And indeed, if you turn off iCloud backups, there is no "backdoor" into iMessage. You can also set up your phone to do encrypted backups locally to your laptop, if you want that instead.


"For Messages in iCloud, if you have iCloud Backup turned on, your backup includes a copy of the key protecting your messages"

https://support.apple.com/en-us/HT202303

Yes, that really does mean that Apple can decrypt your messages. In fact, Apple does it this way at the explicit request of the FBI, as reported by Reuters. https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

And look at all the other potentially sensitive data that is not end-to-end encrypted in the backups. Photos, notes, reminders, calendars, the list goes on.
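The mechanics quoted from the support document can be shown with a toy model (XOR here is a stand-in for illustration, not Apple's actual cryptography): if the key protecting your messages is itself stored in a backup encrypted under a key the provider holds, the provider can unwrap it and the end-to-end property is gone.

```python
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a repeating key. Encrypting twice with
    # the same key decrypts. Illustration only -- never use XOR as a cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message_key = b"device-only-secret"          # the key that protects iMessage
provider_backup_key = b"held-in-escrow"      # hypothetical: provider retains this

# The message key is included in the backup, as the support page describes...
backup = xor_encrypt(message_key, provider_backup_key)

# ...so anyone holding the backup key (the provider, or whoever compels them)
# can recover the message key and thus the "end-to-end encrypted" messages.
recovered = xor_encrypt(backup, provider_backup_key)
```

The point is structural, not about cipher strength: whoever can decrypt the backup can decrypt everything keyed inside it.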


Yes, that really does mean that Apple can decrypt your messages.

I don’t think so:

    Apple doesn’t log the contents of
    messages or attachments, which are protected
    by end-to-end encryption so no one but
    the sender and receiver can access them.
    Apple can’t decrypt the data.

    When a user turns on iMessage on a device,
    the device generates encryption and signing
    pairs of keys for use with the service. For
    encryption, there is an encryption RSA
    1280-bit key as well as an encryption EC
    256-bit key on the NIST P-256 curve. For
    signatures, Elliptic Curve Digital Signature
    Algorithm (ECDSA) 256-bit signing keys are
    used. The private keys are saved in the
    device’s keychain and only available after
    first unlock. The public keys are sent to
    Apple Identity Service (IDS), where they are
    associated with the user’s phone number or
    email address, along with the device’s APNs
    address.
From iMessage Security Overview--https://support.apple.com/guide/security/imessage-security-o...


But you can backup your keychain on iCloud


But the keychain is end-to-end encrypted and Apple doesn't have the key to decrypt it.


It's not something that has evidence - what they mean is that even if you have iCloud backups disabled, everyone you talk to might not. The point of e2ee is that both ends must have it encrypted - not just you and the server, but more abstractly, the communication partners.


That is a novel and quite broad interpretation of E2EE. In typical E2EE only endpoints of a (logical) communication channel can decrypt messages on that channel. But E2EE does not say anything about what an endpoint can do with those messages once they decrypted them -- they could print them at the public library and leave them there, they can forward them to the FBI, they can post them on reddit, etc.

If you do not trust your communication partner to safeguard your messages, E2EE will not help you at all.


The point is that many people have iCloud Backups enabled without any awareness whatsoever of the implications, as iCloud Backups are opt-out and there is zero disclosure within the OS (only an Apple Support webpage nobody will visit).

It leads to E2E being systemically weakened, since most of your iMessage conversations will get immediately scooped up by Apple and alphabet agencies, dragnet-style.


There is no evidence that the messages are being collected dragnet-style. FAA702 is targeted.


I understand that, I didn't mean the concept of e2ee requires the endpoints to never share it at all. What I meant was, commonly people will disable iCloud backups hoping to regain some privacy, but it does nothing because most of your communication partners use iCloud backups. Just like people who switch to eg. Protonmail - if you only ever talk to GMail users, it doesn't really give you much extra privacy.


You can already turn off iCloud features?


Apple's been making it real difficult to pick Android lately. Only thing Android still has going for it is the ability to flash custom ROMs, eg CalyxOS or Graphene.


On Android I can use a firewall to block network access per app. on iOS that is not possible.

My password manager app might be bought out and exfiltrate all my credentials, or any of the linked libraries it uses.


> My password manager app might be bought out and exfiltrate all my credentials

This is less likely if you use Apple Keychain for your passwords. lock-in intensifies


Apple Keychain requires iCloud. Most of iCloud is not end to end encrypted.


But the keychain is, of course.


My point is that if you care about e2e crypto and privacy, you already have iCloud turned off in full, including the e2e bits, because it's a privacy minefield.


I explored installing a custom ROM on my Android phone, but ended up questioning the utility of them. There appear to be many banking apps, random apps (McDonald's??) and others that will not work if the device is running a custom ROM.

That makes my phone useless to me.

Our only hope is a proper Linux phone with an Android emulation layer


You can get around that by spoofing SafetyNet stuff using Magisk. But yeah, it is a few more hoops to jump through, and you need to be rooted, which is itself not great for security.


CalyxOS passes SafetyNet on my Pixel phone, at least for now


Better security, more features, more privacy, and more user control in general are significant reasons to choose Android.


Compare the actions of Google versus the actions of Apple and it's real difficult to think Google has your privacy in mind


- you can't install apps without an Apple account (making the phone useless really)
- you can't download apps from outside the App Store
- there is no security-enhanced or de-Appled version of iOS, while on Android GrapheneOS and CalyxOS exist
- you are limited to Safari as a browser engine (no extensions)



Compare the actual features of Android vs. the actual features (instead of the marketing) of iOS, and it's clear that Apple doesn't care about user privacy. With Android, you get to choose which if any Google services to use. On iOS, you can't run any apps without telling Apple which ones, you can't get your location without also sending your location to Apple, and you can't practically run your own apps without fully deanonymizing yourself with banking details.


Android has a wide plethora of devices, Apple can't make hardware catering to everyone's needs.


That is not an Android advantage. Tightly controlled hardware makes it so much easier to control software. You ever built an app for Android? It sucks


Maybe they changed this lately, but can you copy files through USB to an iPhone?


I can also use stuff like Tasker on Android.


What? No, not even close.

- no way to use the phone without an account
- no way to install apps from outside the store
- no browsers other than Safari reskins

These are all fixed by the DMA, but it will take a lot of time for things to mature, however other issues persist

- no way to put apps on the bottom of the screen
- the FOSS scene on iOS basically doesn't exist, while on Android there is a whole app store for it (https://f-droid.org); this is a big point for me
- no way to duplicate apps
- no separate work profile
- limited file management
- no notification chat bubbles (a pretty good feature on Android 11/10?+)
- no advanced apps like local terminal emulators, virtual firewalls or virtual tracker blockers (partially because the FOSS community rightly doesn't care about iOS)
- non-encrypted iCloud backups (basically a backdoor into WhatsApp) or any important file medium
- CSAM scanning inbound

And many other issues. iOS is hardly making Android hard to choose; it's still a locked-down prison, just a bit nicer inside now.

Since I value my privacy and like FOSS, iOS is even more useless for me.


Extreme? This sounds like the way I have my computing environment configured by default (to the extent that I'm able to do so with browser extensions and whatnot).


Same. It's too bad general browsing is nearly unusable with JS turned off.


This isn't even turning off JS completely, merely certain JIT features.


> Most message attachment types other than images are blocked.

Who wants to bet that this reflects minimum requirements dictated for user experience, rather than reflecting what Apple are actually securing today ?

The correct model here, the one that would actually defeat these adversaries, is to start with what you can actually secure and expand from there, prioritising customer needs. This delivers security improvements for all customers, but it makes the calculus simple for Lockdown customers, whatever Lockdown allows will be OK.

Suppose today Apple has a working safe BMP reader, and a working safe WAV reader, but they're still using their ratty JPEG and MP3 implementations. As described, this feature says you can receive a JPEG attachment (which takes over your phone and results in your cousin who remains in the country being identified as a contact and imprisoned) but you can't listen to the WAV file an informant sent you because that's "dangerous"...


I find it absolutely hilarious that they've kept images in Messages when one of Pegasus's attack vectors was sending a PSD file disguised as a *.gif, which crashed the Messages parser.

Apple is overconfident in its ability.

https://arstechnica.com/information-technology/2021/09/apple...

People who need this already have a dumb phone; using this Lockdown Mode is an unnecessary gamble on their part.


Yeah, apple really should dumb down that parser to just “modern” jpg/png/webp for their entire application stack. bmps and gifs shouldn’t still be used. And photoshop is a bit proprietary for apple to be rendering their files within iMessage


This seems to mimic, or at least rival, Google's Advanced Protection Program which has been running for a few years to offer similar protections to Google/Android users.

My concern about enabling this would be that I'm unsure how much this puts barriers in place to prevent the owner of an account regaining access should it be stolen by a threat actor (i.e. could this backfire on the account owner?).

It's still unclear to me how much Apple really protects against (for example) sim swaps to take over an iCloud account - and the documentation around when they'll truly insist on having something like a Recovery Key if it's enabled is sparse. It almost reads as if the right amount of begging will socially engineer access to a locked iCloud account by a threat actor with the right personal information to hand, which if coupled with Lockdown mode, seems pretty dangerous to the true account holder.


This lockdown mode looks like what ought to be default security behavior.


It slightly degrades some experiences, so I see why it's disabled by default. Disabling JIT JavaScript is going to make web browsing more painful. And incoming friend requests are useful because it simplifies things when two people are adding each other to their phones - one sends a request and the other reciprocates.


> Disabling JIT JavaScript

With a bit of luck, this will cause site operators to reduce their usage of unnecessary JS, so maybe this has positive impacts :)


> It slightly degrades some experiences, so I see why it's disabled by default.

My sense is that the functionality to provide those experiences resulted in a decrease in user security and privacy when they were introduced -- and that those risks were widely-discussed and well-understood.

It's weird (although not unexpected) to see the reversal of them touted as a selling point.



Most exploits are related to dealing with multimedia contents. By protecting against these exploits the attack surface gets a lot smaller.


Since it has been memory-holed:

Apple is/was a part of PRISM (https://www.theguardian.com/world/2013/jun/06/us-tech-giants...)

"An Apple spokesman said it had "never heard" of Prism."


If you are "a target" and going to take measures of basically disabling everything on your iPhone, wouldn't it just make sense to get a burner dumb phone?

Hasn't this been happening for years (drug dealers, anonymous, etc..)?


Think more about journalists. You need Slack to talk to the rest of the team. You need WhatsApp to communicate with sources and locals in most of the world outside the US. Your iPhone is an important tool for your work in general; a dumb phone that can only make real phone calls and send SMS doesn't come particularly close.

Phone calls and sms are also completely unprotected as opposed to chat apps with e2e.


What then? Use SMS?


But then you’ll want lockdown mode (or something like it) on whatever device you use to browse the web.


Most of the features of this lockdown mode should be on by default.


Totally agree. I'm also concerned about the fine print, what Apple is not announcing - like, "Oh, we also updated our EULA to reflect that metadata from phones with 'lockdown mode' enabled will be forwarded to the FBI", something like that.


ESPECIALLY the disabling of JavaScript, because … malicious JavaScript.


This does not seem to disable JS altogether, only JS JIT compilation. IIUC, JS will still be executed, although via an interpreter (which is safer) rather than via compiled machine code (which might be used to exploit memory safety bugs such as type confusion, somewhat frequent on the JS side).


which in my cybersecurity book is considered a “miss”.


FYI, if you mean that it should disable JS completely then you can already do that in Settings -> Safari.


When reading through this list at each feature I can't help but go "why isn't this in regular iOS?"


Which is exactly why it's optional. Plenty of other people, myself included, look at that list and would not want them all or would like to pick and choose which subsets are locked down.


They should give you a list and the toggle should give you the option "SECURE" or "INSECURE" because that's basically what this is.


Hardened devices only work if it's an all or nothing proposition.


Yeah pick and choose makes sense for sure. Apple isn't exactly the king of choice unfortunately.


[Disclaimer: I worked in Apple Red Team]

What if this isn’t a good news for 99% of Apple users?

That’s obviously an amazing measure for the 1% high targets out there.

But what about the other 99%? Does that create an incentive for Apple to strengthen Lockdown Mode security to the detriment of the regular mode (should we call it Unsafe Mode)?

I’m afraid that this architecture will make it harder to prioritize security features or fixes for the 99% of users. Developers' bandwidth is limited; they can't fix all bugs. Hence if you have to choose between one bug impacting the 1% most important users (from a security standpoint) and one bug impacting the 99% others, which would you choose?

Would such an architecture have led to the emergence of Blastdoor[1] - which attempts to mitigate iMessage attachment exploits, but is now useless in Lockdown Mode?

My hope here is that by reducing attack surface, Lockdown mode will make exploits much easier to fix (as they’ll target a limited area), allowing to strengthen the system core while freeing bandwidth to implement longer term, Blastdoor like mitigations.

[1] https://googleprojectzero.blogspot.com/2021/01/a-look-at-ime...


What if there were a little device that acts like network firewall and router appliances, but the phone proxies all connectivity through it? Something to carry around that shows ingress and egress connections, calls and anything in between. You could set an allow or block list, detect cell-connection MITM attacks and spikes in traffic (to detect leaks). Mobile phones are like desktop computers and will always have security issues. It only makes sense to firewall them.


TLS and certificate pinning make this a problem. Technically certificates don't have to be pinned, but if they weren't, people would use this to defeat "growth & engagement" and block analytics, ads, etc. (or worse, reverse-engineer the API to make a third-party client), and we obviously can't have that.
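The pinning idea is roughly this: the client ships with a hash of the expected certificate (or its public key), so even a middlebox certificate signed by a trusted CA gets rejected. A minimal sketch, with hypothetical byte strings standing in for real DER-encoded certificates:

```python
import hashlib

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    # Pinning check: compare the hash of the certificate the server presented
    # against the hash baked into the client at build time. An interception
    # proxy presents a different certificate, so its hash won't match even
    # though the OS trust store would accept it.
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

legit = b"...legitimate server certificate DER..."   # placeholder bytes
pin = hashlib.sha256(legit).hexdigest()               # baked into the app

forged = b"...middlebox-forged certificate DER..."    # placeholder bytes
```

This is exactly why a user-controlled firewall proxy can't inspect pinned traffic: the app refuses any certificate other than the one it expects.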


Why not on the same device? Have a separate small simple SoC completely segregated from everything else, except shared battery, with 2 NICs and a physical switch to swap between using the firewall interface and the regular phone. Although this may make more sense for a regular computer plus router, with a cell phone there's multiple radios, not just a single simple IP connection...


The issue is that we would have to get device makers to buy into it, and also trust that they show us everything. Also, we wouldn't be able to retrofit existing devices. Most people don't like tinkering with things. A universal device small enough to fit in your pocket, with a nice little display or a USB connector to download data to a laptop and configure rules, is more desirable imo.


Like your own personal stingray


Had to look it up. I guess the question is how to make sure it can't be abused to capture data from random nearby phones. In that case we’d end up worse off.


First, lets talk in the foggy dreamland of this article.

I can't imagine the threats security researchers deal with every day. And their innovative solutions. Extracting live code from samples to inject in other malware. Wow, so cutting-edge. It's wonderful to talk about, no stress there, no drama. We don't want wear and tear on our machines.

Like the article states, we're spending millions and billions on these problems.

And lockdowns are an innovative approach. I've been thankful for device lockout in the field before. It's saved my bacon. Captures the favorite philosophy of strong regulatory control. Nice, has network effects for other political goals. Very cool.

Better than Kevorkianing or Bricking a machine in the field.

Oh God, Oh God, Oh God. Sorry for that narrative-scape shattering. Dementia is a serious issue. Ok, back to sanity.

Some really fucking smart people showed me a study on evolutionary computation and diversity in investigator-guided processes. Hand-edited synthetic organisms may be less evolutionarily successful than purely evolved ones.

Like my storytelling, right? It's a fucking hot mess that isn't exercising my audience's mind as much as a more diverse author population would.

Oh God, there I am breaking down story-wise. Back to stability. We have political goals like reduction of 99% of security threats. And the perfect is the enemy of the good, right?

I'm so sorry for my slips there, I know you lost time with loved ones and reading other comments talking about this in a more professional tone that captures the point.

In closing, I'd like to thank the sponsors who kept me fed for years.


> Wired connections with a computer or accessory are blocked when iPhone is locked.

Damn... if this was something that could be enabled by typing the pin in wrong, it would be the death of modern phone forensics. Actually, I would rather this be the default after a device is powered on... let me "restart in non-safe mode" when I need it.


> Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.

That's very cool actually. You can keep JS enabled but choose to make it run more slowly in exchange for better sandboxing


I originally understood this to mean JS was disabled entirely in safari when enabled unless a site is allowlisted. Does this mean the web will run JS “normally” but slower? Does the speed of modern phones mean a slower style of JS processing might be less discernible?


That's my interpretation. Modern JS engines have multiple tiers of optimization, which they apply in different ways based on how "hot" a piece of JavaScript is. JIT is the highest level of optimization, but it also means generating and executing native code on the fly, which I assume leaves the door open for worse exploits if there's a bug in the engine. This is in contrast with bytecode interpretation, which is slower but safer.
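The safety argument for interpretation can be seen in a toy stack-machine sketch: an interpreter only dispatches over fixed, precompiled handlers, so it never creates writable-then-executable memory at runtime, which is precisely what a JIT must do for speed (and what exploit chains love to abuse).

```python
# Minimal bytecode interpreter sketch (illustration, not a real JS engine).
# Each opcode maps to a fixed handler compiled ahead of time; no machine
# code is generated while the program runs.
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack.pop()

# (2 + 3) * 4
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
```

A JIT would instead translate that hot `program` into native instructions in an executable page; a type-confusion bug in that translation step can hand an attacker arbitrary native code execution, which is why Lockdown Mode turns it off.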


Could a security expert enlighten me: is Windows more secure today than macOS, if we purely take OS-level and hardware-level security measures and ignore subjective factors? (like marketshare, attractiveness of targets, etc.)

Windows has all sorts of buzzwordy-sounding security features: Microsoft Defender Application Guard (Hyper-V for untrusted websites & Office files), kernel virtualization-based security (VBS), Code Integrity Guard, Arbitrary Code Guard, Control Flow Guard, and Hardware-enforced Stack Protection.

It's extremely hard to compare the two on a deep technical level (beyond "modern OS's are safe, install updates, you'll be fine") without having deep security experience. Any professional insights?


There is no meaningful difference if you are a modestly attractive target. The prevailing level of security is such that a single technically competent individual with a year of time can completely breach the commercial IT systems of any Fortune 500 company in the world and steal essentially all of their internal documents and IP and materially disrupt their operations. That is the maximum level of security in commercial IT systems.

So, if you have nothing worth more than ~$1M and indefinite disruption of your systems is worth less than ~$1M, then there might be a meaningful difference. However, basically every business is beyond those levels, so quantifying the differences in a professional context is kind of like discussing whether a tshirt or a single piece of paper provides more protection against a gun.


This is great. Here in Australia, when you pass through the border, the goons can ask you for your phone, computer, devices etc. without a warrant. They’re not allowed to compel you to hand over passwords, PINs or have you unlock it for them (without a warrant) though, but apparently they’ll often imply that you have to, and if you don’t they can confiscate the devices for some time.

This mode sounds excellent, because all they can do without a warrant is try to attack it with a Cellebrite or GrayKey device, so having the extra protection from physical connections and other attacks sounds awesome.

I expect to always enable this while I’m doing international travel anywhere.


I was threatened with 10 years imprisonment if I did not hand over my passcode at SYD airport in July 2018. I asked to contact a lawyer and I was told I have no right to do so. I am an Australian citizen.


>Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.

This should be ON by default. It would force webdevs to write efficient websites.


They’d just work out how to write web apps entirely in CSS instead somehow.


Does this offer any protection after you are already pwned? Is the expectation that you have it permanently on if you are a high value target or do you turn it on temporarily before clicking on a link for example?


Don’t know enough about iOS to say for sure about persistence, but recent Pegasus (NSO Group spyware) versions don’t bother[1], instead repeatedly exploiting bugs starting with “features” like background Messages attachment parsing.

Those are the kind of threats Lockdown Mode finally acknowledges — targets (well IMO everyone) would need it permanently enabled.

Otherwise the temporary protection before clicking a link can be had today in other ways, like disabling Settings > Safari > Advanced > JavaScript.

[1] Lack of persistence likely an attempt at making it harder to analyze: https://www.amnesty.org/en/latest/research/2021/07/forensic-...


If you have already been pwned, the OS is compromised so it clearly is not able to retroactively undo that - any checkbox, option or whatever can just be turned into a no op that lies.


If you're already pwned to the point where they have kernel-level access and can bypass code signature enforcement, all bets are off. Even if lockdown mode interfered with their activity, at this point nothing prevents them from modifying the Settings app to not really enable lockdown mode even if you request it to.


If you're going to run a crippled-ass phone to protect yourself, because the regular phone is so fucking insecure, why even bother with a smartphone? They'll just find an exploit in something that the "security mode" hasn't disabled.


putting rich media like images, GIFs, video etc embedded inline in chat applications presents a huge attack surface.

i'm even suspicious that signal does it.

if you really want to design a secure messaging system it needs to handle text ONLY.


Text rendering is more complex than decoding a PNG.


This is mostly great news. Then you scroll down a bit and see this eye-opening 2nd part:

“Apple is also making a $10 million grant […] to the Dignity and Justice Fund established and advised by the Ford Foundation - a private foundation dedicated to advancing equity worldwide and designed to pool philanthropic resources to advance social justice globally.“

So Apple is releasing a great new hardened security mode in iOS, AND… they’re donating money to collectivist activism? What a bizarre combination. One step forward, two steps back.


This feature is really fantastic, and it re-affirms my commitment to using Apple devices over Android for security. The only superior alternative I could see would perhaps be something like Graphene. Today I already set up a profile locally via Configurator to ensure that my phone can't be hijacked by some local attacks; the work happening with Lockdown is even better, and I'll be enabling it as soon as it becomes available to me.


How do you reconcile that with their move into CSAM scanning?


Personally, one of the major things that led me to start investigating Graphene and other alternatives was the on-device scanning. Fortunately, it seems Apple has backed away from this position (at least for now), and unfortunately we live in a world where if you need a mobile device there's a dichotomy. Given that, I think sticking to Apple is still the best choice for me right now, and it aligns most strongly with my threat model.

I travel internationally a lot (or did before the pandemic) and my primary concern is when crossing borders. I always power off my devices to prevent warm-boot attacks, and take other precautions, this lockdown mode would be a big win in this case. Even with the other protections Graphene offers over regular AOSP, I am concerned that Android doesn't really provide meaningful ways to prevent attacks where the adversary has physical access.


I think this is a great feature and a huge step in the right direction.

However, will this comply with the new EU Digital Markets Act (DMA), which mandates general interoperability for messaging apps (for big players, with Apple and iMessage certainly being one of them)? Certainly, foreign services, or at least parts of their features and APIs, would have to be disabled in Lockdown Mode. (Will it help to any degree that Apple is drastically limiting their own service?)


Cool, is this like GrapheneOS just without being open and free?


> Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.

NSO Group's zero-click exploit for iPhones (FORCEDENTRY) involved images: specifically, a fake GIF that was actually a PDF containing a JBIG2 stream, whose decompressor was exploited to build a virtual computer out of boolean logic operations and run attacker code to escape the sandbox.

If that's the level of sophistication this is meant to guard against, then it seems like it should also block images.
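The "virtual computer out of logic operations" idea is real: once an exploit can evaluate basic boolean operations over attacker-controlled state, those operations compose into arbitrary computation. A toy illustration (a 1-bit full adder built from AND/OR/XOR; this is not the actual exploit code):

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    # Sum and carry expressed purely as boolean operations -- the same
    # primitives an image-decoder exploit can coerce out of corrupted state.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out
```

Chain 64 of these and you have a 64-bit adder; add comparison and conditional moves and you have enough to implement a small CPU, which is roughly what Project Zero's FORCEDENTRY analysis described.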


They should offer "US President mode". Didn't Obama have to have a special version of the Blackberry developed for him, while he was president?


Yeah, in which Twitter is also locked down.


I wonder if this mode would be helpful to protect myself if US border control forces me to unlock my phone so they can make a copy of all of my phone contents.


Can you be forced to unlock your phone at the border? I thought you couldn't. (I don't actually know.)

BTW bringing up the power off UI on iPhone (holding power and up buttons at the same time) disables FaceID/TouchID until a passcode is entered.


If you are a US Citizen or Permanent Resident, Border Patrol cannot prevent you from entering the United States. They can, however, detain you for up to 72 hours and confiscate the locked device if they have "reasonable suspicion". The confiscated property will be returned eventually.

https://www.cbp.gov/sites/default/files/documents/inspection...

If you are not a US citizen, refusal to unlock a phone and allow inspection, inclusive of allowing access to social media and corporate apps, will probably result in denied entry. They also have the right to detain you indefinitely until you unlock the phone if they have "reasonable suspicion", but that requires a court order within 72 hours.

Most foreign countries have similar rules in place for residents and non-residents.


They don't usually return the devices they steal, and most people travel with a total device value lower than the cost of an attorney and lawsuit to force the return.


They can search your phone at the US border. https://www.theverge.com/2021/2/10/22276183/us-appeals-court...


The sterile area between the gate and border control is treated like international waters/land, and IIUC the logic is that domestic laws don't apply there, so you can be compelled to do anything, free from constitutional protections. Not sure if that actually works though.


This is completely incorrect. Here's the actual law

https://www.cbp.gov/sites/default/files/documents/inspection...


Pressing it 5 times does the same (and starts an emergency call countdown if you have that enabled). Removing the SIM also locks it out.


You can also say 'hey siri, whose phone is this?'


You can be forced to unlock it with biometrics, but not a password/code.

They also get to steal it and keep it if they want.


It would be a good idea to enable this before going though any border controls. Doubly so for countries that require apps to be installed before entry/upon entry/after entry.

ArriveCAN (Canada), Mobile Passport Control (USA), WeChat (China), and other mandatory government apps would be perfect vectors to stage highly targeted attacks.


If someone has your unlocked phone, they can look at the screen.


I'm excited about this mode for traveling outside the US, where other governments seem to be backsliding against privacy much more quickly


Great, now give me the options to fully open it up.


Is the apple bounty program still terrible in terms of payout and length of time to approval?

I can’t see many people submitting bounty reports if it’s too much of a hassle or not worth the effort.

Since the apple ecosystem is mostly proprietary, it’s hard to gauge as individuals if this just provides a false sense of security or not against “state actors”.


Sounds like a plan to make iOS the default for highly-placed government employees. Maybe that's already the case, but I thought I remembered that Obama had to have 2 phones, and the "secure" one wasn't an iPhone. Anyone have any more knowledge about this?


I'm guessing it isn't, if only because this feature completely disables MDM (which you'd need in government or business to do things like remote wipes or passcode policies). It looks to be designed for people that are possible targets to use on their personal phone, which shouldn't have work data on it.

(Of course, they could make some new MDM policies to individually turn these features on. You can already block external devices with MDM, and you can completely disable FaceTime/iMessage/iCloud. It wouldn't be much of a jump to add the more granular protections this has.)


I think you’ve misread this announcement: it doesn’t appear that MDM is disabled. It merely looks like you cannot change MDM settings, including enrolling, while this feature is active.


At least at the start of the Obama Administration, he was known to be hooked on his Blackberry [0], and I know RIM did a lot of work to provide secured devices to government officials. I don't know what government officials are using since RIM went under though.

[0] https://www.nbcnews.com/id/wbna28780205


The secure one was a BlackBerry for a while. https://www.theverge.com/2016/6/11/11910306/obama-upgrades-f...


Computer security is notoriously difficult, but at the same time, none of this is magical, this is meticulous hard work, and with enough time, skills and money I don't see how you can't plug all the holes.

At least the remote attack surface does not seem to be that huge...


That's the thing, if you think your device is compromised, don't use it. This is dangerous as it's a bandage and most likely allows surveillance that's "pre-approved" or is carrier based, probably even baseband modem based.


I would say just use GrapheneOS, an actual security- and privacy-focused OS. And Qubes on PC.



Defense in depth is good. Apple is finally getting over their faith in their sandbox.


Apple is not stopping state-sponsored anything. They do not have the expertise, nor the willingness to invest enough to stop it. And they also turn over everything they can at a local law enforcement request, because they have to.


It's good I guess, but I will not convince myself that a button saying "Lockdown mode" will casually side-step the entire legal and surveillance machinery built up in the U.S.


If I could just firewall my phone like Little Snitch.

But apple doesn't allow this.


Firewalls like Little Snitch may not be enough against actors like NSO (that exploit unknown zero-days), tbh. The mechanisms to enhance protection do need to come from the vendor (Apple). This lockdown mode, for all its present shortcomings, is moving the needle in the right direction, imo.


with something like little snitch, you could just prevent or tune internet access on an app-by-app basis. including system apps.

Apple - in a self-serving way - does not provide good permissions for network access.

I think it should say "allow this app to access the internet?"

There is also deep linking, the ibeacon stuff, the find my, the wifi access point mapping, and more
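The per-app model described above could be sketched roughly like this (a hypothetical sketch; the bundle identifiers, rule names, and default policy are illustrative assumptions, not any real Little Snitch or iOS API):

```python
# Hypothetical per-app network rules in the spirit of Little Snitch.
# Bundle IDs and the default policy below are made-up examples.
RULES = {
    "com.example.analytics": "deny",   # block a chatty third-party app
    "com.example.mail": "allow",       # mail is expected to use the network
}
DEFAULT_POLICY = "ask"                 # prompt the user for unknown apps

def network_decision(bundle_id: str) -> str:
    """Return 'allow', 'deny', or 'ask' for an app's network request."""
    return RULES.get(bundle_id, DEFAULT_POLICY)
```

The interesting design choice is the default: "ask" gives the user visibility into every first connection, which is exactly what iOS currently lacks.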


Inflation, pollution, censorship, global warming...

Hey no, don't look at that, look over here instead. We're playing ratfuck with the abortion laws.

Magicians call that "misdirection".


I see they're running the reality distortion field at full power.

This is a load of bullshit and marketing hype. They are letting you turn off features for security reasons, i.e. what basically every OS has let you do, and what every half-competent IT department has been doing, for decades. In fact, iOS was an outlier in how unconfigurable it was, and with the pitiful MDM options not letting you turn off many of these features that are constant sources of vulnerabilities and social engineering.

Nothing that novel here other than the framing and cybersecurity marketing bullshit about Nation State Actors and "mercenaries."


Of course Apple is going to put a marketing spin on everything they do - that is a given. Does that somehow invalidate the work itself?

Why do you find it necessary to reframe the introduction of these features as a load of bullshit?

Are you arguing that these features are bad or not useful?

Or are you just saying that “it’s about time”? And if so, why not just focus on the part where Apple is doing a thing that needed to be done?

The undertones in your comment feel a bit unnecessary.


Because it's being made to sound like something it's not. The comments are full of people fawning over how innovative and groundbreaking this is. Just trying to offer a dose of bitter reality to bring people back down to earth.


To what end? What new insight is gained from such a reframing?

I personally don’t think the individual features are as interesting as the overall framing and the fact that Apple is publicly announcing their intentions. The feature set will doubtless change over time - such is the nature of any software endeavor - but starting that journey is the interesting part.

Getting stuck on “but it’s just xyz dumb feature…” or “but they should have done x long ago”, etc. just obscure the more interesting fact that they’re explicitly embarking on this path to begin with.


Have you looked at the release notes of Android and iOS for the last 10 years? More than half of the security issues have to do with multimedia!


Is there a reason why I wouldn’t use this as default?

I hate most iMessage functionality and I will be happy to get rid of it for security. The rest also seems reasonable?


ironically if this is successful we likely won't know how deep the hardening goes, unless security researchers who have legit access release something?

but i'd love to know the series of changes that goes into something like this if its not just hype, potentially unpicking a giant complex system from the bottom up to remove attack surface.. interesting challenge!


> Messages: ... Some features, like link previews, are disabled.

I've been wanting to disable link previews for YEARS!! Not for security, but to keep those corporate advertisements (aka previews) out of the conversations I have with my friends and family.

It feels super disingenuous when I type out an articulate, heartfelt, personal message to my loved one, character by character, anticipate their reaction reading it, and then hit send — only to find the URLs expanded 400 pixels into corporate advertisements designed by the bonehead SEO jerks who care about clickbaiting over content.


Will this be available to Chinese residents? Huge if so.


I think this is very huge for the infosec game, but I think the `unless the user excludes a trusted site from Lockdown Mode.` is still a bad take.


Very cool! I wonder if this, combined with some sandboxing for apps' unsafe code, could make a more secure OS than any previous mainstream ones.


If Apple co-ordinated with Signal, Slack, and Symphony to release lockdown compliant modes for their products, this mode could be my daily driver.


Apple cannot even in theory protect you from spyware, because Apple's OS and apps _are_ spyware - as Apple (routinely? occasionally?) collects your personal data for the US government's NSA and passes it to them (Snowden revelations: https://www.theguardian.com/world/interactive/2013/nov/01/sn...)


This might get downvoted but it's actually true. If you're logged into iCloud, even with all features disabled, things like your call history and email recipient history (regardless of whether you're using iCloud Mail) are uploaded for example.


I'm rather baffled by how all of that can just go over people's heads, and they go back to debating whether Apple is mindful enough of their security and privacy or not.


I would love to see a "child" lockdown mode. Maybe that would be a good option to make the iPhone/iPad safer for young children?


This sounds like parental controls?


What they think will happen: users activate Lockdown Mode to protect themselves.

What actually happens: criminals activate Lockdown Mode to evade law enforcement.


Lockdown mode is for preventing 0-days. Law enforcement does not burn 0-days on common criminals, they get a warrant and get into the device that way.


Congratulations Apple, you just invented BlackBerry.


Only yesterday I was wishing there was a Valet Mode for when I hand a phone over to Apple for servicing. Hopefully they will accept this...


You don't need to give them the PIN.


But how secure are iDevice peripherals, and RAM? I guess it's a start of a journey, but I don't see this does anything yet.


Peripherals are behind a DART.


Downside: if attackers can tell that you've enabled Lockdown Mode, then they know that you're likely a high-value target.


I can't wait for people to turn this on and file bug reports and followed by "but I didn't enable that!"


But should Apple be liable when they, or any other organization making such claims, inevitably fail to protect their users?

I think they should be.


How do you propose to do that without disincentivizing the addition of such features? Even NASA has software failures.


Everything else to the side, this is excellent marketing on the level of Tesla's "bioweapons filtering mode".


///// Re: Bounty

From press release, “Bounties are doubled for qualifying findings in Lockdown Mode, up to a maximum of $2,000,000 — the highest maximum bounty payout in the industry.“

Appears Apple is not aware there was a $10 million bounty [1] paid out; unless when they say “by industry” they mean phones, not bug bounties.

If Apple really believed it was secure, then even a $100 million bounty shouldn’t be a concern; 2 million, while clearly high, is no longer enough to pull in the best bounty hunters, in my opinion.

///// Re: Naming

Name conflicts with existing terms both Apple and consumers use. Naming should be unique so it’s possible to Google the unique name for this feature and only get valid search results.

///// Re: iCloud

While iMessage features are limited, it is neither blocked, nor is iCloud — and both are known to be vulnerable to nation state demands on Apple due to iCloud not being end-to-end encrypted.

///// Re: iCloud end-to-end encryption

If Apple was serious about the topic, they would have already rolled out end-to-end encryption for iCloud years ago.

///// Re: Targeting

If Apple is logging if this feature is on and sending it back to Apple, it will result in targeting from nation states even if this feature is "invincible" — which I have no reason to believe it is; basically, nation states demand the list of users subject to their jurisdiction.

///// Re: Off vs Locked

“Wired connections with a computer or accessory are blocked when iPhone is locked.” — Why is this not the default with an opt-in? Further, at the point you’re turning on this feature, when locking the phone it should explicitly tell the user of the risk of locking vs turning the phone off. Lastly, when you turn an iPhone off, it should really be off if set to this mode; if it is, and activity is detected, that's likely a good sign something is going on.

_______

[1] https://medium.com/immunefi/wormhole-uninitialized-proxy-bug...


The overlap of eth bug-bounty hunters and iOS bug-bounty hunters is 0.


You'd be surprised.


This seems rather extreme. I like it!


Can I turn these features on one by one by some other method? (self-managed MDM, or something else?)


Self-managed MDM is the way to go for most of them. I think the main one that can't be achieved thru MDM is the browser lockdown. MDM has a lot of other security policies available though.


Hey look, sys-admining your phone, but it's Apple style sys-admining, so it's ok.


I just ordered a pixel 6. Really hope Android/Google will have the same mode.


These two sound like good defaults for all iPhones.

> Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.

> Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.


What does it even mean to be a state-level actor? For me this is the same kind of bullshit/PR language that is used to sell so-called "military-grade" artefacts.

This is nonsense. Security breaches can be discovered and used by anyone with the right knowledge and skills. Geohot was not sponsored by the CIA or the FSB.


I think they're focusing on the notion of protecting against well-funded mercenary firms with the resources/time/ability/motivation to target specific individuals with specific exploits. I have a hard time believing that anyone would enable this Lockdown Mode _prior_ to being owned though.


> I have a hard time believing that anyone would enable this Lockdown Mode _prior_ to being owned though

I can imagine many use cases where they would e.g.

a journalist enabling this before working on an article that was critical of a foreign government. Or any government contractor, NGO, embassy worker etc.


> Security breaches can be discovered and used by anyone with the right knowledge and skills

That's often not enough.

You need a lot of resources and most importantly prosecutorial immunity.


Why is that not enough? What do you mean? Because it would take too much time?


State-level is a label for groups that have resources and persistence and perhaps the technical acumen that is available to states.


I know what the label is, but from a strictly technical point of view, a breach is a breach, an exploit is an exploit, this is not like AI research where you need server farms.

From my own experience and from the postmortems I have had access to, most of the time this a work of patience and skills.

In my opinion, this is a way of saying to the public that their devices are secure against everything but state actors, which is simply not true.


This definitively signals that Apple devices are no longer secure, terrible move.


If you thought they had no holes before, you were mistaken.


TBH even 2m bounty on lockdown mode bypass seems really low


Wow I guess apple was really peeved about Pegasus.


I was wondering when a “hardened” option would come.


Does lockdown mode prevent updates from Apple?


could always just not use a smart phone


So can I still be tracked using SS7?


You could carry your phone in one of these if you’re worried about being tracked: https://slnt.com/products/faraday-cage-sleeves-for-phones

But practically speaking your debit or credit card does a pretty good job keeping track of where you are if your phone isn’t transmitting


Well the last phone I got, it was directing me to AirBnB's with masonic ties who liked to tell stories of violence. I had my first masonic lesson of keeping my mouth shut or be killed when I was at primary school, but I don't care now, life is overrated!


What is Google / Android doing to protect us from NSO group?

Seems like a lot less than the care Apple is taking on behalf of vulnerable people.


Well, Google happens to have teams that work to help shine a light on what the NSO is doing.


It's good to shine the light, and the Android platform still remains wide open.


Citation needed. Android exploits are more expensive than iOS on Zerodium which hints otherwise.


I didn't know that, I've perhaps wrongly been under the assumption that Android is inherently less secure than iOS.

It certainly feels "cheap" by comparison, but admittedly this is a horrible and stupid way to assess anything.

Play store doesn't seem to give much care about privacy compared to iOS.


“Just when I needed you most.”


> Wired connections with a computer or accessory are blocked when iPhone is locked.

Android defaults to charging only.


The same is true on iOS (https://www.theverge.com/2018/7/10/17550316/apple-iphone-usb...). Lockdown mode just prevents you from enabling it.


> USB Restricted Mode prevents USB accessories that plug into the Lightning port from making data connections with an iPhone, iPad, or iPod Touch if your iOS device has been locked for over an hour.

Android asks every time for every device. There is no 1-hour grace period.


only $10m? hahaha they don't care


If Apple could somehow make phone and sms not useless due to spam that'd really save the average person. They must have the resources to throw at something like this. I'm not claiming to be an expert, I'm not saying I'm right, but phone spam is fucking awful.


This seems to be a problem mostly localized to some countries. Device manufacturers should not be fighting a rotten network, the networks should be fixed instead.


Yeah but... here we are. In the US at least, I don't see this ever being addressed at the root. Everything between the user and the phone service is at least somewhat malleable, what's the problem with at least trying in one of those places?


Talk to your politician, it's a solved issue in other countries.


> If Apple could somehow make phone and sms not useless due to spam

1) A full solution to this problem is going to depend on mobile carriers making changes. It isn't something which Apple can unilaterally fix.

2) This is completely irrelevant to the purpose of "Lockdown Mode". It's intended to protect high-risk users from certain sophisticated threats -- it isn't a feature which most users should use.


Surely that's the responsibility of the providers, though? Apple can improve the situation a bit, maybe, but you'd really need to get AT&T & co to crack down on it to have any chance of solving it for good.

I know that I've had approximately zero spam on my German number (that I've had for ~2.5 years) - I'm not sure why, whether I'm just lucky, or whether it's much more under control here. My UK number definitely had problems with spam, though. Maybe a couple of spam calls a week.


Nice, glad to hear it's at least reasonable elsewhere, It's very, very bad in the US, at least for my partner and I. We started getting unsolicited calls days after starting the house buying process because the credit reporting companies sell you off immediately. Very frustrating.


There are several redirection services that will pair your spam caller to a very chatty chatbot. Excellent way to make spammers pay.


Worst part of switching from Android (Pixel) to iPhone. It was shocking.


They already do this: report the message as junk, and the number will be flagged as junk and messages from it will be filtered to the junk view.


Phone spam as in text messages? Your email is a whole other thing


Yes indeed email is a whole other thing, that's why I didn't mention it :)


I do not know why anybody would believe any claim by Apple with respect to security without overwhelming empirical evidence supporting their claims. The default assumption in commercial software security, supported by literal decades of abject failure by every player, is that commercial software security is atrocious. To claim anything more than trivial security is an extraordinary claim and thus demands extraordinary evidence before being accepted.

Apple has demonstrated no such evidence. In fact, the opposite is the case. Despite decades of assurances that their systems provide meaningful security, every single year we see their security torn apart by individuals and small teams with budgets that do not even constitute rounding errors to a Fortune 500 company. There is exactly no reason to believe they have meaningfully superior technical expertise with respect to security relative to the default standard of the industry.

However, this should be no surprise to anyone as the security certifications that Apple advertises for iOS [1][2] are only “applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious.” [3][4]. I mean, look at [4]: the process used to certify their security is that their evaluators typed search terms into the internet and verified that every vulnerability that turned up was patched, that’s it. There is no requirement to even do an independent analysis that it protects against attackers with a basic attack potential; that is done at the next higher level of security that they could have chosen to certify against, but did not.

To be fair, Apple has historically demonstrated the ability to certify against AVA_VAN.3, which demonstrates resistance to attackers with an enhanced-basic attack potential, but they have failed every time they have ever attempted to certify against AVA_VAN.4, which demonstrates resistance to attackers with a moderate attack potential. It should be no wonder that they cannot protect against moderate attack potential threats such as individuals or small teams, let alone high attack potential threats such as large organized crime and nations.

If Apple wants their security claims to be taken seriously, they should start by demonstrating their ability to protect against moderate attack potential threats via the internationally recognized security certification process they already use and advertise. Until then, the only thing we should trust is what they certify they can do (protect against script kiddies), not what they have failed to ever achieve in an auditable manner (protect against moderately skilled attackers).

[1] https://support.apple.com/guide/sccc/security-certifications...

[2] https://www.niap-ccevs.org/Product/Compliant.cfm?PID=11146

[3] https://www.niap-ccevs.org/MMO/Product/st_vid11146-aar.pdf#p...

[4] https://www.commoncriteriaportal.org/files/ccfiles/CCPART3V3...


They would probably have a better chance if they applied the same budget to their security as their marketing.


Reminds me of a classic https://xkcd.com/538/.

For the vast majority of users the most realistic threat is simply being ordered to unlock their phone under the threat of force (from a criminal, a cop, a CBP agent, etc). This is way, way more likely than being attacked through an unknown JIT compiler vulnerability.

What would be really helpful is Apple implementing a way to have multiple iPhone profiles with plausible deniability (a la VeraCrypt) or some sort of compartmentalization (a la 1Password travel mode).

Of course that would mean people can start sharing their phones instead of buying one per person from Apple, so I'm not holding my breath.


Honestly, this is bad news, because it means Apple is no longer capable of offering both security and all features, but now needs to split them into groups, presumably because they need to keep up with (the clearly less secure) Android...


Security has never been "Secure or not" proposition, it's always a balance between convenience and safety against threats, threats that change depending on who you are, and who is targeting you.

Some features are (understandably) almost impossible to make very safe. Take PDF viewing for example: the entire thing is so huge that there are bound to be holes in any implementation, just like what NSO proved some time ago with the iMessage exploit.

I take this effort as something similar to the "Hardened Linux" effort. Just that it exists doesn't mean that Linux is "insecure", it just means that if you really need to, there are more steps you can take to make it even more secure. Just like what Apple is doing here.


If I could upvote you twice, I would.

Security is always a tradeoff and there is no single answer. A feature for one person is another person's hell.

An acquaintance just lost all their data because they had enabled "format on too many missed passcodes" and their kid was playing with their phone.. caused quite a few tears. On the other hand, that feature is invaluable to international travelers.


What a strange implementation of "format on too many missed passcodes". Apple (on iOS and watchOS) implements this, but after some number of failures the phone gets into progressively longer lockouts. So maybe after 3 failed attempts you have to wait 2 minutes, after the 4th 5 minutes, and before the final (formatting) attempt you have to wait something like 12 hours. This prevents the "kid playing with the phone" problem.
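An escalating lockout like that is easy to model. A rough sketch (the thresholds and delays below are assumptions for illustration, not Apple's actual schedule):

```python
# Rough model of an iOS-style escalating passcode lockout.
# (threshold of failed attempts, enforced wait in seconds) — assumed values.
LOCKOUT_SCHEDULE = [
    (3, 2 * 60),        # after 3 failures: wait 2 minutes
    (4, 5 * 60),        # after 4 failures: wait 5 minutes
    (7, 60 * 60),       # after 7 failures: wait 1 hour
    (9, 12 * 60 * 60),  # before the final (formatting) attempt: 12 hours
]

def lockout_seconds(failed_attempts: int) -> int:
    """Return the enforced wait before the next passcode attempt."""
    delay = 0
    for threshold, wait in LOCKOUT_SCHEDULE:
        if failed_attempts >= threshold:
            delay = wait  # keep the delay for the highest threshold passed
    return delay
```

The long tail is what defeats both the curious kid and brute force: a handful of playful guesses costs minutes, but reaching the wipe threshold takes many hours of deliberate waiting.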


No, this is a completely reasonable response.

Security by reducing attack surface is a standard, and sensible response.

What you are asking for is that Apple (or any company) be able to produce absolutely 100% bug free code, no matter the complexity or requirements. This feature is an acknowledgement that what you're asking for is an unreasonable demand for any company.

So Apple has looked at the attack surface present by default, and then provided an option to that trades off removing presumably low use features in exchange for removing large attack surface. That is a trade off: for example any modern phone would be vastly more secure if all it could do is make phone calls, and everything - the browser, apps, etc - were disabled. But that end of the spectrum results in an impractically restricted device, in reality there's a middle ground, but for high profile targets the trade off is closer to "just a phone" than it is for normal users.

An example is the RW^X region required to support JITting JS - the OS simply supporting such memory region at all was a huge addition of attack surface to the platform - prior to that every single executable page was protected by code signing, afterwards there was a region that by definition the OS could not verify, and it has been used by every attack since then. But disabling that simply disables the JIT, the JS interpreter runs, so the impact is only that some web content runs slower, but the functionality itself is still there.

Similar for messages: receiving JPEGs is super common, receiving OpenEXR or whatever probably isn't, so removing everything other than JPEG by default again removes attack surface without realistically impacting the usability of messages.
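The Messages example above boils down to an allow-list over attachment formats. A minimal sketch of the idea (the MIME types chosen here are assumptions, not Apple's actual Lockdown Mode list):

```python
# Sketch of an attachment allow-list for a lockdown-style mode.
# The allowed set is an illustrative assumption.
ALLOWED_IN_LOCKDOWN = {"image/jpeg", "image/png", "image/gif", "image/heic"}

def accept_attachment(mime_type: str, lockdown_enabled: bool) -> bool:
    """Decide whether an incoming attachment gets parsed at all."""
    if not lockdown_enabled:
        return True  # normal mode: every supported format is processed
    return mime_type in ALLOWED_IN_LOCKDOWN
```

The security win comes from what never runs: an exotic codec with a latent bug can't be exploited if its parser is never invoked on untrusted input.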


I see this as securing against "unknown unknowns". No software can ever be "100% bug free". If you can identify areas that are more likely to contain yet-undiscovered vulnerabilities and turn them off in advance, the device becomes more secure.


> Honestly, this is bad news, because it means Apple is no longer capable of offering both security and all features…

Absolutely not true.

There’s a difference between being secure while having all of the features, and being secure against a state-level attacker. The vast majority of users are quite secure while enjoying all of the features of their iPhones.

For those who are being targeted, potentially in a life or death situation, being able to send attachments in iMessage is trivial by comparison. Only a tiny percentage of iPhone users should ever have to enable this; it won’t impact the user experience of over 95% of iPhone users at all.


Security and convenience _can_ coexist, but you can't transition into a more secure world without breaking convenient, insecure stuff that already exists and users expect it to just work. Later they can ramp this up.


And yet this feels like it’s too little too late. If I’m likely to be the target of the kind of state-sponsored malware “lockdown mode” supposedly protects me from I shouldn’t have been using Apple products in the first place. Which begs the question: what are current security best practices to protect from state-level hostile actors?


The current best practice is to have already been using an Apple device, and this will enhance that.


Really? Not something like Tails or Qubes? Am I too paranoid? I’m genuinely interested in learning about this. What am I supposed to use these days when I’m working on a project that would make me a target for state-level actors?


Tails and Qubes are desktop operating systems. You can't run them on a smartphone.


I'm guessing it will run afoul of the EU regulations. At the bare minimum there should be a way for a level playing field - individual applications and third-party application providers should have the same access as Apple's apps!

* If Safari and Messages are allowed, then all other apps should be allowed and have complete access to the device even in lockdown mode.

* If Apple gets access to any traffic from the device in lockdown mode, then all other applications should have full access to advertising metrics and device data as well.

At that point it's probably not much of a lockdown, but Apple can't have all the fun can it?


Marketing vaporware. The attacks used by NSO and other groups were exploits. You have two choices: either do not use the device with any external connection and not be exploited, or use the device with external services and be exploited. You cannot have it both ways. The underlying software will have to be written in a safe programming language, along with significant scrutiny on its logic.

We will never have this, because we are ever decreasing the quality of software development. Apple could do something about this, but that is a politically and financially expensive move for no extra financial gain.

As always people will eat this up, and business will continue unaffected as usual.


If it reduces the attack surface by a lot, why do you call it vaporware?


So Apple is saying that their "Lockdown Mode" protects against "highly targeted cyberattacks from private companies developing state-sponsored mercenary spyware".

That's an interesting wording, because it claims to protect you against... nothing that matters. Notably, it doesn't protect you against:

- The police. Don't get me wrong, I am all for letting the police do its job fighting crime, even if it means hacking iPhones, but even if you got the police attention for a noble cause, Lockdown Mode won't save you, at least, it doesn't claim to.

- Foreign governments, as well as your own government. Notice how it mentions "private companies" specifically, as in, not public. And the cyberattacks themselves have to be performed by private companies, if the tools that these companies develop are used by government entities, it doesn't count.

- Cybercriminals, the kind who are after your money. They are not "private companies", and they are usually not state-sponsored.

- Terrorist organizations, mafias, drug cartels, etc... again, not "private companies", and while they may be backed by states, they typically work for themselves.

The technical aspects have value, and I think giving the user the choice of wearing a tinfoil hat is great, but the claim they are making is deceptively weak if you read carefully.


Cybercriminals are often state sponsored and do a bit of government hacking and their own hacking.

This is kind of a silly comment.

Making a phone harder to hack will help with foreign (and domestic) governments that buy exploits to target individuals as well as hackers trying to make money off of similar exploits.


I am not saying Apple did anything wrong with their solution, I think it is great, really.

I then thought: isn't that something that terrorists and pedophiles will love? If it is really that effective, I expect that soon enough we will see stories about Apple helping very very bad people who are after your children, and I don't think Apple wants to be associated with crime, and I was wondering what Apple's strategy would be.

And that's when I noticed that very specific claim "highly targeted cyberattacks from private companies developing state-sponsored mercenary spyware", what are the words "private" and "mercenary" doing here? They do nothing but reduce their claims?

I am not calling conspiracy here, and I really think what Apple did is great, but I suspect that specific wording is Apple being cautious, probably of potential association with crime.


The NSO group used links and attachments in iMessage. These protections would mitigate those attacks.


They used an "invisible 0-click exploit"; where you don't even actually receive a text message or need to click any links or attachments. https://citizenlab.ca/2020/12/the-great-ipwn-journalists-hac... Would Lockdown Mode prevent those?

And what about the SMS equivalent? https://www.firstpoint-mg.com/blog/step-by-step-silent-sms-a... Apparently the German authorities sent 440,000 "silent SMSs" for tracking purposes in 2010: https://www.heise.de/newsticker/meldung/Zoll-BKA-und-Verfass...


> They used an "invisible 0-click exploit"; where you don't even actually receive a text message or need to click any links or attachments.

AFAIK, yes, because Lockdown Mode disables any non-audited plugin code from running in response to the receipt of an iMessage message (which is what "disable formats other than images, link previews" et al really means under the covers.)


Cybercriminals and terrorist organizations mostly pay the private companies Apple refers to here to do their dirty work.



