I always wonder what it takes to find this kind of exploit. Are the programmers at NSO Group just the best in the world? Or are they incredibly lucky? Both? I’d love to know what a normal day at work is like for their engineers. Clock in, sit down at a… crazy-expensive hardware and software testing station? Crack open a brand-new iPhone and start probing away while referencing internet-sourced chip documentation and software manuals? What does it even look like?
The NSO Group is made up of ex-Mossad people who decided that working for the government does not pay as well as making money out of exploits, probably obtained at the highest levels of top-secret work.
So far, they have been tolerated by the Israeli government because they all went to the same schools, all did their armed-forces service together, and all know each other. This has given them a free pass so far. Privately, many of their ex-colleagues are very critical of their lack of ethics.
All this will change the day some of the NSO exploits are used against Israel, the same way some of the leaked NSA tools are now used in the wild.
NSO Group is ex-Unit 8200, which is military signals intelligence. So in American terms, it's the NSA, not the CIA. The distinction is important in a country with mandatory military service. You get a large number of people who go through, get trained, and then leave, because it never was a career. A number of them take their skills to the private sector.
Mossad, on the other hand, is a civilian intelligence service and I'm told there's a strong tradition that its members don't freelance their services after leaving.
"Most of this data is shared internally across the IDF (as well as sometimes externally, cf. 3.3 below) to the Unit’s relevant stakeholders, whether combat troops, decision-makers or other intelligence agencies such as Mossad. Or as Yair Cohen, who served 33 years in Unit 8200, the last five (2001–05) as its commander, put it, "90% of the intelligence material in Israel is coming from 8200 […] there isn't a major operation, from the Mossad or any intelligence security agency, that 8200 is not involved in"
>"...Mossad, on the other hand, is a civilian intelligence service and I'm told there's a strong tradition that its members don't freelance their services after leaving..."
Tradition is not what it used to be:
"Black Cube: The Bumbling Spies of the ‘Private Mossad’"
"...Despite some missteps, Black Cube “has to turn clients away because it cannot service all the demands,” said Mr. Halevy, a former head of the Mossad, an Israeli government intelligence agency. He said Black Cube has worked on 300 cases since being founded in 2010 by two former Israeli military intelligence officers, Dan Zorella and Avi Yanus..."
"Harvey Weinstein hired ex-Mossad agents to suppress allegations, report claims"
It's an important distinction. The fact that huge numbers of people rotate through the hacking side of 8200 (like the NSA, the vast majority of 8200's members don't work on that) is what drives the supply.
Intelligence services typically have less turnover. Though that is changing, particularly for NSA, where people leave to go to contractors.
Also, frankly, describing NSO as ex-Mossad just makes phone malware sound much more complicated than it is and much harder to stop. At the end of the day, it's software, written by people in much the same way any software is written. It just exploits mistakes other software devs made so that it can run.
"by two former Israeli military intelligence officers, Dan Zorella and Avi Yanus."
Emphasis on "military intelligence officers", i.e. not Mossad. This is like mixing up the CIA and the FBI: to an outsider they might appear the same, but that's not really the case.
"Ilan Mizrachi, a former deputy head of the Mossad, Israel’s intelligence agency, said that he sees nothing inherently wrong with former intelligence operatives working for civilian enterprises. “Some people I know went into journalism, some are consultants,” he said. “Among many other professions, some work for companies like Black Cube.”
Quote from the article:
"Despite some missteps, Black Cube “has to turn clients away because it cannot service all the demands,” said Mr. Halevy, a former head of the Mossad, an Israeli government intelligence agency..."
This is a myth. Russian systems suffer from malware just like everyone else's, and probably more, because it's easier for local criminals to target local companies. It might be true for a very tiny fraction of malware, but that's definitely the exception rather than the rule.
Of course if there are state-sponsored hackers (I'm not really aware if those exist, but I allow this possibility), they will target whatever their management points at. And with corruption it's pretty possible that some local business could be targeted as a part of some financial wars.
But the majority of hackers are just some guys with some IT knowledge and zero morals. They'll buy some exploits and tools on black markets, duct-tape them into something, and release it in the wild, waiting for profits (or the police). They'll rob banks or babushkas; they don't care.
It is not a myth for ransomware. There are many documented cases. It's essential to the survival of these groups; local cops are more likely to leave them alone if they leave local businesses alone.
> It's essential to the survival of these groups; local cops are more likely to leave them alone if they leave local businesses alone.
Which is a huge misconception outsiders have about this scene. They are Russian-speaking, not Russian, just like English-speaking gangs are not necessarily English. These groups may (and often do) consist of nationals of different ex-USSR countries, sometimes without even knowing each other personally. They might not even be a single group, just some individuals doing different parts of the scheme (including the "press releases" and "interviews" they sometimes do).
It has been this way since long before the ransomware fad. Russia, Ukraine, Kazakhstan, Belarus, and partially Lithuania had the world's top CC-theft gangs for a couple of decades, and they have always been of mixed origin. They mostly steal EU and US cards because it offers a better reward/risk ratio compared to the home countries, which are poor. But nothing stopped them from stealing CCs in Russia or Ukraine either, certainly not some mythical cops (who couldn't care less in reality); in fact, skimmers are widespread in those countries as well.
Ransomware groups are the same as CC thieves; it's just a different scheme. They probably avoid home countries for the same reason (same risk, less reward). The state can't possibly have much influence on them; the idea just triggers the bullshit detector for anyone who lives in any former Soviet republic and knows about this stuff at least superficially.
It's specifically because Russian prosecutors couldn't care less if there are no Russian victims. By doing this they know there is next to zero chance of criminal proceedings.
> So far, they have been tolerated by the Israeli government
Why wouldn't the Israeli government tolerate them? If anything, doesn't their government benefit from groups like this?
They get access to spy tools that they didn't have to use taxpayer money to fund, and because it's former members of their own intelligence working on it, they have some semblance of influence over how it's used.
That's my understanding too. Funding is not really an issue (8200 has one of the biggest budgets in the army), but they are bound by law and regulations. NSO, on the other hand, can cross the lines and keep Israel uninvolved.
Not really. Israel likely openly shares secrets with the Five Eyes countries, so it gets a sort of free pass from geopolitical pressures. It's a mutually beneficial exchange. In addition to the Mossad comment: the Israeli students who work for these groups take an entrance exam at 17, and that recommends them for what's known as Unit 8200, which is a feeder network/NSA clone.
Israel is only peripherally and reluctantly involved in the confrontations with Russia and China at the heart of 5E interests, and it neither trusts nor is trusted by 5E countries to the level of sharing intelligence sources or tools except in specific, transactional interactions.
American and Israeli politicians like to talk about Israel being America's "closest ally", but those are just pretty words. Israel's real selling point to the US is that it's a low-maintenance ally.
The United States has thousands of troops deployed across the Gulf to defend its allies there. It has another several thousand as a "tripwire" in South Korea.
US troops have died in combat defending Saudi Arabia and Kuwait. They've been killed by militants directly supported by Pakistani intelligence services.
How exactly is Israel "high maintenance" by those standards?
If you want to define away sending hundreds of thousands of troops to defend Saudi Arabia, using those troops to free Kuwait from foreign invasion, and then keeping those troops in both countries (where they've taken everything from car bombings to shooting attacks) as defending her own interests rather than those states, then you can define away any action taken on behalf of an ally that way. To take this to an extreme: by that definition, US defense of South Korea isn't "aid to an ally".
There is a legitimate argument that US aid to Israel isn't well thought out rationally, but the only reason that's plausible is that a few billion a year and low-cost diplomatic statements/votes aren't a big enough deal for the Serious National Security Considerations to come into play.
I think the hostility encountered by the US in the Middle East is entirely a function of protecting her own interests in a complicated and contested region. Maybe necessary, definitely inevitable.
The human suffering on all sides is a cost of doing business. This is deemed acceptable by the US govt and not contested by the hosting countries for various bad reasons. It is nothing more special than that. There is no grand righteous moral justification, but that is a useful fiction.
I apologize if this offends you, and I don't share it to be disrespectful -- just to explain my perspective.
I mean, sure. The moral question is important! But I was starting from a thread of people who didn't understand the real-life character of the Israeli-American relationship.
If you're trying to describe the actual actions of the parties involved, morality is not a useful analytical or predictive tool; that comes into play when you yourself try to act.
It gives Israel military aid on the order of $3-4B per year. On US budget orders of magnitude that's peanuts, and comes with none of the US troop or naval commitment of e.g. the Saudi or Korean alliances.
> All this will change, the day some of the NSO exploits will be used against Israel, the same way some of the NSA leaked tools are now used in the wild.
Yes. The bipartisan USA Freedom Act limited several aspects of the NSA's dragnet [1]. Amendments weakening the bill were defeated [2]. Less materially, a documentation requirement for § 702 searches of U.S. persons was added in 2018 [3].
I’m skeptical the NSA doesn’t just ignore or creatively interpret laws it doesn’t like, given their past history and the consequences for their misbehavior.
I mean, when the CIA got busted a few years ago not only spying on Congress but also lying about spying on Congress, they were told “don’t do that again, please.”
It's mind boggling Clapper wasn't crucified for this.
This sort of thing keeps happening and some sketchy outsider may get elected with catch phrases like "Drain the swamp". Oh wait...
It’s also the Mossad/Israeli government realizing that their capabilities and interests can be advanced by having these hacker-mercenary services for sale.
The high-tech industry in Israel is not that big. If you look at the companies that make COTS microwave and millimeter-wave telecommunications equipment, they're not too different from the other .IL companies that make advanced radar systems, jammers, and avionics for aircraft.
I imagine it's similar for black/grey-hat software development.
Look at the exploits Google's Project Zero find for a less clandestine example. No doubt they employ clever people but you don't have to be superhuman to find vulnerabilities in code. Part of it is paying people to sit down and work on it fulltime.
"This has been the longest solo exploitation project I've ever worked on, taking around half a year. But it's important to emphasize up front that the teams and companies supplying the global trade in cyberweapons like this one aren't typically just individuals working alone. They're well-resourced and focused teams of collaborating experts, each with their own specialization. They aren't starting with absolutely no clue how bluetooth or wifi work. They also potentially have access to information and hardware I simply don't have, like development devices, special cables, leaked source code, symbols files and so on."
Yep, Apple themselves will find exploits, white-hat hackers will find exploits, Project Zero or Microsoft teams will find exploits, and so will NSO or other blackhats. It is a mix of luck, skill and putting in the time. NSO has successfully monetized their exploits, allowing them to then invest the money back into hiring more people, which increases the luck/time put into it.
Saw the thread title & clicked through to post exactly the same :)
It's a great set of episodes. This is without a doubt my favourite podcast. 2nd favourite being Knowledge Fight, which debunks Alex Jones and the nonsense that he spews on a daily basis.
They probably hunt exploits like that, but what is quite likely is that they have access to stolen Apple source code and scour it for type overruns like the one in CoreGraphics that is the cause of this exploit. I would estimate that the majority of exploits are the result of source code theft, leaks of potential vulnerabilities from people who have access to the source code, and social engineering. There isn't anything particularly special about a "Mossad"-trained or "NSA"-trained hacker. They are engineers like many of us and prefer the path of least resistance. Trying to brute-force buffer overruns without source code access is tedious. Why go to all the effort of black-boxing exploits when you can take advantage of source code analysis?
I mentioned in another post about why people would leak to the press, when you most likely will get caught and fired. Leakers of a different caliber will leak source code to governments and companies like NSO and have much less likelihood of being caught and much higher remuneration.
You estimate wrong. I've been in infosec for over a decade. We look at binaries. It's not that hard. In fact, it's often easier, since type conversion errors are often a lot more apparent in a disassembly, where you can see exactly what operations are being performed without having to know exactly what the language rules around signedness and integer promotion are, and without having to follow through complicated type hierarchies. Similarly, a good optimizer will strip away layers of software abstraction and make what's actually happening more evident.
There is value in source audits, but you're wrong that exploits come out of stolen source. That's exceedingly rare, and usually quickly publicly leaked when it happens.
> Similarly, a good optimizer will strip away layers of software abstraction and make what's actually happening more evident.
I can attest to this; I've found it's frequently far more satisfying to debug at -O3 than -O0. At -O3, the disassembly really lays bare the invalid assumptions that were relied upon.
I respect your expertise and agree that good tools can help find potential vulnerabilities.
You aren't the first person to say that exploits created as a result from source code theft are rare and the theft is quickly publicly leaked when it happens. Why do you think this? I would think that unethical players like NSO Group would have even more motivation to ensure the use of stolen source code is never revealed.
Because I've been doing this for years and I know how we find exploits; we don't need source. Why would NSO need it?
NSO isn't an "unethical" player, they are "ethical" within their own twisted ethics (that most of us don't agree with). They aren't a spy organization outside the law, they're a company building tools for (supposedly) law enforcement. Being caught doing something blatantly illegal like using stolen source code would be the end of them. They can't afford that risk. They have absolutely no need to use source code. There are zillions of binary-only techniques for finding exploitable bugs (e.g. fuzzing). Source code just isn't nearly as useful as you think it is.
If you want a practical example: just a few weeks ago I got ahold of a peculiar, wholly undocumented embedded device (can't even find teardowns on the Internet, no public firmware downloads, etc) and within one day I had a remote root exploit working - this wasn't using an existing CVE in a library, this was a bespoke bug in this device's firmware, and the exploitation involved reverse engineering two authentication token algorithms and a custom binary communications protocol. No source code. Obviously this isn't iOS, which is quite bit more hardened, but that should give you an idea of just how easy it is to find exploitable bugs with just something like Ghidra, if you know what you're doing (I was: I was looking specifically for a kind of bug likely to exist, to narrow down the possibilities of where it might be present, and eventually found a suspicious point of attack surface that indeed turned out to be vulnerable; then it was just a matter of reverse engineering enough of the protocol and token requirements of that code to be able to actually trigger it remotely).
I was actually kind of annoyed it took as long as a couple hours to find it (once I had a decent understanding of the rest of the system); I was expecting even less, but it turned out they did a better job than I expected avoiding some of the classic mistakes - but not a good enough one :).
> I would estimate that the majority of exploits are the result of source code theft, leaks of potential vulnerabilities from people who have access to the source code and social engineering.
No. Some Apple source code has publicly leaked (iBoot) but stealing this kind of stuff is bound to leak. And reversing binaries for vulnerabilities is not that much harder.
They recruit people who were trained to find exploits, it’s less about having the best programmers and more about having people with a specific set of learned skills and dedicating them to this task.
I would be surprised if their core iOS research team is much more than 10 or so people at any given time.
They also probably use brokers and buy at least some of the exploits they use from freelancers; if they offer ~7 figures for a zero-click exploit, a lot of freelancers will be working on this too.
It’s just like any bug bounty program, internally you run a small and dedicated team and externally you pay enough to entice freelancers to spend their free time on your systems to scale it further.
It takes IDA Pro, some low level asm/C++/Python programming skills and a lot of hours.
Reverse engineering is not that complicated, however getting some results is difficult and time consuming.
In that example it's basically looking at how some libraries parse input; that's it. Since everything in those phones is C/C++, nothing is "safe".
It's the same skills you need to crack games, cheat in online games etc ...
It would be quite difficult if you can't get access to the binaries that you have to put into IDA (or, well, Ghidra, for that matter, but IDA Pro is probably better).
"Are the programmers at NSO group just the best in the world?"
The parent comment seems to imply that someone who can find programmer mistakes is a better programmer than one who actually writes software for the public. If that's true, then wouldn't it be reasonable to prefer to use messaging software written by NSO instead of Apple? Why don't "security researchers" write the software we use instead of "software engineers"?^1 Which group would be more likely to have "the best programmers in the world", who would be the least likely to make mistakes? Honest question. I'm not trolling. I think about this question all the time.
1. Some of the programs I use and rely on everyday, even more than something like "iMessage", were written by people who claim to work in "security" or "research" (or even teaching math to university students) not "engineering". I have no complaints about these programs. Yet I have plenty of complaints about the software foisted upon us by Big Tech.
It’s just a matter of the two groups having different skills. One group writes for the general case while the other specialises in corner cases.
The latter looks really impressive when it’s done well, but it’d be silly to expect someone with deep security knowledge to sit down and build a spreadsheet manager from scratch. The two skill sets are just different. There is no “best”.
The hard part is not necessarily finding the programming mistake so much as figuring out a way to reliably exploit it. Back in the day, before ASLR and other mitigations, it was really straightforward, but modern OSs have much more sophisticated countermeasures to prevent buffer overflows and use-after-free bugs from enabling RCE.
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
Exploit development is a skill like any other. Instead of learning things like software design patterns, distributed systems, software reliability, etc you would have spent time learning about memory layouts, OS designs, mitigation techniques, decompilers, etc.
A lot of the time it is just poring over code looking for bugs that have already been found in other locations in the code.
For example, take a use-after-free bug. You can statically analyze disassembled code to find places where this might be happening, and then figure out how to exploit that instance of the bug.
If you have an organization that can legally hire people, pay them a stable salary and legally sell exploits to all sorts of people around the world you end up with NSO.
NSA finds exploits for their own mission, and Google Project Zero researches vulnerabilities to [per their claim] ensure the internet stays a secure platform, but neither of them sells exploits for profit like NSO.
So, no, they're not the only "genius"es out there. They just are less ethical about it.
These are security teams doing capture-the-flag competitions, you can literally walk up to them at in-person events and say hi if you'd like. There's nothing illegal going on here.
I think it's more that the possibility space for exploits is so large that a dedicated force of highly creative reverse-engineers is all you need to dig them up.
From what I've heard it can be almost trivial to find them if you know what to look for. But it seems that very few people know exactly where to look, and fewer still understand how to interpret the results.
https://www.youtube.com/watch?v=zyHI2Ht3OAI Jiska usually finds a couple of remote exploits a year just by looking at a new component/subsystem. It's all a dumpster fire underneath :(
Zerodium will pay up to $2,500,000 for no-click iPhone/Android exploits [1]. I'm sure they'd only pay that much if they were highly confident they have clients who'd pay enough to make the risk and investment worth it.
Come on, since jailbreaks were discovered (checkm8 being the king of them) you can run pretty much anything on the iPhone itself, including automated tests, fuzzing, debugging and crash-dump analysis. Breaking is always easier than building.
iMessage has been plagued with such bugs since 2010; the question is how it has not yet been rebuilt to a decent quality. Security measures like BlastDoor or ASLR are largely irrelevant, as they're mostly security theater that just requires an extra step to bypass.
It's not too esoteric, fortunately. The short explanation is that they are part of the Israeli government, as with all tech companies in that territory, so it gives certain material advantages to its preferred companies, just like how the USA does with offense contractors like Northrop.
Basically, they are propped up by their gov, and that is the major problem.
> I always wonder what it takes to find this kind of exploit.
A lot of knowledge about the target system's internals (comes with experience) and probably a lot of investment in fuzzing infrastructure or A LOT of time reverse engineering and reviewing manually. Finding bugs in closed source software by hand is incredibly slow and painful.
The few most recent episodes of the "Darknet Diaries" podcast are relevant, including interviews with Citizen Lab, descriptions of how NSO works, Black Cube, and the market for buying exploits in Argentina.
As someone who has some familiarity with the people and processes, this response seems extremely off to me.
> Selection starts from age of 4
Care to share your sources for that? As far as I know, most are self-taught and get some further training in the military.
> Boring.
It might be boring to some and might be extremely interesting for others. People who like solving puzzles and facing hard challenges usually like it. Of course, if your passion is building you wouldn't like it as you don't "build" something new.
> Usually a group of introverted young kids that look at their own shoes while talking to you, led by an extroverted young kid, that looks at your shoes while talking to you.
Have you met these people at all? Because it definitely sounds like you haven't and you just describe the typecast some movie would use.
My children attended/graduated from/served in kindergarten/school/the army in Israel, and I saw the selection process as a parent.
My wife was a school teacher in Israel. She described to me some of the evaluation metrics she was supposed to submit every half a year over each and every pupil she had.
I have lots of friends who are ex-8200 (high levels of hightech are surprisingly full of them actually) and this is the first time I hear about that. If you mean that selection that happens at 17yo is based on grades and teachers evaluations since kindergarten - that might be, but it sounds different than "selection starts at 4yo" which implies that 4yo kids are selected and followed all their life.
1) At the age of 4, all the parents were gathered to meet the kindergarten personnel. They explained that the kids would play games all year. Parents were separated into groups and given logical puzzles to solve. Results were noted.
For the next two years the children played games with changing rules, to negate natural ability for any specific game and to select for the ability to find the best strategy within the current constraints.
At the same time each parent is given a day to present his/her profession. Results are noted.
Results were passed to school class selection committee.
2) According to the results in kindergarten, kids are grouped into schools. Some are given the opportunity to participate in electrical engineering or robotics activities (my daughter was Top 5 in an Israeli competition for 6-9 year olds, with a reduced team).
3) By the end of the second year, some of the parents are notified that there will be an examination. The test is analogous to an IQ test (math, language, general knowledge), graded on a curve for the municipality. The top 8% are invited one day a week for additional activities. The top 2% are invited to special schools with a much more intensive program. My daughter made it to the top 8%. The activities are: decision making, finding solutions within constraints, leading groups of people to solve bigger problems.
4) By the end of elementary school, depending on previous results, kids get access to the full math program (as opposed to reduced arithmetic). Additional activities include software and electrical engineering, robotics, chemistry, physics and so on. Parents and kids that didn't make it to the top 8% in previous years are not aware of these activities (invitations are sent personally).
5) At the age of 15, kids pass an initial evaluation by the IDF. Good grades in high school will guarantee the initial evaluation is upheld; bad grades will negatively impact the chances.
6) By the end of high school, the whole history and psychological profile are passed to the IDF for final evaluation.
> What does this have to do with the military?
In Israel everything has everything to do with military.
One person's boring is another's career culmination. Breaking system security often consists of dead end after dead end, and even if you get a lucky break, you may hit another dead end after that. Finding an exploit often isn't enough these days, they need to be chained together to actually get somewhere interesting. Personally, it's very unrewarding (aka boring, imho) work most of the time because you don't find anything a lot of the time. (The high off of finding something is something else tho, lemme tell you.) If you're interested in the sort of work involved, http://microcorruption.com is a good CTF to start out on.
> Are the programmers at NSO group just the best in the world?
Most people who are good at this are working for national security orgs, blue team in the private sector, or cash focused criminals. This is the relatively small group of people who are comfortable selling tools to help dictators hack journalists up with saws.
I recently learned of this group through the Darknet Diaries podcast. The host does a pretty good job of covering the NSO Group in episodes 99 and 100.
I heavily recommend reading “This Is How They Tell Me the World Ends”, written by one of the guests he had in episode 98, Nicole Perlroth (an episode which also touched a little on NSO). She’s the NY Times cybersecurity reporter. A lot of the book focuses on NSO, among others.
The noteworthy angle the podcast covers is that NSO is very likely indirectly trying to dig up dirt on Citizen Lab people (the same people the post above is from), as they regularly discover NSO's exploits and cost them money. As Jack discusses at the end, this puts NSO Group into a whole other category if the above is indeed true.
That's exactly what it is. These companies buy, research and stockpile exploits, and keep a few always at ready for when the currently deployed ones get burned. All exploits have a shelf life, and the more widely one is used, the more likely it is to get caught.
Because let's not forget: NSO and their ilk are not in the business of developing exploits. That's just their raw material. They are in the business of selling weapons-grade espionage and surveillance capabilities.
This episode just came out last week, and this is the second time NSO has made news since it aired (along with Germany being a confirmed client.) Surprisingly apropos, but I imagine Jack's disappointed the big news makes it just after his episode's release on the subject.
Someone remind me why Germany needs to be installing Israeli spyware onto citizens' phones? We know this software's only purpose is to track down wrongthink and then murder dissidents.
Massive blow to the integrity of European telecoms.
Can you stop posting this bait in every single thread about the NSO? It's really annoying that you repeatedly drag people into shallow semantic arguments for dumb (nationalistic?) reasons: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Frankly, that is rich coming from you considering that you do this often enough that I have several of these specifically directed at you: https://news.ycombinator.com/item?id=25492587. Posting flamebait and then editing it to make the people who respond to you look stupid is against the guidelines. Posting "corrections" or "gotchas" every time a topic comes up is not striving for accuracy, it's being purposefully misleading to violate the guidelines. I am sick of you pretending each time that you aren't seeing the many people who tell you you're wrong or that you should stop. Until now I had held out hope that you were going to stop at some point, especially considering your productive contributions elsewhere, but I think I've given up now.
"The Guardian reported this year that hundreds of thousands of euros of Yana Peel’s legal bills were expensed to the NSO Group by her husband – another move that apparently angered his partners.
Stephen Peel’s lawyers said at that time that the “manner” in which the legal fees were paid had been approved by Kowski and Lueken, and he strongly disputed the suggestion that the payment of the expense claims was a source of disagreement between the partners.
Peel, Lueken and Kowski are all now involved in a legal dispute over the future ownership of the firm they created."
If you're interested in infosec/appsec, Darknet Diaries is a great place to get started. The host packages up stories in a well-put-together way, has no qualms about breaking to explain a concept or term, and does it all within an hour.
It is increasingly bizarre, in my opinion, that this company (and others like Toka) can run what amount to active terrorist operations when anyone smaller doing some of the same hacks would be in prison for a very long time.
People have lost their lives due to these pariahs!
Israel already has a massive PR problem with other countries; it would do them well to rein in these offensive front arms of their government/'companies.'
Citizen Lab is really a great thing for civilization. There are not enough altruistic organizations.
The basic issue is that every nation is actively buying and using zero-days and doesn't want to stop. And companies like NSO aren't really (so they say, at least) hacking anybody; they just develop and license hacking tools to governments to use for "lawful" law enforcement purposes. So nobody wants to ban the zero-day market, because every country is a huge buyer of zero-days itself, and it is hard to ban selling zero-days to sovereign governments who are using them in accordance with their own laws (even if the regimes in question are terrible and using them to violate their citizens' basic human rights). After all, it would be a bit awkward for the US to demand that the NSO Group stop selling its hacking tools to Saudi Arabia while we have a multi-billion-dollar defense industry selling the Saudis all sorts of advanced weaponry.
> Israel already has a massive PR issue with other countries,
But for these middle eastern countries Israel selling them exploits which allow them to spy on dissidents may actually improve relations by helping out regimes which would otherwise be sworn enemies of theirs…
It just makes me so uncomfortable that these things keep happening. We always find out about these things eventually but what percentage of the time are our devices vulnerable? Isn’t it close to 100% of the time that our desktops and mobile devices have significant security vulnerabilities?
The way I describe it to friends and family is that there are basically two levels of protection:
- Protecting yourself from run-of-the-mill malware that is looking to make money off of you. You can do this pretty effectively by always updating your software as soon as you can and avoiding sketchy and unnecessary apps and websites.
- Protecting yourself from an attack by a nation-state-level agency. I don't think there is any way to be safe from this, and people who are targeted like this need to use protections that go well beyond the choice of cell phone or chat app.
Basically, you’re either dealing with Mossad or not-Mossad. If your adversary is not-Mossad, then you’ll probably be fine if you pick a good password and don’t respond to emails from ChEaPestPAiNPi11s@ virus-basket.biz.ru. If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff
at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them. In summary, https:// and two dollars will get you a bus ticket to nowhere. Also, SANTA CLAUS ISN’T REAL. When it rains, it pours.
I think this understates the threat of privatized hacking tools. Governments that can barely tie their shoelaces now have access to capabilities that only a few heavy hitters used to have. One example: In Mexico NSO software was used to target anti-obesity activists who were pushing for less soda pop consumption.
The funny thing is that despite all of this high end, super secret, extremely sophisticated technology used against them, those activists won in the end.
> Protecting yourself from an attack by a nation state level agency.
My personal data was hacked by a nation-state level agency. The only way I could’ve prevented that is by not working in a national security position for that country’s geopolitical rival.
Now the only thing I can reasonably do is avoid ever stepping foot in that country lest they detain me for “extra questioning.”
Eh, thanks but don’t feel bad for me. There’s hundreds of other countries I can visit. I feel bad for the dissidents who are targeted within their own country and have no hope to leave.
This is sort of in the middle. NSO Group's exploits are surely expensive, but they are also not pinpointed. The states buying these exploits aren't spending the unlimited resources at their disposal to do the exploitation; it just costs them cash. This is one of the things that likely promotes proliferation of this stuff, since it is so easy to pick another target.
So I do think there is a level between these two where you can be defended against nation states that will use COTS-equivalent exploits against you even if you won't resist an active attempt by a full team targeting you very specifically.
But doing this is hard as hell in the modern world, because so very much of our devices' attack surface is riddled with memory errors.
“Nation state” is a well-defined term in the political sciences, and we misuse it here on HN all the time. To quote Wikipedia:
“A nation state is a political unit where the state and nation are congruent. It is a more precise concept than "country", since a country does not need to have a predominant ethnic group.”
Nation-state is often used in a different sense to distinguish the participants in the Westphalian system of sovereignty from other entities that might be labelled nations and/or states; this use derives in part from the fact that the Westphalian system is itself considered the turning point to nation-states (in the sense the parent describes) as a general norm, and that the participants in that system are generally also nation-states in that primary sense. (While “state” alone is often used for this where context makes it clear that this sense of “state” is intended, there are lots of other uses of “state”—particularly for subordinate units of certain Westphalian sovereigns—which can create ambiguity, and “Westphalian sovereign” is a lot more cumbersome than “nation-state”.)
But the Westphalian system explicitly emphasizes the importance of the boundaries of the state, not the size of what lies within them. The HN usage tends to imply that “nation state” is something particularly impressive. But “an attack by a San Marino-level agency” doesn’t convey that same level of impressiveness.
Yeah, in security, “nation-state level actor” is used to mean “the most capable category of attackers, most (all?) of whom are particularly powerful nation-states [0]”, not “attacker at the level of at least the least-capable nation-state”.
Russia is 81% ethnic Russian, per Wikipedia. I think that's close enough to qualify for "nation and state are congruent".
Sure, it might make more sense to define this as "state-level agency", but that would confuse things for Americans. My internet security threat model ignores the state agencies of Montana just as much as yours ignores those of San Marino.
Well, perhaps the original poster was using it accurately.
In my experience, the common HN usage really translates to “country with a big military budget”, which is not at all what the term means.
Neither the US nor Russia is a nation state. China and San Marino are both nation states. I’m guessing the poster meant “countries like the US, Russia and China”, and not “countries like China and San Marino.”
Honestly I think they just mean "state". Yes, some states have more resources than others, but the ones without a lot of resources generally aren't engaging in cyber attacks, and "state" as a general category is good enough summary.
I think people say "nation state" in part just because it flows better rhythmically, and in part because of that whole "westphalian" thing; and because the word "state" has other confusing meanings (including in CS, state as in 'state machine'; and the 50 USA states).
But really, on HN, when people talk about "threat actors", they mostly just mean "state-level". (See, I had to add "-level" to make it rhythmically like 'nation state' again; the one-syllable 'state' is just too short, it plops into your sentence and ruins it.)
[Hey, why is it called the United Nations instead of the United States anyway? Oops, cause there already is a United States. But the UN is clearly an organization of States not Nations. But the things are conflated and confused generally in European nationalist ideologies of the 18th-20th centuries, that have affected our vocabulary and concepts for these things, it's not just HN. "Nation" is often used as a synonym for "State", so "nation state" ends up just kind of doubling down]
I say "state-level actor".
Almost any contemporary liberal democracy (and not only those) at least formally defines itself as a state of its citizens, not belonging to any particular "nation" (i.e., basically, ethnicity). I don't see the point in distinguishing between states that are "nation" states or not in the 21st century, or think that it has a clear distinction.
>Hey, why is it called the United Nations instead of the United States anyway? Oops, cause there already is a United States. But the UN is clearly an organization of States not Nations.
States are sovereign political entities; of course modern countries tend to have a federal state made of several constituent states (see: USA, Germany, etc) where each claims certain jurisdiction. In ancient times there were city-states like Athens, Sparta... and even in 18th century Europe cities like Venice were states (Republic of Venice).
Nations are people united by something they have in common. That could be shared history, language, culture, the geographic area they live in, or something more abstract like fandom of certain sports teams or other hobbies.
There is considerable overlap between nations and states, and given state is already overloaded, extra words are added for clarity.
I like "state-level" because these sorts of exploits and attacks are really about resources, not sovereignty, territory, etc. The fact is a rich person or company could fund a team that does vulnerability research and get results on par with the top tier folks already doing it.
And, the UN should be called the "United Countries" since it is really about territorial areas. They admit members based on geographical claims; I don't see any ethnic, cultural, or fandom group (that isn't in control of some territory and thus also country/nation) as a member.
It's to distinguish the hypothetical attacker and their resources from an individual or group of individuals. If Mossad is after me, versus a particularly violent jilted ex-lover, versus the local gang/cartel/drug dealer I took down, the threat to my personal health is the same (they all want to kill me), but the level (and possibility) of defense against each of those threats is vastly different.
> I don't think there is any way to be safe from this
Apple could certainly do a lot more to protect their customers, and we generally let Apple off far too lightly here. For starters: using their enormous revenues to bid up the prices for these cracks, writing better software (e.g. using well-known techniques to harden iMessage), etc.
Also they could treat their employees better so there’s less churn. Every newly-hired kernel engineer is bound to repeat the same technical mistakes that their predecessor made a decade ago.
But is this because computers fundamentally cannot be made secure, or due to backdoors and sloppy coding? I’ve heard BSD is pretty secure right? Couldn’t we make phones that secure if we didn’t bloat them with flashy new features every six months?
The problem is that we're moving into a more and more digital world where it's not possible to even opt out. Estonia had their ID card photo database hacked.[0]
>A hacker was able to obtain over 280,000 personal identity photos following an attack on the state information system last Friday. The suspect is reportedly a resident of Tallinn.
>The culprit had already obtained personal names and ID codes and was able to obtain a third component, the photos, by making individual requests from thousands of IP addresses.
How do you protect yourself against that when the government requires you to have an ID card and puts you into the database? What happens when financial transaction logs get hacked or medical histories?
Yeah I find it worrying how society only cares about what is technically possible and not what is realistically safe and secure. We could build taller and cheaper buildings if we ignored standards and just accepted that sometimes they fall over. But we don't because that is insanely dangerous.
But now with tech the risk is invisible unlike a collapsed bridge. In Australia it is basically impossible to live a normal life without bringing your phone everywhere because they mandate that you scan QR codes before entering stores and the manual written forms are usually hidden behind a counter and on request only.
> I feel much safer knowing that an exploit like this is worth hundreds of thousands or even millions of dollars.
I don't. Look at how much companies like Apple pay out for responsible disclosure if they pay out at all, and then compare it to what exploits go for on the grey/black market. Typically the buyers have deep pockets and burning millions of dollars wouldn't make them blink.
Why does it matter if it’s the “good guys” or “bad guys” paying?
If a vulnerability only cost ~$100 then a malicious person could compromise an ex lover’s phone, for example. The fact that they are expensive means that their use is limited to targeted, strategic attacks. You don’t have to agree that those attacks are good, but surely pricing the average person out of 0-days is better than the alternative.
> The fact that they are expensive means that their use is limited to targeted, strategic attacks.
There are organized crime networks that pull in billions of dollars of revenue a year. If they wanted to pull off dragnet fraud, for example, they have the funds to do so.
>Why does it matter if it’s the “good guys” or “bad guys” paying?
Who do you think are more likely to use the vuln/exploit on regular everyday users? The nation state people are going to use it on targeted persons/groups (typically) while the "bad guys" are going to use it so they get the greatest bang for their buck.
Or the nation state uses it against everyone in a dragnet operation? Also, specifically targeted people by nation states often are "regular everyday users". They just happened to draw the ire of the wrong person.
Yes, but it can be somewhat mitigated by not using SMS or iMessage.
Don't share your SIM's phone number with anyone for any reason whatsoever (or don't put a SIM in the phone at all and use an external Wi-Fi router, which is what I do, or use a data-only SIM), and ensure that iMessage and iCloud are disabled.
This doesn't make your phone invulnerable, it just makes it less vulnerable.
That's exactly why I started scratching my head as to why the entire web security model assumes a trusted execution environment. That no longer makes sense in today's world.
Naively, to me it looks like an artifact of the 90s OS security model. The modern web, and the threats of the modern world, require more stringent security facilities at the OS level: isolation of security contexts even from superusers, and specifically per-program-origin, per-identity, and per-process context isolation. Superusers having the ability to read and write in any security context is no longer appropriate; at most, superusers should only be able to deny and delete. That's the only way to protect end-user privacy.
Sandbox escapes are part of most serious exploit chains nowadays. They make things harder for exploit authors but absolutely do not fix the problem at a fundamental level. iMessage runs in a sandboxed environment. Doesn't stop the exploit in the article from getting root.
You would expect quality from a commercial product because of all the investment being put into it, but these exploits say otherwise. Open source projects may have contributors who care on a different level. We might have to figure out a way to go in that direction eventually, considering how dangerous this is getting; many people depend on the quality of a product to ensure safer communication, and for some it is a life-and-death situation. So yeah, it's sad that this keeps happening. It seems like we could think of a better way to not make this happen as often.
One company, which likely has a retention problem, is writing all of the code for your system and setting things up so that you can’t easily use anything else.
Do you think this is a recipe for secure computing?
I think it's mostly that people are continuing to use file format parsers that were written in unsafe languages in 1998.
I do sometimes wonder what a "Manhattan Project" of software security would look like. I do think rewriting all common file parsers in <X> would be a very achievable project with a budget of a few dozen million dollars, nothing compared to the potential savings. The issue is then getting people to actually switch over. I think that a PR push by NIST et al. could help convince the slowpokes that the "industry standard" has changed and they need to do something to avoid liability.
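To make the "rewrite the parsers" point concrete, here is a minimal sketch (illustrative only, not any real parser's code) of what a hardened parser of untrusted input does differently: every length field is validated against the remaining buffer before it is used, so malformed input produces an error instead of the out-of-bounds access that plagues 1998-vintage C parsers.

```python
def parse_records(buf):
    """Parse big-endian, 2-byte-length-prefixed records, rejecting
    any declared length that would run past the end of the buffer."""
    records, i = [], 0
    while i < len(buf):
        if i + 2 > len(buf):
            raise ValueError("truncated length field")
        length = int.from_bytes(buf[i:i + 2], "big")
        i += 2
        if i + length > len(buf):
            # In an unsafe C parser this is where the out-of-bounds
            # read or write would happen; here it is just an error.
            raise ValueError("declared length exceeds buffer")
        records.append(buf[i:i + length])
        i += length
    return records

print(parse_records(b"\x00\x03abc\x00\x01x"))  # [b'abc', b'x']
```

A memory-safe language makes this kind of checking the default rather than something each parser author must remember at every offset calculation.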
How do you estimate the financial damages here though? It's not like anybody's really going to stop buying iPhones over this. Not to any real degree. There's some brand damage to Apple but that calculation's highly debatable and swings around wildly. Which is the problem. Digital security is impossible to put a price on, because until someone is actively exploiting it, it costs WAY less to do nothing about the situation.
Yes; in fact, if the NSA, China's MSS, Mossad and other nation states are betting on these kinds of exploits existing in order to do their really dirty work (even if they contract it out to NSO Group), the "benefits" would be detrimental to them.
With the kind of resources Apple has, you could write a PDF parser from scratch in Rust or Swift (it is 100% memory-safe, right?) or whatever else, "in the background," maybe as an experimental project, and then replace the existing one when it's mature enough.
Microsoft at least started rewriting some components of Windows in Rust. Though they aren't saying which ones.
It is starting. I've seen big companies start shifting towards this future over the last couple of years. In discussions with other security professionals across various companies, it is appearing more like an inevitability that a shift to memory safety is coming, in one way or another. It is moving slower than I'd want, but the discussion feels very different than it did just three or four years ago.
Sure, tech companies and even just random people are already working on it piecemeal. I just think that if someone with resources put a concerted effort into it, we could replace all the parsers of untrusted data in e.g. Chrome within 2 years. If a government did it, then it could be justified as benefiting all of society, rather than one individual product team having to justify the effort for their own product.
For those that are uncomfortable with this state of affairs, I recommend this presentation: "Quantifying Memory Unsafety and Reactions to It" https://www.youtube.com/watch?v=drfXNB6p6nI
It's the same as asking, what percentage of the time is science wrong? 100% of the time, yes. We're trying to approximate correctness and the plan is to get a bit closer every day as new information becomes available.
In Android, at least, the level of security is comparable to Windows 3.11 for Workgroups: there is no access control except all-or-nothing, and there is an OS which actively spies on you.
Their high-confidence attribution to NSO Group is described as being based on two factors:
1. Incomplete deletion of evidence from a SQLite database, in the exact same manner observed in a previous Pegasus sample;
2. The presence of a new process with the same name as a process observed in a previous Pegasus sample.
But isn't it likely that someone with the skills needed to discover and weaponize a chain of 0-day exploits is incentivized and able to detect these quirks in Pegasus samples and imitate them, with the goal of misattribution?
Of course, there may be more factors involved in the attribution that aren't being shared publicly.
It seems like incomplete deletion of data is an error. If you are an exploit developer looking to throw investigators off your trail, it is one thing to name your processes with Pegasus names. It is another to deliberately introduce errors in your exploit to appear like Pegasus.
Your proposal is possible. It is just less likely than that this exploit was developed by NSO Group.
It usually is the US, China or Russia, though; the three have a large number of experts for this. And unless you find an error in the attribution process, it is most of the time backed up by data that appears plausible, like a server or a code fragment.
Interesting that you're leaving out Israel from your listing while the very subject of this article is Israeli offensive cyberwar and espionage capabilities and a profound lack of ethics.
What I was trying to convey originally is that attribution is politically expedient. If you want to saber-rattle towards China you task Mandiant to find proof of Chinese hacking, if you want to blame Russia Crowdstrike gets the job. It's like employing McKinsey consulting to give a veneer of credibility to a predetermined outcome.
Buried lede: Apple has patched that particular exploit [1], and everyone should download iOS 14.8 now if you want to be protected (no doubt NSO has other tricks up their sleeve).
Edit: Just realized it also impacts macOS and watchOS as well which were also patched. Patch Monday!
Sounds like the buried lede here is that the biggest company in the world is having its products actively interfered with by a small shed in Israel run by war criminals. Presumably in 2021 we have mechanisms other than finding their digital fingerprints to stop that.
And they don't work out of a small shed. But the metaphor is not that bad: those guys walk the edge of the law, probably crossing to the wrong side more than once, but never getting caught.
What about "Don't use Apple products"? I know that Android is just as bad in many ways...
And if all options in the modern tech industry basket of choice are terrible, well... humanity survived without them for an awfully long time.
I've gone back to a flip phone from an iPhone. I no longer use Windows if I can at all avoid it (there exist a few sysadmin tasks involving netbooting Mikrotik devices for major OS updates that are far less painful on Windows than other OSes), and have no plans to let Win11 in my life. And Apple is heading out the door too. Throw in my dislike of Intel, and... yeah, it's getting pretty thin pickings. I still have an iPad with no accounts on it as a PDF reader, but I'd like to replace that with something else (Remarkable or such).
"Agh, this is soooo terrible, but I'm going to keep using it!" just means, in practice, it's not that terrible.
> "Agh, this is soooo terrible, but I'm going to keep using it!" just means, in practice, it's not that terrible.
I don't think this is the only conclusion here.
I think we should acknowledge just how central personal computing devices are in society in 2021. Sure, it's true that humanity survived without them, but at that time, societal norms were drastically different. Removing tech from daily life today can be crippling, and that's part of what makes some of these issues so terrible. They directly threaten our daily lives.
I'd argue that it's possible for the thing to be "very terrible", and to conclude that it's still your only option to continue using the Apple/Google ecosystem.
- Not all users have the financial means to switch. The iPhone they own is the one phone they'll buy for the next 3-4 years.
- A growing number of users have only an iDevice and no standalone PC. Couple this with #1, and things get even more difficult.
- The utility afforded by the Apple ecosystem is high enough (or virtually required depending on one's job) that it outweighs the current set of downsides.
If a corner store owner pays a weekly fee to the local gang "for protection", it doesn't necessarily follow that because the owner chooses to pay the fee, the extortion must not be soooo terrible.
Good for you, but not good for 99.99% of the population. For nation states, that is mission accomplished! You never get 100% compliance with anything at large numbers.
Both Apple and Google scan your cloud synced files. Neither of them claim to scan your local only files. So the choice is rather pointless as they both hold the same position.
What do you mean by this? I use Google photos on my iphone and it seems to work perfectly fine. I'm assuming you are talking about background sync but I just checked via the web version and my photos from yesterday are all there so it seems that background activity is allowed while plugged in since I have not opened the g photos app in a while.
The irony is that if you’re not updated to the latest iOS, the easier (cheaper?) it is for the CCP to run surveillance exploits on your device a la the Uighurs.
You can either trust Apple, or lose all security updates.
> In March 2021, we examined the phone of a Saudi activist who has chosen to remain anonymous, and determined that they had been hacked with NSO Group’s Pegasus spyware. During the course of the analysis we obtained an iTunes backup of the device.
...
> Citizen Lab forwarded the artifacts to Apple on Tuesday, September 7. On Monday, September 13, Apple confirmed that the files included a zero-day exploit against iOS and MacOS.
In short: Just because they got access to the phone in March doesn't mean that they were already aware of the zero-day exploit back then. Finding this kind of stuff takes a while.
We (the public) have known about FORCEDENTRY for 6 months. That time was spent analyzing and understanding the exploit. It does seem like a long time for such a public zero-click affecting 100s of millions of users.
I once worked in a 'dissident' org (supported by the US Agency for International Development); these orgs were fighting for human rights in their countries. In one extreme case/country, no one knew my prospective project teammate's real name (I came to know this later), though she was our colleague and was quite social and pleasant. In her country's expatriate circles in DC, she was worried about foreign spies. Family back home is at risk, and so is she, even living in DC. These are brave people.
She wanted to build a database of something, and we were like, "keep your phone in another room" if you want to come discuss. Something that I am not sure she practices but more people need to practice.
CitizenLab is doing yeoman's service for people's rights to privacy and human rights. They're heroes.
I'm glad you put "dissident" in quotes. USAID is notoriously rife with CIA plants and many CIA operatives use the organization as cover, which implies that a nation targeting its members would have a lot more justification than a homegrown activist. That USAID might be targeted by hackers is mostly a consequence of the US government's decision to use it as a front for clandestine operations overseas.
Regardless, they do help real dissidents. People, who are at risk in their home country as they are perceived to be a threat to their authoritarian government.
I can't find any 'dissident USAID' outfit concerned about the fate of Julian Assange or Edward Snowden, however. Seems like 'human rights concerns' are highly conditional on the amount of money a repressive government invests in Wall Street.
> supported by the US Agency for International Development
Isn’t it more usual for the NED to do such things? I remark upon this because it occurs to me that using USAID to do politics might make recipients suspicious of aid even when it’s both necessary from a humanitarian perspective and unlikely to threaten the ruling dispensation in the recipient country. (This is a separate question from whether the NED/US government as a whole should even involve itself in such matters, to which my answer is ‘maybe’, since the dubious stuff probably happens anyway and lots of these civil society organisations &c. actually do good work [e.g. the Assistance Association for Political Prisoners in Burma].)
True; I was slightly inaccurate. This org had various USG funders, with a large slice of funding from USAID projects. Washington is full of these 'USAID contractors', some tiny, others mega-sized. But this project may have been funded by a division of the US Dept of State that is focused on human rights, DRL. I'm not sure where the lines are about which projects go to USAID and which to State. For example, development of journalism in an emerging country would be USAID; but a project promoting free elections in the same country could be State. Not sure.
In any case, they span the range from benign to hostile nations, with varying risks attached. The "About" page for many such sensitive orgs would be silent on who the team was, except for Americans (like me) who didn't mind having their name out there (or nervously okayed the name being public).
It seems like the NSO group is some kind of Hydra where every time their exploits are thwarted they find 2 new ones. The difference is that Hydras go for demigods while NSO products target civil servants and minorities.
> Despite the [gif] extension, the file was actually a 748-byte Adobe PSD file.
I wish programmers would stop "helpfully parsing" files which are named with an "incorrect" extension. If a random unknown person sends me a file with .gif extension that is actually a PSD file, I most definitely do not want my machine parsing whatever that thing is.
Discourse avatars point to a page with a .png extension regardless of what the actual file is (jpg, gif, or svg). Parsing file headers should not be a dangerous operation and in my opinion is the right thing to do.
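For what it's worth, sniffing a file's real type takes only a few lines, so a client never has to trust the extension at all. A rough sketch (the signature table is an abbreviated, illustrative subset, not an exhaustive list):

```python
# Map of leading "magic bytes" to file types; illustrative subset only.
SIGNATURES = {
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"8BPS": "psd",  # Adobe Photoshop, as in the FORCEDENTRY sample
}

def sniff(data):
    """Identify a file by its leading bytes, ignoring its name."""
    for magic, kind in SIGNATURES.items():
        if data.startswith(magic):
            return kind
    return "unknown"

# A file named "payload.gif" whose contents actually start like a PSD:
print(sniff(b"8BPS\x00\x01" + b"\x00" * 8))  # psd
```

The security question is what happens next: a sniffer that then routes the bytes to whichever parser matches (as iMessage effectively did) widens the attack surface, while one that rejects anything not on an allowlist narrows it.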
You joke, but maybe one way to fight this proactively is with fake activist honeypots. Apple, a company with the size and budget of a nation state could certainly pull off such an operation for the security of the devices they sell, but obviously, and maybe unfortunately, this would never happen.
I miss the days when iOS exploits were merely used for jailbreaks and allowing alternative app stores, instead of being weaponized/monetized as they are now.
Image and video decoders seem like exactly the right target for formally verified (i.e. proven) implementations. There are just so many moving parts, and libraries get re-used in many projects, rarely forming the 'special sauce' in any given app.
I have been keeping an eye on the work done by what is now called Project Everest[1] over the years in the communication and cryptographic space.
Is there similar work in the image and video decode space? My search fu is not yielding anything beyond some hardware decoding proofs.
Though it's worth noting that the cost of Stagefright was surprisingly low - it took a long time for a good ASLR bypass to come out for it and by that time most devices were updated or replaced. Additionally, the sheer variance between Android devices means developing worm-level exploits becomes extremely difficult compared to something where everyone's running the exact same binary like Windows, so it likely only saw targeted use.
The NY Times [1] just reported that "Apple’s security team has been working around the clock to develop a fix since Tuesday, after researchers at Citizen Lab, a cybersecurity watchdog organization at the University of Toronto, discovered that a Saudi activist’s iPhone had been infected with spyware from NSO Group."
What took so long? Did Apple not know about this in March or was someone sitting on it for 6 months?
“Citizen Lab forwarded the artifacts to Apple on Tuesday September 7.” — from article, no need to jump to unwarranted conclusions about Apple. “In March 2021, we examined the phone of a Saudi activist” - it would be interesting to know the reason why Citizen Lab delayed so long. Hopefully they just wanted time to discover who else was being targeted?
> In March 2021, we examined the phone of a Saudi activist who has chosen to remain anonymous, and determined that they had been hacked with NSO Group’s Pegasus spyware. During the course of the analysis we obtained an iTunes backup of the device.
> Recent re-analysis of the backup yielded several files with the “.gif” extension in Library/SMS/Attachments that we determined were sent to the phone immediately before it was hacked with NSO Group’s Pegasus spyware.
Seems like they originally examined the phone in March, but recently did another analysis, during the course of which they discovered the exploit and reported it to Apple.
I assume it takes time to go from "this person could have potentially been targeted with Pegasus" to "this person's iPhone was exploited by Pegasus, and here is how they did it."
PDF is basically a programming language, so instead of sending image data you send a program which is interpreted by the PDF reader to render an image on the client. That makes it really hard to secure completely.
Can someone explain like I'm 5 why it's so hard to prevent this?
I mean with a messenger app, you know you're getting some payload of data from a specific place, that goes through your own server, and is only ever going to be text or picture or video.
Why can't that be sufficiently sanitised en route and as it arrives to not have this kind of thing happen all the time?
Because the OS is too complicated. iMessage is a legacy app that is deeply embedded in the OS. And often the exploits are in things like notifications, but iMessage is the easiest way to deliver the data to any iOS user.
And people will flame me for this, but part of it is because the language iOS is written in allows these exploits to slip in easily and all over the place, and the difficulty of stopping them is too great. There is a good reason Google has started migrating core components of Android to Rust and that the Google security team is pushing the effort for Rust in Linux.
The surface area for bugs becomes so much smaller when you can have a compiler eliminate whole classes of bugs.
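As a hypothetical sketch of the bug class in question: a parser that trusts a length field supplied by the sender. In C nothing stops the unchecked `memcpy`; in a bounds-checked language the equivalent slice copy fails safely, which is the "whole class eliminated by the compiler" argument. The struct and function names below are invented for illustration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical attachment header: a length field supplied by the sender. */
struct header { uint32_t claimed_len; };

/* The classic unsafe pattern (named only to show the bug class; never do
 * this with untrusted input):
 *     memcpy(dst, payload, hdr->claimed_len);
 * A malicious claimed_len overflows dst and corrupts adjacent memory.
 *
 * Checked version: validate against both the destination capacity and
 * the number of bytes actually received before copying anything. */
bool copy_payload(uint8_t *dst, size_t dst_cap,
                  const uint8_t *payload, size_t received,
                  const struct header *hdr)
{
    if (hdr->claimed_len > dst_cap || hdr->claimed_len > received)
        return false;  /* reject the message instead of overflowing */
    memcpy(dst, payload, hdr->claimed_len);
    return true;
}
```

The safe version is only a two-line check, but in a memory-unsafe language every one of the thousands of such copies must remember it; a bounds-checking compiler enforces it everywhere by default.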
The language might help developers mitigate some issues, but it is definitely not a solution to bad legacy code, mediocre development and test processes, etc.
It isn't a panacea. Switching to Swift or Rust wouldn't prevent all vulns. But it would improve things. Modern code, strong developers, rigorous testing, static analysis, and fuzzing all make things better but they still consistently fail to enable developers to produce programs free of memory errors. This is true even for applications that have absolutely world class people doing these things.
You need all of it. Language safety is only part of the path forward, but it is an essential part.
TBH you need to combine both; NASA uses safer languages and strict processes, for example. Simply moving to another language might mitigate some possible issues but definitely won't solve everything.
End to end encryption. The server doesn't get to see the data, so there is no chance to analyze/filter it on the way. All the parsing and sandboxing has to happen on user devices... and there's always one more bug left to exploit there, especially in a legacy codebase like iMessage.
This is one of the unfortunate downsides of E2EE: there is no way to do server side security on message contents, so you rely entirely on endpoint security. For a non-E2EE service it would be trivial to scan for, collect, and more easily block exploitation attempts.
The relevant applications are written in memory-unsafe languages. For an application of meaningful complexity, it is virtually impossible to actually write a safe program in C or C++ and even more impossible to maintain that safety. The code doing the sanitization is itself attackable and the process of sanitizing complex media is very complicated.
Great question. Every cell carrier processes images before delivering to the recipient. Like, if you send 3 or more photos almost every cell carrier will downsize the images. While this isn't an extensive test, I just tried renaming a PDF to a GIF and it failed to send on Google Voice and T-Mobile.
Recently my iPhone started rebooting itself occasionally and randomly. I've been a long-term iPhone user and never seen this behaviour before on previous or current device.
I'm not one to wear a tin-foil hat, but I have to admit NSO did come to mind.
My mom’s iPad was doing the same thing for a long time and I suspected hardware failure (it was getting kinda old), so I told her to take it into the Apple store for diagnosis and repair. It turned out that the iOS install was just corrupted by bit flips and the Apple employee did a factory reset and it was all good afterwards. There’s many things that can go wrong with even modern computers that aren’t exploit related
The worst part is that it is all just too complicated to work out why. My desktop seems to freeze and fully crash once every few days and I have no idea why, or even how to find out. Since it is custom built, I can't just take it to the Apple store and say I want a new one.
It is usually the RAM that has developed a fault. Run memtest[0] to detect errors. Otherwise, in descending order of likelihood, it could be the motherboard, the PSU, a driver/kernel crash caused by a peripheral, or a bad CPU.
My fiancée already received a large group iMessage that had 40+ unknown numbers on it that all shared the same area code and first 3 digits of the phone number. The contents of the message were unintelligible (random words).
Kind of interesting Apple reacted as quickly as they did. It usually takes a lot of effort to get Apple to acknowledge anything. Or maybe because they didn’t request a bug bounty?
Apple should know who works for NSO Group. It should block every single account of every single person working for that org. Same goes for their families.
Google should do the same for Android.
You do not fight organizations like that by fighting the "organization". You make it very difficult for the people who work for those organizations to participate in a society that relies on the very things they actively work to break. In fact, you tell the Israeli government that unless it puts a leash on its dog and locks it up in the backyard, you will start disabling the accounts of every single person in the Israeli government. When government leaders cannot use their iPhones, they will ensure that NSO does not touch Apple's products.
Baby hackers who want to go work for NSO want the high life. Modern high life requires modern communication devices. Blocking them from modern life (for example, vaccine passports done via iPhone and Android) will quickly thin the ranks.
Blocking Israeli government officials from Google and Apple would immediately solve the problem of NSO being an Israeli company that is cozy with the government and gets government protection.
None of the NSO group's clients would want to pay for it via suitcases of cash. And in any event, paying with suitcases of cash creates problems in the modern world for those who receive them.
There is a wide range of exploit brokers and a decent number of security researchers who choose $ over morality. As long as there is demand, there will be supply.
> Apple should know who works for NSO Group. It should block every single account of every single person working for that org. Same goes for their families.
NSO are suing Facebook - successfully so far - to force them to allow NSO staff access to Facebook when FB responded to NSO attacks by doing just that.
Facebook was suing NSO about the hacks that NSO carried out.
In this case Facebook, Apple, Google, etc should simply terminate the accounts exercising "we are deplatforming you. No explanation" option they all have.
NSO Group operates at the pleasure of Israel. If Israel says "jump" NSO Group is going to respond with "Would you like us to have our tongues out while we do it?"
The simple solution for those who are concerned about this is to use a dumbphone. It's simple, easily hackable, and most importantly does not promote a false sense of security.
I believe it is completely wrong to believe that software can be made secure. It is inherently unsafe, by nature. I believe every internet facing computer should be partitioned in two virtual machines, one that connects to the outside world, and another that contains user data. Processes in the user data partition shouldn't be allowed to connect to the internet.
The result of current design is that I have practically lost the right to write and use potentially unsafe software, even if I wrote it - something I may want to do for performance or practical reasons.
Testing is hard. You need someone who understands the system. The problem with SW testing is that it is mainly used to verify requirements, not to find defects. Layers upon layers of libraries don't help either (look how cool I am: instead of using libjpeg, libxpm and libpng, I use imlib, which links to those and also introduces its own bugs).
So I have to update to protect myself from Pegasus/NSO, and in the meantime install the next beta of the CSAM scanner.
Hmm. No. I deleted all my apps and photos, and use it as a phone and banking-app terminal. Phone call metadata is collected by governments by default, so I have no problem with this. I have nothing to hide, and nothing to store on Apple devices.
Someone more paranoid than me told me an outrageous theory: Apple wants a piece of the Pegasus-style spyware market by providing a legal, user-approved backdoor for governments through CSAM scanning.
I don't believe it at all. :)
Don't underestimate the value of privacy. How much (or little) you have to hide is something worth hiding. It's what you do and don't know, do and don't say, do and don't communicate with, this is all important to keep private by default.
There's a tendency for individuals to assume the role of would-be criminal in these discussions. It's more correct to assume criminals exist on all sides, do you have any interest in enabling a corrupt government to surveil its law-abiding citizens? When you don't have privacy, you enable potential criminals in power to see if the populace is aware of their actions, or absolutely distracted by instagram. We're all potential witnesses to crimes, and at this point it's exceedingly likely we'd communicate those observations via smartphones. We all require privacy and secure communications, full stop.
This line of thinking is predicated on two assumptions:
1) That the local authorities are essentially malevolent
2) That it is only the individual's (privacy/security) measures that are deterring the malevolent authority from exploiting them
For most Americans/Europeans, both of these assumptions are false and based on paranoid fantasies. Local authorities are rarely malevolent (though they may commonly be corrupt, excessively self-interested, and indifferent to you), and it is virtually impossible for the average citizen to mount a home defense (real or cyber) against a committed state actor, or even the local PD. It's like trying to secure a VM guest from access by the host machine; you're completely surrounded.
I fully support protecting yourself & your privacy against petty criminals, but unilaterally taking on your government is frankly just a waste of life.
It is a sarcastic comment depicting the general state of things.
Normalization of surveillance, and acceptance of this "new world" by the general public through consent manufactured by corporations, media and governments, is staggeringly fast.
There is no substitute for privacy, whatever the perceived "common good" motivation brings to the table.
My personal decision is to avoid the surveillance state by using FOSS solutions and abandoning smartphone habits.
There must be a place for design and software solutions outside the "status quo". I started this year by removing Apple from my business, and am moving along to educate my customers about the incoming dangers to their businesses and personal lives.
At this point in time I would not believe anything Apple says. After all the backlash they just postponed it, to make it better and to avoid negative PR for the new iPhone.
What a response.
Apple has moaned about privacy for years. Not me.
Apple used big billboards all over the world with clear privacy message.
Only to create the biggest intrusion into user space, circumventing the 4th Amendment by introducing scanning against criteria set by a third-party private corporation (funded by the DOJ). Creating a precedent by which all governments will be able to snoop and classify.
And I am "moaning". GTFO.