> We want to thank Citizen Lab for sharing a sample of the FORCEDENTRY exploit with us, and Apple’s Security Engineering and Architecture (SEAR) group for collaborating with us on the technical analysis.
This reminded me that NSO went after Citizen Lab on multiple fronts. They even tried to use a spy to talk to JSR (https://www.johnscottrailton.com) and make him say controversial things, which could be later used to malign Citizen Lab. Darknet Diaries covered this incident recently: https://darknetdiaries.com/episode/99/
The transcript is such an intriguing read. You don't expect these things to happen in real life, yet here they are: tie cameras, pen recorders, driving circles around the block and all.
Darknet Diaries is so good. To anyone who hasn't listened: highly recommend. Jack hits a home run each week, and the story about JSR and NSO was buck wild.
Jack is awesome. Btw, I must give credit to the many in the HN community who have recommended this podcast so many times in various threads. On my own I would never have found it.
A good podcast app may help. For slower podcasts, instead of speeding them up - which may chipmunk the audio or otherwise cause artifacts - Podcast Addict trims silence. You may miss stuff below the gate threshold, but it is still slick.
I only listen to 2 podcasts; I've never really liked the format. I prefer reading with music or silence, and have always loved weblogs for that. However, it's worth a shot to find a podcast app that "clicks" with your style of input.
With podcasting 2.0, the new features should allow producers that don't want to be part of an ecosystem to still make some income, plus you get streaming album art where supported, and liner notes and all sorts of other goodies.
I think podcastindex has a list of compatible 2.0 podcasting apps, but I'm unable to check.
yeah it really is refreshing to hear a podcast for the masses that actually gets technical details correct. It's a pet peeve when people simplify it and get it really wrong.
I'd assume they're using the Erik Prince/Constellis business model, taking some time off and getting the band back together under a different name to do the same work.
Yep. The way I think about it in mainstream terms is, "Jeffrey Epstein's clients didn't just stop wanting what he was providing, someone took his place. Who is that?"
This is mind boggling. NSO used a compression format's instructions to create logic gates and then from there "a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations", all within a single pass of decompression. Combine this with a buffer overflow and you've got your sploit.
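For intuition, here's a toy sketch (ordinary Python, nothing to do with the actual exploit code) of how a 64-bit adder falls out of just AND/OR/XOR primitives like the bitmap operations JBIG2 exposes:

```python
# Toy illustration: building an adder from the boolean primitives
# (AND, OR, XOR) that JBIG2 exposes as bitmap composition operations.
# This is the idea, not the exploit itself.

def full_adder(a, b, carry_in):
    """One-bit full adder expressed only via AND/OR/XOR."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add64(x, y):
    """Ripple-carry 64-bit adder: one full_adder per bit position."""
    result, carry = 0, 0
    for i in range(64):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result  # wraps mod 2**64, like a hardware adder would

assert add64(123456789, 987654321) == 1111111110
assert add64(2**64 - 1, 1) == 0   # overflow wraps around
```

Chaining ~70,000 of these bit operations by hand, inside a decompressor, with one shot per message, is the part that earns the "most sophisticated exploit we've ever seen" line.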
"I found a 3rd party library that uses eval, so we just send it code we want to run and...boom. We're in."
"I found a popular chat app that after install leaves a tool with full sudo privileges behind for us to take advantage of, located clickityclickity... here. We're in."
Sometimes, it can be even more pedestrian sounding. Hackers don't always have to be clever if other people are absolutely dumbasses before their arrival.
To be clear, what this exploits is nothing like what you've mentioned.
The article does a very good job of describing the relevant parts of the image format. They built a VM inside an image's single-pass decompression routine. I'd highly recommend reading the article.
This is just one of the exploits in a very large chain.
To quote some of the nation's top security researchers:
> Based on our research and findings, we assess this to be one of the most technically sophisticated exploits we've ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states.
Yeah. Even I know about eval. I'm just happy Google and Apple actually care about security unlike the 2000s companies and can rival the smartest hackers to keep my phone safe!
I'm thinking you're missing the larger idea. The whole point is that while these "geniuses" did something really "impressive" and difficult, there are equally real, not-impressive and not-difficult things found in the wild that have caused problems as well.
It's called a counterpoint. Several people actually found it interesting, but you're entitled to your opinion that you don't find it interesting. It did add to the conversation, as there were multiple replies to it. Your comment about it is the thing that doesn't really add anything.
Joking aside, this does illustrate the "magical" properties of technology to the layperson. As a corollary, failure modes end up quite surprising and hard to reason about without a certain amount of proficiency in these technologies.
I've seen some examples of this. It's very clearly trained on a white-male dataset.
I've also seen it "enhance" an image of a resistor into a human face.
I don't care how much AI you have, you can't add back data that wasn't in the original image. The best you can hope to do is get a vague approximation, and you must have a very, very good (comprehensive) training dataset for that to be remotely viable.
The premise of the technology is not adding more information to the image, but rather realizing that the image may have a description that is a lot smaller than its file size suggests; then it becomes a matter of rendering it using world-aware encodings. The resolution may appear higher, but it is actually a filtration of the original data. And just because the current technology is overfitted to present-day datasets doesn't mean such a filter (one actually useful for common images, or for enhancement leveraging known/few-shot examples of the same target object) cannot exist.
There is a world of difference between upscaling something digital and upscaling something analog. 16mm film actually does contain more information than could be shown with the original film; we have better scanning techniques today that can extract that information.
Upscaling something digital, on the other hand, requires creating information out of thin air.
Well, that and the explanation is missing the details. Conceptually being able to construct something like that from XOR and NOT primitives is stuff from undergrad computer engineering curriculum. But it's certainly a respectable feat to find this combination of compression format and the vulnerability therein of all the supported formats, and think to apply it like this.
It's amazing how they took a buffer overflow and ran with it to create a whole Turing-complete machine. It's mind-boggling how complex these exploits can be; no wonder they sell for millions.
> Historically the jump from overflow to RCE was much much shorter.
Not really. I am about to read the article, but it sounds like return-oriented programming[1]: chaining "gadgets", small bits of existing code that you can repurpose into executing arbitrary code by manipulating the stack. It's an extremely common exploitation technique, even if not trivial. Who said the exploit or RCE was trivial?
Edit: I was a bit quick to dismiss. The technique is certainly interesting, although the article doesn't go into the details of how the control flow is handled and where that register is stored. However, I'd like to point out that ROP is quite complex on its own, as it's kind of like using a computer with an arbitrary instruction set that you have to combine to create higher-level functions, hence my original confusion.
I think what he means with historically is before ASLR, DEP, and other mitigations, where a buffer overflow meant you can simply overwrite the return pointer at ESP, jump to the stack and run any shellcode. Mitigations have made exploitation much, much more complex nowadays. See for example https://github.com/stong/how-to-exploit-a-double-free
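A toy simulation of that pre-mitigation scenario (purely illustrative; real exploitation overwrites the actual process stack, not a Python list):

```python
# Toy simulation of a classic stack buffer overflow, pre-ASLR/DEP.
# The "stack frame" is a list: an 8-slot buffer followed immediately
# by the saved return address. An unchecked strcpy-style copy of
# attacker input runs past the buffer and overwrites it.

def run_vulnerable_copy(attacker_input):
    BUF_SIZE = 8
    stack = ["A"] * BUF_SIZE + ["legit_return_address"]
    for i, byte in enumerate(attacker_input):  # no bounds check: the bug
        stack[i] = byte
    return stack[BUF_SIZE]  # what the CPU would "return" to

# Benign input leaves the return address alone:
assert run_vulnerable_copy(list("hello")) == "legit_return_address"

# Oversized input overwrites it; before DEP, this could point straight
# back into attacker shellcode sitting in the same buffer:
payload = list("AAAAAAAA") + ["attacker_controlled_address"]
assert run_vulnerable_copy(payload) == "attacker_controlled_address"
```

With ASLR the attacker no longer knows the address to write, and with DEP the shellcode in the buffer isn't executable, which is why modern chains like FORCEDENTRY need so much more machinery.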
Exactly. This escape is technically quite cool, frankly, in terms of sheer creativity.
That said, my own view is that messages from untrusted contacts should be straight ascii, parsed in a memory safe language with no further features until you interact (ie, write back etc).
It's this attitude that is diminishing our security posture. Users want GIFs, they want shared locations, they want heart emojis, they want Unicode.
The fact that you force EVERY user you interact with to have the same treatment is the problem. Some people I let into my house unsupervised. Some as guests. Some I don't let in at all.
We need to start modeling this approach online more.
I don't think you understand how far users will go to work around safeguards if they interfere with their daily life.
ROP chains are similar in spirit but typically created by hand and thus not all that long (several dozen steps, at most). Creating a 70,000 step program via a Turing tarpit is very interesting.
My initial assumption was that they would compile a program, take the binary output as an image and JBIG2-compress it, as I don't really get how they would use the result of the binary operations to branch to different code. Reading the article a bit more, I think they can loop multiple times over the area, by changing w, h and line dynamically over each pass, which would give them some kind of basic computer. That part is still unclear to me, but that would indeed be a lot more impressive.
There are no details on how control flow is handed over to the program either, so it's possible that they loop multiple times over the scratchpad (1 loop = roughly 1 clock cycle), especially if the memory area is non-executable and they have one shot at computing a jump pointer.
In any case, they can probably copy arbitrary memory addresses into the new "scratchpad" area to defeat ASLR (we'll see in part 2).
iOS does not allow the modification or generation of new executable code (at least, it will not at this stage of an exploit). So they are likely creating a weird machine to patch various data and then redirecting control flow with the altered state by overwriting a function pointer.
> […] then redirecting control flow with the altered state by overwriting a function pointer.
The analysis calls this out specifically:
> Conveniently since JBIG2Bitmap inherits from JBIG2Segment the seg->getType() virtual call succeed even on devices where Pointer Authentication is enabled
Which is disturbing. Was the code compiled for the arm64e architecture in the first place, or is it a bug in the LLVM compiler toolchain? The ARMv8.3 authenticated pointers were invented to preclude exactly this from happening, but that is not the case with this exploit.
Pointer authentication cannot protect against all pointer substitutions, because doing so to arbitrary C++ code would violate language guarantees. https://github.com/apple/llvm-project/blob/next/clang/docs/P... is a good overview of which things can and can’t be signed because of standards compliance.
Right, and they get there via a decompression pass on totally untrusted input over the network. This is why it's so crazy that Apple has this huge attack surface.
My own suggestion: ASCII-only messages if the contact is not in your address book and is not someone you've communicated with in your message history (however long you keep that), up to 1 year. Once you reply, these untrusted Saudi contacts can send you the GIF memes.
The "police" already email and call me about my overdue IRS bill and my imminent arrest. I ignore all that crap.
Never interacted: maybe ASCII only. Interacted: allow Unicode and some other features (basic emojis? photos?). Full contact? Allow the app integrations, heart sensor, animated images, videos, etc.
I wonder how they test the code? Maybe they write a meta VM in a testable environment (e.g. in C) and transpile it into the instructions that library uses?
If I was them I’d test each part of the toolchain (which I assume is a high-level compiler of some sort to their RISC VM) independently, as you would for any component of this type. For the actual exploits itself it’s probably a regular debugger with facilities tailored to their VM.
Indeed amazing, and a very well-written article too.
I wonder how much time it took to develop, I assume the whole general programming language from NAND gates is not something they had to come up with from scratch.
Putting the pieces together though, that's a work of art
They probably defined some microcode operations -> created a minimal assembly language -> wrote it in C -> hand-optimized the asm output -> compiled to "machine" code
All the steps are things you cover in a computer engineering degree (I think), but putting them all together in a tightly constrained environment (or even recognizing that the exploit can happen in the first place) takes a ton of skill, resources, and dedication.
I was confused about how they got the thing to run for an unbounded amount of time, but I guess they probably have the final operation at the end of a "processor cycle" be to overwrite the next SegRef so that it loops back to the current SegRef.
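That guess, rendered as a toy interpreter (my own speculation in code form, not the real design): one pass over the segment "program" is one clock cycle, and the Python loop stands in for the rewritten next-SegRef:

```python
# Toy "weird machine": each pass over the segment list is one clock
# cycle. The while-loop plays the role of the overwritten next-segment
# reference sending control back to the first segment after each pass.
# Pure speculation about the mechanism, not the exploit's structure.

def run_weird_machine(memory, max_cycles=100):
    program = [
        lambda m: m.update(counter=m["counter"] + 1),  # "adder" segment
        lambda m: m.update(halt=m["counter"] >= 5),    # "comparator" segment
    ]
    cycles = 0
    while not memory["halt"] and cycles < max_cycles:
        for seg in program:   # one pass over all segments
            seg(memory)
        cycles += 1           # = one clock cycle
    return memory, cycles

mem, cycles = run_weird_machine({"counter": 0, "halt": False})
assert mem["counter"] == 5 and cycles == 5
```

If instead there is no loop at all, you get the "payload linear in the number of cycles" problem mentioned downthread, which is why a loop of some kind seems likely.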
I'd love to see the thing in more detail - what the shellcode looks like, how the CPU was designed, everything.
a scummy company but such transcendental brilliance..
If you are Wrangling Untrusted File Formats, you should be doing so Safely, using WUFFS.
You can't make this mistake in WUFFS. Your WUFFS image decoder might decode the image incorrectly, maybe Rudolph has a green or blue nose, maybe he's upside down or just a sea of noise, but it can't have a buffer overflow even if you screwed up really badly.
For example, any equivalent of the repeated addition numSyms += ((JBIG2SymbolDict *)seg)->getSize(); in WUFFS will get flagged, it clearly could overflow and WUFFS wants you to write code explaining how you're going to prevent that because overflows aren't allowed in WUFFS.
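A rough runtime approximation of the invariant WUFFS demands (WUFFS actually rejects such code at compile time unless you prove the bound; the names here are just for illustration):

```python
# Runtime approximation of the proof obligation WUFFS enforces
# statically: every addition must be provably unable to exceed the
# target type's range. Here we just assert the same invariant.

U32_MAX = 2**32 - 1

def checked_total_syms(segment_sizes, limit=U32_MAX):
    total = 0
    for size in segment_sizes:
        if total > limit - size:   # the check WUFFS wants proven
            raise OverflowError("numSyms would overflow")
        total += size
    return total

assert checked_total_syms([100, 200, 300]) == 600
try:
    checked_total_syms([U32_MAX, 1])
    raise AssertionError("should have overflowed")
except OverflowError:
    pass
```

The FORCEDENTRY bug is exactly the unchecked version of this loop: repeated `numSyms +=` with attacker-controlled sizes, wrapping past the allocation.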
This leaves outfits like NSO with nothing much to attack. Sending me pictures of Rudolph with a green nose by "exploiting" a bug in my image decoder isn't very useful, unlike taking over my phone...
this wasn't Turing-complete until they exploited it to make it so. JBIG2 executes arbitrary binary bitmap operations, but sequentially (no looping.) using the exploit they presumably found a way to send it into a loop, probably by overwriting the pointer to the next segment or something.
theoretically I guess you don't need that, but you'd have to send a payload linear in size to the number of cycles expected to run the shellcode, and that wouldn't lend itself to a processor-like design - it'd just be too big.
This reminds me of The Story of Mel, in which Mel managed to do similar things with assembly. Amazing stuff; I wish I had a chance to work with similar geniuses.
It's a real shame that the people who came up with this exploit are working for NSO and not on solving P = NP or something. I'm sure if we got them and the ones working on crypto at NSA in a room together, we'd have it and clean unlimited energy in a week.
I often feel sad thinking about how many brilliant engineers are dedicating their time to helping governments spy on people or other governments.
They don’t really have to, they can just mine Bitcoin by reversing SHA256 in polynomial time, inspect https messages to banks, or send Bitcoin to themselves by creating an ECDSA signature… or just set up a software as a service and have the biggest business in the world.
Heck, even if P=NP meant there were fast solutions, merely proving that P=NP wouldn't necessarily give you those solutions, and they might turn out to be even harder problems!
kinda like Wernher von Braun, maybe. he just wanted to make rockets. whether they were for Nazi Germany or the US didn't matter, whether they were missiles or spacecraft didn't matter, he just wanted to build them.
There’s nothing unethical about a scientist working on weapon development for their country in the middle of a war. Imagine it’s 1935 and you lack the modern perspective. I mean you might not like it, but I don’t think there’s an ethical violation here.
I'm not sure what world you live in where saying "I don't care if I'm making missiles for Germany or the USA" is not an ethical violation and is somehow patriotism... but I don't want to live in it with you.
You’re being unfair by enforcing a modern perspective, shaped by the victors, on an enemy from 80 years ago. Were Soviet weapons scientists unethical too?
I think it's barbaric to calmly send a request for more slaves (Jews and other minorities) for the factory work because they were inconveniently dying too fast, due to the inhuman working conditions. I read that he did all that. He wasn't merely a patriotic, unaffected scientist; he played a part in the Holocaust.
You’re moving the goalposts - the subject discussed was the participation of scientists in weapon programs. He might have been a scum (I think it’s trivializing what happened in Germany at that time, I digress), but his participation in the war effort is not unethical in itself.
Thousands of kilometers away, other scientists toiled on a weapon that makes all the weapons the Nazis developed seem benign.
I think you could also say the same about gambling, porn and other questionable industries.
The thing is, it's usually much easier making money off these things than making money from solving impactful problems.
If you're a regular joe and you could spend your next 5 years with a 100% chance of making millions for finding exploits, or a 0.01% chance of solving P=NP, I think the irrational decision would be picking the latter.
I do know of several founders whose first company was a nasty ad-tech company (spyware), and after making their millions, their second company is a much more honorable digital health company.
You probably can find examples where such people can keep on creating nasty companies, so it would be interesting to see if there was a research about whether or not people pursue more honorable goals after they get lots of cash.
As a side note, I just went to the Cloudera website, because I did not know the company.
After selecting "Reject all" in the cookie dialog, a cookie was literally spinning (they have a spinning-wheel animation for processing your cookie response!) for >5s on "We are processing your cookie settings request". If this is what the best minds of our generation are achieving, then God help us!
It's not really true. That was the case 15 years ago, when the smartest people worked at Google (whose goal is to get people to click on ads), but now a lot of very smart people have found other businesses.
Why do you think some random hacker is smarter than all the academics we have? Somehow clean unlimited energy isn't achieved because people are working on exploits or optimizing ad revenue? I doubt it.
I feel the opposite. All this stuff, and even the more hardcore crypto stuff, is all relatively simple math. It's not even close to comparable to the things mathematicians do, or even what physicists have achieved with the LHC or fusion research.
Surely cracking cryptographic algorithms is pretty hard math given people don't have that much success with it, even with a huge incentive (decrypting all communications worldwide)?
It's considered to be impossible. I doubt there is much (if any) serious research going on to mathematically crack RSA or ECC. Besides that is not what OP was talking about. That was about hackers finding standard vulnerabilities in code and exploiting it. Not about any mathematical flaws in crypto.
Well for one, the safety of encryption rests on certain problems being intractable. (In a theoretical sense; there are always implementation bugs that destroy security).
If P=NP, then those previously-thought-intractable problems are actually tractable, and the foundation of a lot of security-related engineering collapses.
Is proving P = NP equivalent to knowing how any intractable problem can be solved? Is it possible for P=NP and yet a class of intractable problems to remain unsolved?
It would mean that a large class of problems that have solutions that can be verified quickly can be solved quickly. Which cuts both ways.
While that means most protocols used for cryptography would need to be replaced (hashing, digital signatures, etc.), it also means other combinatorics problems (traveling salesman, protein structure prediction) would become solvable, which could be a boon for logistics and/or computational science.
(I think this is correct) If P=NP there will still be intractable problems; they would be ones where the solutions can't be verified in polynomial time... along the lines of verifying the solution is correct is as complicated as brute forcing the solution.
Note: it's been a while since my computation theory class. ;) I am reading over https://en.wikipedia.org/wiki/P_versus_NP_problem and relearning the fine house of cards theoreticians have divided this problem into. There is a "consequences of P=NP" towards the bottom that sums it up better than I can.
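The verify-fast/solve-slow asymmetry in a nutshell, using subset-sum as the NP example (a toy sketch; the function names are mine):

```python
# NP in miniature: verifying a proposed certificate is cheap (one
# pass), while finding one may require searching exponentially many
# candidates. Subset-sum over distinct integers as the example.
from itertools import combinations

def verify(nums, target, subset):
    """Polynomial-time check of a certificate."""
    return set(subset) <= set(nums) and sum(subset) == target

def solve_bruteforce(nums, target):
    """Exponential search over all 2^n subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve_bruteforce(nums, 9)
assert cert is not None and verify(nums, 9, cert)
assert solve_bruteforce([1, 2], 7) is None
```

P=NP would mean the search side collapses to something polynomial whenever the verify side is polynomial; the "if" in the quote below is about whether such a collapse comes with a usable algorithm.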
'quickly' is doing a lot of work in that sentence.
"A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP."
Note the "if". It is extremely important to the meaning. It's very possible for P to equal NP, but for that "if" to be false.
There's also the chance that while we may be able to come up with a polynomial algorithm for integer factorization, it's not actually practical to run still. Remember computational complexity discards the constants on that polynomial. Practically speaking x^2 + x is a lot different from 2^64x^2 + 2^32x + 2^16 :)
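To put numbers on that (a trivial sketch; both functions are O(x^2), but the constants dominate everything in practice):

```python
# Big-O discards constants: both of these are O(x^2), yet at any
# realistic x the constant-heavy one is astronomically more work
# in absolute terms.

def small_constants(x):
    return x**2 + x

def huge_constants(x):
    return 2**64 * x**2 + 2**32 * x + 2**16

x = 1000
assert small_constants(x) == 1_001_000
# the "same complexity" algorithm does >10^13 times the work here:
assert huge_constants(x) // small_constants(x) > 10**13
```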
Among other things, mostly encryption. Most of our current methods depend on P != NP. So no need for 0-days if you can just read encrypted data as if it weren't encrypted.
Those engineering problems are trivial compared to many real problems. Turning all those engineers loose on, say, cancer wouldn't necessarily result in any new breakthroughs. Case in point: all the brilliant software engineers who thought they could solve Covid (https://www.protocol.com/Newsletters/pipeline/very-venture-c...) only to find themselves out of their depth. The software engineering approach doesn't translate to all things and can even be harmful in some fields (cough, Theranos). Physicists are another group that tend to have this conceit, i.e. if they weren't so busy solving physics problems they would solve the economy and world peace.
I have suspected for a while now that the Bitcoin blockchain is actually an attempt to break SHA-256. Bitcoin is built around incentives, and it has created an incentive for people all over the world to basically brute-force this algorithm and maintain a recursive set of low-entropy outputs.
Which would make the btc blockchain an incredibly expensive and valuable data set, for someone armed with the right mathematical theory.
Part of Security is knowing your adversaries power. I think you might be on the right track. You can’t put a price on the security of an algorithm. However, Satoshi gave us a very good metric for calculating it.
I don't see anything fundamentally novel here, other than we're not going to be just laughing at weird things that turn out to be Turing complete, they're all practical intrusion vectors now.
NSO get way too much credit/dramatization these days. They are mostly 2 things
* a shiny UI for customers
* a bank of 0-days
Those 0-days could be found in house, could be brought in from a new employee copying a previous employer, or could simply be purchased.
Most people in the IDF understand that when a great security researcher leaves 8200, the company they move to will probably have some of their secrets. There's really no way to stop a 0-day from leaking from a researcher like that.
This exploit has been closed, but we haven't heard anything about Pegasus not working anymore, so I'm just assuming they moved on to the next exploit. Previously there was a big WhatsApp exploit FB closed that had them hurting. I'm sure they always have multiple backups for when this happens.
There is, and has always been, a 7-figure market for high-quality 0-days. Hell, maybe it's 8 figures these days. NSO is just "in your face", which makes people angry.
NSO was caught, and that's why Google is crediting them. But this same exploit could have been heavily used by 8200/NSA/who knows who else.
This has already allegedly happened to Bezos (attacked by Saudi Arabia IIRC, which is an NSO customer). This was likely over his ownership of the Washington Post and its reporting on the killing of Khashoggi.
Yeah, billionaires and Trillion-dollar company CxOs have to step up their electronic security
Those conversations are important for a CEO like Bezos though. Let's not pretend MBS is not mega powerful. I would say those types of relationships are probably a bigger part of the value/work of a multinational's CEO than, like, micromanaging teams.
But it's super stupid he had just one phone combining personal, business, more private business. At least from the reporting that's what it sounds like happened.
Even the dumb ass Jan 6th coordinators and Meadows used burner phones. IIRC standard practice for political 'execs'/important leg committee staff.
The researcher that leaves the military takes with them general skills in reverse engineering and exploit development, but they cannot use specific 0days they know about from their military service. The specifics of everything done in the military is classified. People told me they couldn't mention in job interviews some of the skills they have because it's a secret. Like, if someone developed this Turing complete architecture on top of jbig2 decompression while they were in the military, it would be considered a secret that cannot be revealed.
> They cannot use specific 0days they know about from their military service
Of course they can, it is just illegal and might be classed as treason or similar.
Remember we are talking about getting exploits for nation states here rather than just some regular company - hiring spies is part of standard operations for the intelligence community and would be a valid zero-day acquisition strategy (depending on the protection offered for NSO by Israel).
> further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states
I mean the whole “nation state” or “nation state backed” hackers thing was always a liiiiitle (very) ambiguous right?
Does the evidence really even move the goal post or mitigate the convenient scapegoating?
Politicians and CEOs and certified IT professionals are all incentivized to say “it was a nation state there’s nothing we could have done!” and rely on their sycophants to never question it, instead of “we’re incompetent and powerless towards random teenagers who rented a rootkit before renting a compromised windows machine that happened to be located in russia”
It's a useful distinction because it clarifies your threat model: any attempts at security without a threat model is hokum, IMO. It's good to know the limits of your security stance by modeling how many resources your opponent can muster, and how many you can spare to defend yourself.
The resources required to develop these exploits (and mitigate against them), were at least an order of magnitude above the next tier, because there was very little sharing and reuse (except among allies). Now, thanks to NSO, any backwater tinpot dictatorship that can't provide reliable electricity or offer a coherent policy for longer than a few months at a time qualifies as a "nation-state" (i.e. hack anyone in the world), if they can spare a 6 or 7 digit budget to hire exploits.
What NSO/HackingTeam and similar offensive security companies did was to lower the bar on nation-state capabilities by removing the need to develop a local program over many years, and allowing the reuse of infrastructure, personnel and exploits by countries that aren't allies. Call it a SpaceX for hacking as opposed to space launches.
I think it is worthwhile to distinguish the two, and I think generally speaking it's the use of bespoke 0days that separates nation state attackers from all others.
One can't really arrange the funding of computer scientists/mathematicians working full-time on the thankless job of finding vulnerabilities without nation-state kind of money, as opposed to employing known vulnerabilities which carry lesser chance of success and greater chance of blowback in their execution.
So nation states as clients isn't the same as being state-sponsored or -backed, and a nation state as a former employer isn't the same either.
But ultimately I’m not sure the distinction matters if the main result is that hackers get away unscathed and the victims just deflect attention to the wrong targets
Blaming entire nations gives domestic justification for retaliation. No point giving up a card when it's handed to you. It is in a government's best interest to exploit every opportunity handed to them -- it's less effort than fabricating a reason when you need it later.
> Based on our research and findings, we assess this to be one of the most technically sophisticated exploits we've ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states.
> Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.
They create an emulated computer by decompressing an old image format inside a PDF file which has a .gif extension! That is top notch!
And still, in 2021, after so many exploits, realizing the futility of trying to fix these bugs and adding their "blast door" process, some Apple dev calls image parsing code where it doesn't belong. The people that are supposed to maintain the element of the OS that has been abused most by nation states do not know the internal APIs they are working with, even just to display looping GIFs.
This negligence is killing journalists and activists.
That's a fair point, but doesn't change the fact that it remained there after the code review that was surely(?) made after implementing blast door. Especially for image parsing APIs, considering they've always been the brittle part in the iMessage pipeline.
Not sure what Apple dev machines look like, but usually I can find stuff like this with a well-written grep or a shell script and my API definition, if I'm not sure I caught everything while refactoring. The codebase might be massive, but the team should be sizeable enough to handle it.
The image library also runs sandboxed, and they found a way to "escape it with ease."
Sandboxes don't reduce the need for careful programmers. They just trade less safe APIs for ones deemed more so. Your sandbox will have as many holes as the number of APIs you add to it.
In this case they obviously didn't realize it needed to be called inside the sandbox. That function name really is amazingly misleading about what it will do. Anyone could have made that mistake.
> iMessage has native support for GIF images, the typically small and low quality animated images popular in meme culture. You can send and receive GIFs in iMessage chats and they show up in the chat window. Apple wanted to make those GIFs loop endlessly rather than only play once,
Any chat or message software you want to be REALLY secure should not have support for rich media of any type. I am even suspicious and skeptical that Signal supports embedding animated images.
I can name exploits of this type on desktop PC operating systems going back probably 22-23 years...
I do realize that lack of rich media inline in messages is a non starter for most non-technical consumer end users.
Signal lets you embed animated images but they still won't let you send native resolution images from your phone to someone else. Signal drastically recompresses any image sent. The only end to end encrypted software I know of that allows that is iMessage.
How could Signal recompress images while retaining end-to-end encryption? Wouldn't any "recompression" happen entirely on the client-side, and therefore be fair game for hackers to bypass with their own payloads?
It's done clientside, and you can't remove it (on iOS) because only official Signal-published builds will receive push notifications of new messages from Signal servers (via APNS).
This doesn't apply to a sender of an exploit, but does apply to normal people who wish to send full res images or patch out the DRM in the Signal app.
Not an expert on this so I might be wrong, but I'm pretty sure that in homomorphic encryption you can't run an algorithm that reduces the size of the encrypted payload. Like, you could recompress, and the result would be smaller after decryption, but that only happens after decrypting.
nope! you can reduce the size. trivial example is just XOR: Enc(A) xor Enc(B) = Enc(A^B).. 2 bits in, one bit out.
what you can't do is implement variable-size compression like Huffman trees. if you think about implementing Huffman as a circuit, you have a fixed length output - the worst-case length. you can't read the output any more than you can read the input, so you don't know how much padding you can throw away. therefore it's useless.
the same principle applies to running any algorithm that has a dynamic computational complexity. so you can run a Turing machine in FHE, but you won't know when it halts.
It's my understanding that the signal client which is sending the image reads the jpg/png/whatever image file from local storage, recompresses it local client side, and then sends the smaller version.
then that offers no security at all, since an attacker could use a hacked client. unless clients also refuse to receive anything but one, very well-validated, format, so that sending anything funky would be futile.
No, just have the server reject anything at the /SendMessage endpoint over a certain size; presumably the client is resizing / recompressing images to hit a specific target.
• Compressing to a file size limit is actually difficult/expensive. Tools usually target some good-enough quality level, and then the file size depends on remaining entropy in the image. The limit would need to be conservatively high.
• Exploits aren't necessarily larger than an average image. Adversaries in this case are quite skilled, and may be able to codegolf it if necessary.
Messages supports arbitrary files up to 100 MB. Images are resized or compressed for the user experience on different devices. The server doesn't know what's in a message.
This is not true. The server in the Signal protocol is responsible for message storage and delivery, just in a way where it's hard to associate individual message payloads with individual users (except by IP address, of course).
As others have commented, this is absolutely mind-bogglingly hardcore. Kudos to the NSO Group engineers who designed and built this (regardless of your allegiances, and whether you like or dislike that they do this, and whether it's objectively good or evil or somewhere in between, you have to admit that it's deeply technically impressive).
Does anyone have a sense of who they sold this to and who used this particular 0-click exploit?
My country's dictator (Viktor Orbán) uses it to spy on the opposition and the president to make sure that he keeps control of Hungary. I would give more kudos to NSO if they helped us get rid of corruption in my country.
Sorry, but I can't agree here. This stuff is properly evil for most of the world's population, which also includes most HN readers (no, it's not just SV and 5 other guys). It's more often than not used to oppress common citizens, free thinkers, and truth tellers.
They are actively making this world a much worse place long term, and why? Pure greed for money and power. They don't even try to act like there is some moral or legal filter when choosing their customers.
NSO as a company is a highly amoral business too; kind of goes hand in hand.
It does to me, but then I have my own moral values.
How the Nazis organized the extermination of millions of Jews in concentration camps might also be amazing from a bureaucratic and organizational point of view, yet I completely fail to marvel at such an achievement in efficiency.
> regardless of your allegiances and whether you like or dislike that they do this and whether it's objectively good or evil or somewhere in between, you have to admit that it's deeply technically impressive
Might as well praise German logistics circa 1940-1945.
"The Wire has confirmed the numbers of at least 40 journalists who were either targets or potential targets for surveillance. Forensic analysis was conducted on the phones of seven journalists, of which five showed traces of a successful infection by Pegasus."
https://thewire.in/rights/project-pegasus-list-of-names-unco...
Also,
"In the midst of the heated West Bengal assembly election, the phone of poll strategist Prashant Kishor was broken into using NSO Group’s Pegasus spyware, according to digital forensics conducted by Amnesty International’s Security Lab and shared with The Wire."
https://thewire.in/government/prashant-kishor-mamata-banerje...
On the one hand you've got people writing insanely complex hacks like this.
On the other hand there's the guy who was doing whatever he wanted for years just by crafting dodgy plist files. https://blog.siguza.net/psychicpaper/
Same class of bug that completely broke Android app signing. In that case it was about ZIP file parsing differences (apps checked at install time by one parser, executed using another parser). And also the same bug class that allowed us to bypass Nintendo's mitigation for the Twilight Hack (Wii Zelda savegame exploit), twice. In that case it was about how they handled the savefile archives slightly differently from existing save data and the game itself.
Having multiple parsers for security-sensitive data that isn't just outright signed at an external layer is always a recipe for disaster.
Yeah, but I also understand that in general parlance VM means a higher level virtualization.
Eventually though you get into one of those annoying simulation vs emulation style arguments so I’m happy to accept either definition of VM, just as long as both sides agree on what it is that they’re discussing :)
It's less virtual than usual; it has full access to and control over the embedding process. This is an RM, a Real Machine running in the original access space.
And since you already have graphical output, because it is a GIF displayed in iMessage, and you have access to gestures, since you exploited the OS and can get access to any input, you should be able to have fully playable Doom in iMessage! You can even share that game with friends (who run unpatched iOS)!
I think that allowing overflows to go unnoticed is a mistake. Overflow on addition should cause an exception by default. It should be easy to implement in hardware and as it is UB in C, correctly written programs wouldn't break.
For example, imagine that you are counting money and, because of the overflow, millions turn into several cents.
Another evil thing is indirect jumps. They should be implemented using an index into a jump table.
Integer overflow is not UB in C, only signed integer overflow. Unsigned integer overflow is defined to be modulo 2^w. And there are plans in C2x or C23 IIRC to make signed integer overflow well-defined in terms of two's complement too.
integer overflow is UB because on some architectures it was trapping.
In practice for the last 30+ years the default behaviour has been non-trapping. So much so that making it trapping would break vast amounts of software that depend on it, so you can't change the general case behaviour in C, C++, etc, or "safe" languages like Java, C#, etc.
Newer languages do recognize this and make trapping the default behaviour, but "rewrite everything at once" is simply not a tractable problem.
> In practice for the last 30+ years the default behaviour has been non-trapping. So much so that making it trapping would break vast amounts of software that depend on it, so you can't change the general case behaviour in C, C++, etc
You can change it in C and C++, since the current behaviour is undefined i.e. give control of your computer to hackers.
GCC and Clang should make -ftrapv the default. They won't, because whichever one does it first will then perform worse on benchmarks than the other, and that's the only thing the devs care about. But they should.
You can change it to be trapping behavior, but doing so is problematic for architectures that cannot detect overflow at all of the supported widths in hardware because the software checks are slow.
Unfortunately (IMO), C and C++ has a sizable community that is unwilling to accept pessimizing behavior for various atypical architectures. This is not unreasonable, but it hugely limits the ability of the language to make decisions that work great for the 99%ile case.
Because too much code is completely broken if you do.
The only things that make use of overflow being UB are optimizing compilers, and they have reliably broken code because of this for 20 years. This means most developers have realized that pretending non-two's-complement architectures still exist is nonsense, and both C and C++ have significant pressure to actually define overflow as being 2c.
No. It isn't broken. There is a huge amount of code that assumes this behavior, and works correctly, because that is the way hardware works. That is a huge amount of code that has worked reliably for decades, because it is correct, because the expected behavior matches the hardware behavior.
Saying "it is UB therefore is a security bug" is nonsense.
Saying it shouldn't be UB is useful, and is being addressed by the C and C++ standardization committees, and that work will not change the behavior, it will simply remove the "it's UB" nonsense that optimizers occasionally use. At that point the defined behavior will be a twos complement overflow.
Saying it's UB and therefore can be arbitrarily broken is equally nonsense; breaking code that is correct, within the confines of actual real machines, for no reason other than "it's UB" is no more helpful than saying "why don't you just rewrite it all in X".
It's actually incredibly difficult given your definition of what is allowed, to write anything in C that is not UB.
> There is a huge amount of code that assumes this behavior, and works correctly, because that is the way hardware works.
It doesn't though, because gcc et al don't care how the hardware works; they can and do happily miscompile that kind of code into security vulnerabilities instead. If you're talking about embedded code that's compiled with a specific vendor's non-optimizing compiler then yes (but changing GCC's and Clang's defaults will have no effect on that kind of code), but if you're talking about code for mainstream desktop/server systems then no, it already doesn't and can't rely on wrapping overflow.
> Saying it's UB and therefore can be arbitrarily broken is equally nonsense, breaking code that is correct, within the confines of actual real machines, for no reason other than "it's UB" is not anymore helpful than saying "why don't you just rewrite it all in X".
But it's not correct, not just in theory but in practice. In real life, code that does this and gets compiled with a modern optimizing compiler like GCC or Clang is already an RCE unless proven otherwise. Yes wrapping is what a naive assembly translation would do. But the compilers don't do naive assembly translation and haven't for decades.
> It's actually incredibly difficult given your definition of what is allowed, to write anything in C that is not UB.
Yes, which is why we keep getting security vulnerabilities like this one.
The problem with nation states is that they don't pay people enough. I don't think a nation state can ever come up with something like this: it takes passion and genius, and those qualities demand higher premiums than governments are ever willing to hand out.
It's still pretty expensive! NSO charged a flat $500,000 fee for installing Pegasus. It charged government agencies $650,000 to spy on 10 iPhones; $650,000 for 10 Android users; $500,000 for five BlackBerry users; or $300,000 for five Symbian users.
Feels weird that a private company can target individuals for a price. How was this legal? Isn’t it illegal to hack the phone of a private individual? Or do they simply say here’s the tool, here’s the manual, do what you want just don’t tell us?
Phone companies charge governments to tap a phone line; this isn't that different, with the only exception that phone companies usually only have to follow requests made within the country they operate in, and those are usually backed by a warrant.
Like with any complicated tech, things are a bit more involved. Since their exports are controlled under the same regime as weapons exports in Israel, they likely have some oversight, at levels that go beyond NSO as a company itself, to ensure that their tech does not leak out and isn't used outside the bounds of what was agreed on.
These exports were very much part of the Israeli and quite likely the US foreign policy.
Some deals, like the one with KSA, probably should never have been greenlit, but many others unfortunately have had the outrage steered away from the main culprits.
They've also exported it to European nations such as Poland.
Poland, an EU and NATO member, used this software to have one of its government agencies spy on a prosecutor in charge of an investigation into some of the leading party's members; however, it didn't seem to generate as much outrage, and what little there was got directed at NSO or Israel, which is laughable.
Poland isn’t a state that normally could fall under any arms embargoes or export restrictions.
This software likely had very little to do with Khashoggi's fate. They didn't use it to lure him into a trap or to track him for an assassination; he was killed in a consulate after being invited to come in, and he came in of his own free will.
I’m far more interested in how some of their western clients have used this software and unfortunately so far no one seems to want to pick or steer the story that way.
NSO have been trying to argue that they're shielded from responsibility for their actions by being a de facto extension of the state that they've sold to and therefore enjoy sovereign immunity.
The collapse of that argument in the Facebook case is why Apple are now suing as well.
> Feels weird that a private company can target individuals for a price. How was this legal? Isn’t it illegal to hack the phone of a private individual? Or do they simply say here’s the tool, here’s the manual, do what you want just don’t tell us?
It's only illegal if you get caught. And then find someone to prosecute you.
For the capability and who they're charging, frankly, that sounds cheap. For an example of what a nation state is willing to spend to target a single individual, a hellfire missile costs about $150k and doesn't generate intel in the process.
A lot of people talking about governments as NSO's top clients... How do the governments actually pay them? Out of their pockets? State budget? In any ridiculous case, shouldn't this kind of payment be easy to track? Why is no one talking about this? Why isn't this forbidden on international level? The US (as world peacemaker) seem pretty chill about it. I thought this was 'merica!
The FBI couldn't, mainly because they haven't caught up to this level; it's not their primary objective, as they are first and foremost an investigative outfit.
The NSA, not to mention the entirety of the US defense industry, could have easily found a way to break the encryption on a single device, especially since they only had to break a relatively simple password/passcode. It's just a question of how much it would cost and how long it would take.
First of all, that's kind of a sticky subject in the US due to the Fifth Amendment. Second of all, I have no idea what the parent commenter is talking about. Rittenhouse willingly turned his phone over to investigators.
This is quite clever, but fundamentally it's only possible because of a buffer overflow. If the JBIG decoder had been written in Rust (just to cite one example of a language safer than C), this would have been impossible. Use dumb languages, pwn valuable prizes.
If the code can't be deleted then an alternative to rewriting is to sandbox it like Firefox recently started to do with wasm. That would have kept any exploit in the sandbox - let them have fun in there with that 70,000 step program where it can't touch anything...
Sandboxing using wasm has around 10% overhead, so a full rewrite might end up running faster. But recompiling the code takes less time and effort and will not introduce new bugs, so it's a useful option too.
Even if this was properly written C++ this wouldn't have worked, i.e. if the authors had used a std::vector instead of a hodgepodge of new and malloc. Even if you had overflowed the integer and used it to reserve some capacity in a vector, calling `push_back` would have reallocated the vector instead.
Containers and move/by-value semantics in C++ inherently avoid a lot of stuff, it's sad seeing so many C++ developers do not actually know the language and instead stick with a "C with classes" that doesn't provide any extra security compared with C.
As much as I'd like to agree with you, the real reason this is happening is because humans wrote the code and humans make mistakes. And even if you rewrite all of the software in Rust, it'll still have exploitable bugs.
Does it matter if it's a buffer overflow or a rusty pan, if the end result is someone reading your device's memory?
You're assuming that no additional attack vector is being introduced due to features unique to Rust. In my opinion, unknown issues are worse than known issues.
Since the first Fortran compiler one of the basic tenets of computer programming has been that the computer itself should help the human programmer express his/her ideas and help the human avoid basic accounting mistakes.
C doesn't do the latter. Blaming the human programmer for stupid accounting mistakes is misplaced. The human's mistake was in choosing a bad language.
“JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.
The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It's pretty incredible, and at the same time, pretty terrifying.”
This one will be another talking point right beside the "arbitrary code execution in SNES games via controller inputs" as a rebuke to arguments about even small systems (like an image decompressor) being "made secure".
I also keep thinking "The Cylons would totally write an exploit like this."
They must have spent tons of engineering effort to create this virtual computer to act as their foundation for further exploits. They don't deserve any sympathy of course, but it must really suck that their foundation disappears immediately with the fixed vulnerability.
I actually think you can write and test a NAND-gate solution in any external environment that is much easier for development and testing (say, plain C on Linux), and then transpile it into any other Turing-complete arch. Actually, this could be a really interesting project to work on: 1) write a VM in C, and 2) write a transpiler for another Turing-complete arch.
You think? These devs are some of the best devs in Israel. The best get too popular to work in secret labs like NSO. I find it hard to believe that the best devs are secret ones in Israel. But obviously, I could be wrong.
I'd imagine there is a fair share of amazing developers that have no interest in flaunting or broadcasting their talent, for at some level it just isn't necessary to advance their career.
It does not give me imposter syndrome. It just tells me I know absolutely nothing despite having a so-called advanced degree from "a top school". God damn it.
If this is the level of sophistication just going after an iPhone by a private company, it’s laughable to think our critical infrastructure isn’t able to be completely turned off on demand.
An invisible nuke that destroys access to clean water overnight across the nation, that’s the future!
No, it just means that they've found vulnerabilities that can be triggered without user interaction. This is entirely doable by just fuzzing or reverse engineering the released iOS binaries.
Of course – but you can definitely fuzz your way to the initial vulnerability. The VM stuff is done once you have that vulnerability and are writing the actual exploit, which is a manual process.
Entirely do-able by a team of experts with multimillion dollar budgets over the course of probably many months, doesn't sound at all similar to average hn commenter being able to do it before lunch.
Source code doesn't help that much, and sometimes the assembly makes some bugs more obvious. They really don't need the source. They just decompile it.
People without reverse engineering experience often think there's a massive difference between white-box and black-box auditing, but there really isn't. Yes, it takes longer, but not ridiculously so.
NSO aren't interested in being an overtly criminal operation; breaking into Apple and stealing source would be a giant liability they don't need to have. Their game is feigning ignorance as to what their customers do with their software. They can't afford to be caught commiting crimes directly.
I mean, it would just be a prudent business move once the first PoC comes out right? You know that Apple is going to patch it eventually. It totally makes sense to try to pop a dev box and exfiltrate the source code. The only question is if they can make it past Apple's network security - it's unlikely that devs are allowed to take their work MacBooks with iOS source code home.
> Recently, however, it has been documented that NSO is offering their clients zero-click exploitation technology, where even very technically savvy targets who might not click a phishing link are completely unaware they are being targeted. In the zero-click scenario no user interaction is required. Meaning, the attacker doesn't need to send phishing messages; the exploit just works silently in the background. Short of not using a device, there is no way to prevent exploitation by a zero-click exploit; it's a weapon against which there is no defense.
Having iMessage disabled and no SIM card in the phone (use an external wifi vpn router with a sim) is a mitigation, and is one that I use.
They are discussing a descriptive class of attacks and saying there is no defense against them. Clearly there are defenses; it's a bit sensationalistic.
I'm surprised that iMessage doesn't restrict content to a subset of predefined, preapproved, battle hardened types and variants. Any other content has to be converted before transport. Otherwise that content is just treated as payload (attachment).
Why use a library to support every format ever made? To inline image previews? Just show a thumbnail.
Why do anything with PDF? Just show a thumbnail.
I'm pretty sure iMessage already downsamples the puppy pictures I send to my mom, and does some kind of conversion to tunnel across MMS for non-iMessage recipients (e.g. Android phones).
Yeah, maybe next time don't "cancel" wikileaks just because they leaked something from your political side(referencing the DNC & Democratic leaks here).
Vault 7 was mind-blowing and somehow reinforced what was de facto known but until then lacked solid proof available to the general public [namely that just about everything is compromised directly, and when that's not the case, your output is mined]. Again, if the media were more keen on these issues (privacy, three-letter agencies being abusive, etc.), then people would remember the problems that exist in most computing systems today.
For people who at least skimmed through those vulnerabilities, tools, and leaked documents, it's not surprising at all that these kinds of exploits are still out there. It's actually disappointing and frankly pathetic that people don't pay attention to some things because they disagree with the source of the information (by the way, WL revealed things from the entire spectrum: right/left, Russia/Western countries/Asia/etc.). These 'NSO leaks' are not eye-opening exploits/techniques, but don't let me burst the bubble of "modern infosec journalism".
You have to hand it to NSO on this one. They might be evil but they sure aren't stupid.
They (loosely) created a virtual machine out of a (very) abstract vuln in the JBIG2 library to create their own logical Turing machine / computer that they then built their own simple architecture for...and then could allow for the computation of any arbitrary function within arbitrary memory. Brilliant; bravo. I salute the ingenuity of this one.
> Most of the CoreGraphics PDF decoder appears to be Apple proprietary code, but the JBIG2 implementation is from Xpdf, the source code for which is freely available.
When I worked at Apple, we encountered a Jailbreak exploit that relied on some poor code in one of the open-source type fonts. It appeared that someone committed a mod to the project that neglected to do an array bounds check, waited ONE YEAR!!! and then exploited the vulnerability to create a PDF document that would jailbreak a phone if the PDF was viewed.
I guess it is possible that something similar was done here.
>The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It's pretty incredible, and at the same time, pretty terrifying.
I bet even the Project Zero authors were profoundly amazed how NSO hackers managed to do that. Maybe they can hire them when NSO goes bust.
"And just because the source filename has to end in .gif, that doesn't mean it's really a GIF file. The ImageIO library, as detailed in a previous Project Zero blogpost, is used to guess the correct format of the source file and parse it, completely ignoring the file extension. Using this "fake gif" trick, over 20 image codecs are suddenly part of the iMessage zero-click attack surface, including some very obscure and complex formats, remotely exposing probably hundreds of thousands of lines of code."
90% of zero-days could be stopped at the front door by file format checkers doing static analysis on popular file formats (like ZIP, MS-CFB, Office, image, PDF) exchanged via message/email to reduce semantic gaps in the file format and weird-machine ambiguities that will inevitably be abused by attackers.
The problem is simple. Most codecs (and even vulnerable anti-virus software) were written in a time when Postel's law was the norm. It's much easier to write a file format checker as defense-in-depth for the vulnerable codec than to rewrite the codec, especially since a codec's interface typically can't expose policy decisions like a checker's interface can.
As a community, we could solve this problem tomorrow.
However, the file format checkers to defend don't exist, because the major email/message providers don't want to invest in them, because nobody would see this work. Even if we build them, they don't want to invest in tuning the false positive rate, because people want lax security not strict security on all message/email attachments they open.
> This is not an official Google product, it is just code that happens to be owned by Google.
Which generally means that the original author worked at Google when they wrote this code & "chose" to let Google own the copyright of this project rather than fill out more paperwork.
It doesn't imply the project was funded by Google--just that the author was a Google employee at some point.
That disclaimer appears at the bottom of every project that Google doesn't feel like officially supporting. Even the tcmalloc project has that disclaimer at the bottom and it's used in every single process running in Google datacenters right now. I think you'd be hard pressed to really make the case that WUFFS is not funded by Google.
> That disclaimer appears at the bottom of every project that Google doesn't feel like officially supporting.
Based on my understanding from conversations with Google staff, it's a disclaimer that's required if anyone at Google wants to write "personal" code & release it (under an Open Source license) while employed by Google.
(Unless they choose to undertake a much more paperwork-intensive alternate process with different requirements & implications.)
Personal code is different to code developed for reasons that were directed by Google (which they have no problem saying is unsupported, e.g. any version of Android older than 3 minutes or something :D )--which is why this disclaimer can be seen on such odd non-corporate things as retro-computing related projects.
Of course, sometimes such projects do become something Google wants to use (maybe WUFFS falls into this category)--so I expect we'll see more retro computing support on Stadia any day now... :D
I have this "what happens next?!" feeling about this post. So exciting. It's also exciting to know that you can send a pdf as any filename through iMessage and it will display! :)
Don’t think of these folks as “google” employees. Think of them as “really good hackers with corporate sponsorship”. They look for flaws in everything - windows, apple, Linux, and google software. You should read some earlier blog posts, they’re really high quality.
A large percentage of the planet has personal sensitive data stored by Google. If that data leaks, even due to a bug in another company's product through which Google has no fault, Google suffers. Google greatly benefits by having a secure Internet.
On this note, has google ever had a breach? I actually can't think of one off the top of my head, which is impressive for a company like google with so much data and such a large footprint
They've been completely breached by Chinese agencies in the past, and IIRC the revelations in the Snowden leaks prompted them to redo their entire internal networking layout because of concerns about state-level spying.
On the Android front they keep tightening up access (removing more power from root, more use of SELinux and other controls) because of breaches in one form or another.
They are very often on the receiving end of state level shenanigans. Finding bugs in software they use, helps them stay secure. Not to mention the goodwill earned.
Windows/macOS/Linux aren't the operating system any more, the browser is.
And the browser's job is to be constantly online the whole time and download and execute JavaScript that gets dynamically optimized for your CPU architecture using one of the fastest runtime compilers ever made (aNd WhiCh MiGhT HaVe BuGs iN iT), and then your CPU directly, blindly executes the result, with as little bounds-checking as the runtime compiler thinks it can get away with so it runs as fast as possible.
Zooming out somewhat, the new OS paradigm is the continuous download and execution of absolutely arbitrary code, all day, every day, from sources including hacked ad servers, successful social engineering campaigns and your blog.
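The paradigm described above can be boiled down to a few lines. This is just an illustrative Python stand-in, not how a browser engine actually works: a real browser JIT-compiles JavaScript down to native machine code, which is far faster and far harder to keep memory-safe than interpreting bytecode.

```python
# The "continuously download and execute arbitrary code" model,
# reduced to its essence. Python stand-in for illustration only.
UNTRUSTED_SOURCE = "result = sum(range(10))"  # imagine this arrived over the network

namespace = {}
code = compile(UNTRUSTED_SOURCE, "<remote>", "exec")  # the compile-on-arrival step
exec(code, namespace)                                 # direct execution of the result
print(namespace["result"])                            # -> 45
```

Every security property now hinges on the compiler and runtime correctly confining whatever `UNTRUSTED_SOURCE` turned out to be.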
And Chrome has like ~70% market share.
Because public company and "legally bound to create value for shareholders" and all that, it is very much in Google's interest that they maintain that market share because that lets them serve more ads.
So that's ultimately the reason. Google wants the world's most secure platform so they can guarantee their ads business.
In this case, it was already fixed by Apple's engineers. And as the article says, Citizen Lab (who captured the exploit in the wild) and Apple shared the exploit with Project Zero, who analyzed it and wrote up this blog post.
Project Zero people have found numerous bugs in Apple's software in the past. They look at all kinds of software that's written by all vendors.
The BlastDoor sandbox has been added since then (Project Zero has a post on that too), but this exploit happened outside the BlastDoor sandbox, so that doesn't change anything.
If previous iMessage exploits are anything to go on, it's likely this gives unsandboxed root access to all memory on the device.
Could it be that the authors of this post are trying to portray NSO as more powerful than they actually are? Could there be elements in Google Project Zero that work for NSO?
It is a well known method to make a spy operation more elaborate to worry adversaries.
You're right, it only hides it from the home screen. Good thing is the EU recently passed some legislation regarding this, so it might be there in the future, at least for European users! yay?
Amazing that Apple let this slip past. It seems pretty obvious why this is bad design, easy to exploit, etc., so maybe it was intentional and already being used by us when the NSO group caught wind through “back channels” and hopped on the gravy train.
"iMessage's .gif handling was a bit sloppy" is a believable problem; the idea that it was done deliberately to facilitate access to what amounts to a VM running in an old image compression format is a big stretch.
This isn't like goto fail, and even that one could be explained by developer oversight.
I mean the actual exploit is in decades old xpdf code, so that seems unlikely? The gif thing is simply the way that they get to the xpdf JBIG2 decoder.
Noticed a flaw on my phone and other people's phones where the default browser setting was not honored (on Android) and SMS links open in `Samsung Internet`, which barely gets updates and is a serious vector for attack.
On top of this, why should a link containing a malicious payload be able to speak to other parts of the system? Doesn't Android do a basic security measure called sandboxing and `principle of least privilege'[0]?
I am highly suspicious of every URL in my SMS messages app now thanks to these NSO revelations. I'm not especially interesting, so I doubt I had NSO-grade malware on my phone, but we need to protect the masses, not just those with a high profile threat model (Journalists, Dissidents, Activists, etc).
It's all configurable on a per-URL level on Android, it's just hidden deep in the settings. It's not so much that the default wasn't honored; it's likely someone at some point set Samsung Internet to open SMS links. You can go into the app settings/permissions/app defaults to try and reset it or set it to another app.
Go easy on me, I'm new here. I plan to comment a lot more as time goes by. My comment is purely anecdotal. I'm not saying `everyone now has malware`, just stating that classes of attacks can be killed by doing basic security like principle of least privilege & sandboxing (Android and Apple probably already do it, but then how are these attacks possible?)
I think the part of the article that touches on this is:
"(...) iMessage calls the following method in the IMTranscoderAgent process (outside the "BlastDoor" sandbox), (...)"
Looks like they have been decoding GIFs outside of the sandbox, which has been addressed later:
"Apple inform us that they have restricted the available ImageIO formats reachable from IMTranscoderAgent starting in iOS 14.8.1 (26 October 2021), and completely removed the GIF code path from IMTranscoderAgent starting in iOS 15.0 (20 September 2021), with GIF decoding taking place entirely within BlastDoor."
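The core idea of moving the decoding into BlastDoor is privilege separation: parse untrusted data in a separate, expendable process so a crash or exploit in the parser can't run inside the main process. A minimal sketch of that idea (illustrative only, nothing like Apple's actual code; the `risky_decode` parser is made up):

```python
# Minimal sketch of sandboxing-by-process-separation, the idea behind
# BlastDoor. Not Apple's implementation; the parser here is a stand-in.
import multiprocessing

def risky_decode(data, out):
    # Stand-in for a fragile parser (e.g. an image decoder). A real
    # sandbox would additionally drop privileges and filter syscalls.
    if not data.startswith(b"GIF8"):
        raise ValueError("unsupported format")  # worker dies; parent survives
    out.put(("ok", len(data)))

def decode_sandboxed(data):
    out = multiprocessing.Queue()
    worker = multiprocessing.Process(target=risky_decode, args=(data, out))
    worker.start()
    worker.join(timeout=5)
    if worker.exitcode != 0:
        return None  # parser crashed, hung, or misbehaved; main process unaffected
    return out.get(timeout=1)

if __name__ == "__main__":
    print(decode_sandboxed(b"GIF89a..."))    # ('ok', 9)
    print(decode_sandboxed(b"\x00garbage"))  # None
```

The point the article makes is that IMTranscoderAgent sat outside this boundary, so reaching the JBIG2 decoder from there bypassed the protection entirely.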
There has been something called a Pegasus framework on my iPhones since the 5s, and now on my XR. I have seen other people ask about the same thing on the Apple dev site, but just as I never got a response from Apple about what it actually is, neither have they.
There is a Pegasus Arm64 version too.