Amazing. Did they need to jailbreak or physically open the phone to find all this stuff? They talk about reversing binary images and using their "Legilimency" toolkit; I wonder if a vanilla phone was enough to research all this and propagate through Wi-Fi.
I'm guessing there must be other jailbreaks involved to be able to observe and experiment on the iOS kernel side of things while developing the Wi-Fi chip exploit; going in blind from the Wi-Fi side alone sounds impossible. The question now is, are they sitting on 0day jailbreaks for current iOS versions, or did they have to do all the tests on legacy iOS versions?
It looks like the setup work for their research environment was all covered in part 1 (incidentally, all the parts are really interesting and worth a read if anyone hasn't read them already). Specifically, the reason they mention at the end of part 3 that
>The exploit has been tested against the iPhone 7 running iOS 10.2 (14C92).
was because iOS 10.2 has a known kernel exploit developed by Ian Beer [1], and they used that as part of the basis for their subsequent research. Presumably they either found some iPhones still running 10.2 (which stopped being signed a long while back) or, like many well-funded researchers, they just keep a set of iPhones loaded with different major iOS versions so they're ready to go if an exploit is found after signing stops (dedicated jailbreakers sometimes do the same thing if they can). And of course security patches themselves are handy for reverse engineering old exploits from whatever bugs Apple fixes.
In part one, see the section "Kernel Memory Analysis Framework".
Yup, it still does. In the face of a lucky-guessing attacker, PAKE basically degenerates down into plain unauthenticated Diffie-Hellman, which means Alice-Mallory has one key, and Mallory-Bob has a different key. Mallory could decrypt the messages from Alice and then reencrypt them for Bob, but Alice and Bob will still both see different keys. If they have some out-of-band means to compare those keys (which Mallory can't corrupt), then they can detect the attack.
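To make that concrete, here's a toy sketch of why the man-in-the-middle ends up with two different session keys that an out-of-band fingerprint comparison can catch. The parameters are deliberately illustrative, not a secure choice, and the fingerprint idea is just generic Diffie-Hellman, not any particular PAKE:

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters for illustration only (NOT a secure choice;
# real protocols use vetted groups or elliptic curves).
p = 2**255 - 19
g = 2

def dh_keypair():
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

def shared_key(priv, their_pub):
    secret = pow(their_pub, priv, p)
    return hashlib.sha256(secret.to_bytes(32, "big")).hexdigest()

# Mallory sits in the middle and substitutes her own public value.
a_priv, a_pub = dh_keypair()   # Alice
b_priv, b_pub = dh_keypair()   # Bob
m_priv, m_pub = dh_keypair()   # Mallory

key_alice = shared_key(a_priv, m_pub)  # key on the Alice<->Mallory leg
key_bob   = shared_key(b_priv, m_pub)  # key on the Mallory<->Bob leg

# Mallory can decrypt-and-reencrypt traffic between the two legs, but she
# cannot make the two session keys match. An out-of-band comparison
# ("read me the first 8 hex digits of your key") exposes her.
print(key_alice[:8], key_bob[:8], key_alice == key_bob)
```

Without Mallory, `shared_key(a_priv, b_pub)` and `shared_key(b_priv, a_pub)` come out identical, which is exactly what the fingerprint check verifies.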
Some time ago, I implemented a little tool for myself which backs up folders incrementally to Usenet (deterministic Message-ID creation from a secret key, append-only style with metadata, parity, encryption, etc., so you only have to remember one unique key to access all of your data, even as new data is added).
It can mount the current state of the usenet backup with FUSE and it's possible to browse through the files and listen to music etc.
I understand that it might not be a good idea to store all your data only on usenet, but I thought that was an interesting concept and a fun little project to work on :)
At the moment it's very tailored to macOS and has plenty of rough edges because it's for personal use.
But I'll gladly clean it up a bit and put it on GitHub if enough people are interested, and/or I'll write up how its "protocol" works. Just let me know (Twitter DM or something) if you're interested.
EDIT: basically it's one deterministic stream of messages for journaling everything, plus one recursive stream of folders and linked raw files. The deterministic lookup is roughly HMAC(type|index|revision|replication, key_for_locating), and it iterates through that.
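A minimal sketch of that lookup in Python (the field layout, truncation, and domain are my guesses at the idea, not the tool's actual format):

```python
import hmac
import hashlib

def message_id(kind: str, index: int, revision: int, replica: int,
               locating_key: bytes) -> str:
    """Derive a deterministic Usenet Message-ID from the locating key.

    Anyone holding only the key can recompute where every journal entry,
    folder record, or raw block lives, without storing any index.
    """
    payload = f"{kind}|{index}|{revision}|{replica}".encode()
    digest = hmac.new(locating_key, payload, hashlib.sha256).hexdigest()
    # Usenet Message-IDs have the form <local-part@domain>.
    return f"<{digest[:40]}@backup.invalid>"

key = b"the one unique secret key"
# Iterate the journal stream: index 0, 1, 2, ... until an article is missing.
ids = [message_id("journal", i, revision=0, replica=0, locating_key=key)
       for i in range(3)]
for mid in ids:
    print(mid)
```

Because the derivation is keyed, the IDs look random to everyone else, while the owner can regenerate the whole sequence on demand.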
Setting up wego on Raspbian was a bit painful (Go in aptitude is 1.3 while wego requires 1.5); the easiest way that works out of the box is simply running curl wttr.in/Location ;)
Usenet is also usable for backups. Normally you would upload a huge encrypted RAR.
Some time ago I built a tool where you can store files/folders incrementally on Usenet with encryption, parity, etc. I put a lot of effort into it.
It was possible to restore a directory tree from a unique ID (which you could write on a piece of paper) that securely resolves to a chain linking to meta/raw blocks via Message-IDs, and you could mount the whole thing with OSXFuse.
The ID was reusable after updates to the tree, so incremental backups worked without a new ID.
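One way such a reusable ID can work (purely my guess at the scheme, with `article_exists` standing in for an NNTP STAT check) is to derive the root Message-ID from the ID plus a revision counter, and probe forward until an article is missing:

```python
import hmac
import hashlib

def root_message_id(backup_id: str, revision: int) -> str:
    """Hypothetical: derive the tree-root Message-ID for one revision."""
    digest = hmac.new(backup_id.encode(), str(revision).encode(),
                      hashlib.sha256).hexdigest()
    return f"<{digest[:40]}@backup.invalid>"

def latest_revision(backup_id: str, article_exists) -> int:
    """Linear probe for the newest revision; a real client might probe
    exponentially and then binary-search to cut round trips."""
    rev = 0
    while article_exists(root_message_id(backup_id, rev + 1)):
        rev += 1
    return rev

# Toy server state: revisions 0..4 of this tree have been uploaded.
posted = {root_message_id("paper-slip-id", r) for r in range(5)}
print(latest_revision("paper-slip-id", lambda mid: mid in posted))  # prints 4
```

The written-down ID never changes; each incremental backup just posts a new root under the next revision number.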
I thought this would be a nice alternative use for binaries on Usenet instead of piracy stuff. But I never released it because I think that it would lead to pollution of the Usenet network.
Is anyone interested in this? Or maybe somebody has an idea on how to use this without polluting the network? Would love to hear some thoughts about this!
I wonder if you could convince cperciva (Tarsnap) to exploit Usenet as an encrypted block store (as a cheaper alternative to S3).
> I thought this would be a nice alternative use for binaries on Usenet instead of piracy stuff. But I never released it because I think that it would lead to pollution of the Usenet network.
> Is anyone interested in this? Or maybe somebody has an idea on how to use this without polluting the network?
No way around it, it's absolutely an abuse of the network. That being said, so is the piracy on the binary subgroups. I think the end use would be small enough to not materially affect the binary NNTP hosts anyway.
Not sure about being the middleman yet. What if the big Usenet providers decide to delete data or we violate their non-business-usage policies? But on the other hand it's really cheap indeed!
We can cancel our Usenet subscription and renew it when the data is needed. So I thought it's maybe better to cut out the middleman by giving the software away.
About the abuse: I imagined the method being used like those unlimited-disk-space providers that had to get rid of the plan because users actually used it as advertised.
Of course, apart from that, anyone can write such software. So it's maybe just a matter of time? Though I couldn't find any other solutions besides RAR archives.
The Usenet can be used as a key-value store with handicaps. And stuff can be built on top of that.
I don't understand how you could use newsgroups for storage, except in a "I don't care about data loss" kind of way. I haven't used USENET in decades, but NNTP is a messaging protocol; it says nothing about storage, AFAIK. I do recall that back in the day, messages in busy newsgroups would expire rather quickly (probably due to limited storage on my news server at the time). So if I post a message containing my backup to a binary newsgroup, what assurance do I have that I will be able to get it back?
Commercial binary NNTP providers have basically infinite retention at this point, if your content doesn't generate DMCA takedown requests. E.g. http://www.news.astraweb.com/ $10/mo gets you 2660 days (>7 years) of retention, growing at roughly one day per day. (Four years ago, 3 years of retention was common.) There are also pay-by-download-in-GB a la carte plans that would be good for backup-only use (because upload is free).
Even if it isn't infinite, you can download and repost every 7 years if you care about having backups for longer than that.