The reason this is interesting is not so much that some companies have apparently provided backdoors so that governments can spy on their users. We already knew (or could reasonably surmise) that that was happening.
No, the fun part is that one government's spies (India's) have apparently used these backdoors to spy on the agents of another country (the US). So, you see, by mandating these sorts of security compromises, governments have in fact made it easier for foreign governments to steal their own secrets—an irony apparently lost on the author of the Indian memo and on the authors of these policies.
Exactly. Any backdoor comes with the possibility of use by other parties: hackers, foreign governments, etc. That is why backdoors are always unacceptable, even if you trust the US government (or whoever is asking for one), because you would also have to trust everyone who does or ever will work for them, which is an absurd risk.
The lesson is: don't trust commercial security products. Only trust open-source, peer-reviewed products like TrueCrypt, GPG, etc.
If you want your data to be secure, add your own layer of encryption, preferably on your own physical disk, or at the very least run the encryption in an environment you control: not someone else's copy of TrueCrypt running on their server, but your own verified copy on your own machine, encrypting the data before you upload it.
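For example, here's a minimal sketch of that workflow in Python, assuming the cryptography package is installed; the filenames are placeholders:

    # Minimal sketch: encrypt locally, upload only the ciphertext.
    # Assumes `pip install cryptography`; "secret.txt" is a placeholder name.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this key on media you control
    cipher = Fernet(key)

    with open("secret.txt", "rb") as f:
        ciphertext = cipher.encrypt(f.read())

    with open("secret.txt.enc", "wb") as f:
        f.write(ciphertext)              # only this file ever leaves your machine

The point is that the key never touches anyone else's hardware; whatever you upload afterward is opaque to the host.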
Encryption is a major strategic issue, and few governments will lie down and accept that their citizens or other countries have the ability to securely store and transmit data. I was at an NSA show-and-tell recently and they had an original Enigma machine on display, which speaks volumes.
The article is about OS-level backdoors, not software-level backdoors. The aforementioned software wouldn't do anything against keyloggers on the phone.
If you're scared of your government, you shouldn't be using any form of communication without end-to-end encryption that you control fully. This much should have been obvious for decades now.
Criminals and revolutionaries are surely just as aware of these simple facts as we are, so these systems will likely mainly be used to spy on the innocent for political or personal gain.
If you make secure crypto a crime, only criminals will have secure crypto.
If things are really that bad, that in itself would raise red flags all over the place.
Related aside: I always wondered at people who push Tor for citizens of authoritarian regimes. Using it amounts to saying: please pin a "subject of interest" label on me, pretty please. IMHO, it is not the "content" that upsets these regimes; it is the desire to question the system.
You missed his point. If you have "any form of communication with end-to-end encryption that you control fully", that precludes "an OS-level keylogger".
I think we're using different definitions of "control fully." In my view, in order to control the encryption process fully, you must control the device doing the encryption. If you're using an OS with a keylogger, you DON'T have "end-to-end encryption that you control fully" because you don't control one of the ends.
If you fully controlled both ends, the security of the wire would be irrelevant. It could send copies of every message you send straight to your enemy's inbox and it would still be secure, because only the two fully controlled ends could decrypt the messages they send one another.
If you don't control the device, you don't control the encryption.
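To make that concrete, here's a toy sketch in Python (using the cryptography package; Alice, Bob, and the message are made up) where the wire is assumed to be fully hostile:

    # Toy sketch: the "wire" hands every ciphertext to the adversary -- and it
    # doesn't matter, because only the two controlled ends hold the key.
    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()   # exists only on the two endpoints
    alice, bob = Fernet(shared_key), Fernet(shared_key)

    on_the_wire = alice.encrypt(b"meet at noon")   # adversary copies this freely
    print(bob.decrypt(on_the_wire))                # b'meet at noon'
    # Without shared_key, the adversary recovers nothing from on_the_wire.

Of course, the moment a keylogger sits on Alice's device, it reads the plaintext before encryption ever happens, which is exactly the point about controlling the ends.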
"CALEA's purpose is to enhance the ability of law enforcement and intelligence agencies to conduct electronic surveillance by requiring that telecommunications carriers and manufacturers of telecommunications equipment modify and design their equipment, facilities, and services to ensure that they have built-in surveillance capabilities, allowing federal agencies to monitor all telephone, broadband internet, and VoIP traffic in real-time."
So there's clearly a problem here, and the only solution seems to be to use a 100% open-source device along with 100% open-source software.
I mean there's pretty compelling evidence that others now have access to what we're doing on our (closed-source) devices. Would it now be totally irrational to go the Richard Stallman route?
Given the US Gov (and I'm fairly certain most other govs as well) has wiretapping laws in place (http://en.wikipedia.org/wiki/Communications_Assistance_for_L...), I don't see why anyone would be surprised that the hardware providers also provide a mechanism to allow for tapping.
I would assume that the backdoor for Android devices is the Android Market itself.
Because Android is open source and each device manufacturer has its own build, it isn't really plausible to have a single backdoor within the core operating system. However, the closed-source Android Market application is the perfect candidate for such a mechanism; when you install an app via the market, all of the following happens:
1. You request the application from the Market and agree to any permissions requested.
2. The Market pushes an install instruction to the phone via Google's XMPP-based Android notification service.
3. The phone downloads and installs the .apk as instructed.
Step 3 requires no interaction from the user – this is how, for instance, you can send an app to your phone from the Market web site. But it also means that any government with sufficient leverage over Google can coerce them to use this mechanism to install an arbitrary SpyOnStuff.apk at any time.
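As an aside, you can at least audit what arrived this way: recent Android builds let you ask the package manager which installer was responsible for each package. A rough sketch (assumes adb is on your PATH and that your build's pm supports the -i flag):

    # Rough sketch: list each package and the installer that put it there.
    # Assumes adb is installed and "pm list packages -i" works on your build.
    import subprocess

    out = subprocess.check_output(
        ["adb", "shell", "pm", "list", "packages", "-i"], text=True)
    for line in out.splitlines():
        print(line)  # e.g. "package:com.example.app installer=com.android.vending"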
But if this is the case, it isn't entirely bad news for Android users; rather, it yields two means of protecting yourself which aren't necessarily options on other platforms:
1. Use an open source Android build such as CyanogenMod, without the closed-source Market app installed.
2. Proxy Google's XMPP notification service and require user confirmation before pushed APKs are installed. I don't know of any Android mods that currently implement this capability, but it would be nice to see in future versions of Cyanogen or similar.
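Something like option 2 could start as nothing more than a gate that queues pushed install requests and refuses to act without explicit approval. A hypothetical sketch in Python (the install_apk hook and the URL plumbing are invented stand-ins, not real Android APIs):

    # Hypothetical confirm-before-install gate; install_apk is an invented
    # stand-in for whatever actually performs the installation.
    def confirm_gate(pushed_apk_url, install_apk):
        answer = input(f"Push-install requested for {pushed_apk_url}. Allow? [y/N] ")
        if answer.strip().lower() == "y":
            install_apk(pushed_apk_url)   # proceed only on explicit consent
            return True
        return False                      # otherwise drop the push on the floor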
> I would assume that the backdoor for Android devices is the Android Market itself.
Not necessarily. The baseband firmware on your phone is probably more capable than you think. I played around with hacking my old HTC WinMo phone and with a little work I got the (Qualcomm) firmware to respond over USB. There's a complete file system on there that's nearly identical to the file system structure on my wife's dumb-phone (and accessed the same way). I assume they'd put everything they need at that level and don't have to insert anything into Android itself.
Yes, but even on devices where the baseband has full access to main system address space, the baseband device is something that varies from one Android phone to the next; on the other hand, the Android Market is a constant across almost all Android devices.
So which would you rather develop: a different backdoor mechanism for every Android baseband out there, or a single mechanism for all phones equipped with the Android Market?
This code probably doesn't vary that much. Across all the Qualcomm phones I looked at (smart and feature phones alike), it's clearly the same basic operating system: one underlying OS that's almost identical across all phones, smart or feature, that use the same chipset (in my case, all CDMA phones). So which is easier to develop: a backdoor mechanism that works for every phone, feature or smart, in exactly the same way using the same code, or a different mechanism for each phone OS?
You don't develop either. All you do is tell whoever wants to sell the phone that they must include a backdoor. That means the backdoor almost certainly gets implemented in vendor-specific code (whether their firmware, their add-ons, or their branch of the core OS).
Actually, given that the overwhelming majority of people _won't_ be running custom Android builds, it's also quite possible that Indian carriers are simply required to install whatever spyware the government requires, preferably without the knowledge of either manufacturers or Google. We're talking about a well-funded intelligence agency here, and you don't keep secrets by inviting unnecessary parties into the conspiracy, no matter how much you trust them or what "leverage" you have over them. Also, working with a global player like Google means Indian intelligence funds have effectively subsidized the development of tools that will in all likelihood be shopped to foreign spooks that spy on India. After all, it's not like they're going to get an exclusivity arrangement out of Google for this!
The market doesn't actually perform installations, though. Install is handed off to the package manager and you would need a backdoor there. But the package manager is part of AOSP.
Although, I have wondered whether Google might have had more secret weapons to take out DroidDream, if necessary, than asking it nicely to delete itself. If not, boy were they lucky. It is pretty ballsy to make an OS that, for all appearances, just assumes /system can't be compromised. It's a double-edged sword, of course: any API a virus scanner might use is also a vector for attack. I guess they're possibly relying on being able to push a targeted OTA update? That's the best contingency I could come up with when pondering "what if DroidDream hadn't been so incompetent." But there are also ways to prevent OTA updates from being applied, so that's probably a one-time trick before the next malware blocks all OTAs (methods to block OTAs from auto-installing are all over places like XDA/rootzwiki).
> Because Android is open source and each device manufacturer has its own build, it isn't really plausible to have a single backdoor within the core operating system
I doubt they would be asking Google to put in a backdoor. They would be asking the individual device vendors to do so. After all, it is the device vendor, not Google, who has to get the particular phone approved for sale in India.
My first impression was that the memo is fake.
The author refers to people (such as Shantanu Ghosh) without titles; and from what I recall of Indian bureaucracy, titles are important and you just don't drop them. They would always say "Hon. XYZ" or "Shri ABC".
Plus, it looks like the contents of the intercepts listed were included purely for gratuitous effect.
Is there any verified info showing a backdoor in Apple iOS?
From my brief reading, it sounds like the person writing the memo may not be able to distinguish between surveillance of the carriers and backdoors in the devices themselves. I'll believe the carriers are compromised long before I'll believe the operating systems are.
Specifically, one of the things Apple did with iOS 5 for iCloud was to add end-to-end encryption to the system. iOS has for even longer contained protections that make it harder for an app to read data belonging to other apps, and the flash storage has been encrypted since at least iOS 4 (I believe).
Yet the iOS 5 binaries are subject to the scrutiny of the jail breaking community, and if there is a backdoor here, I'd think that it might have been found. (Or will be soon now that its existence has apparently been leaked.)
Further, how could this actually work? (The memo is so full of jargon, and so nonsensical to me, that I stopped trying to interpret it pretty quickly.)
It's tricky to get intentional communication (FaceTime, iMessage, etc.) working on devices, let alone backdoors for a specific government. I don't think the Indian government could successfully deliver to Apple a spec for them to implement that wouldn't intrinsically cause noticeable problems and thus reveal its existence regularly.
In short, I suspect that a backdoor is kinda infeasible from a software reliability viewpoint.
Finally, it's also quite possible that this is a psyops campaign. By announcing that they have a backdoor (through a leaked memo), the Indian government could be attempting to bring pressure to bear from other governments that want their own backdoors, and thus enable India to ultimately get one for real. I remember reading months ago that the Indian government wanted more access to mobile devices but was being stymied by manufacturers.
I've been observing Apple fairly closely for over 30 years, and working with their operating systems, often at a fairly low level, for almost 20. I've never seen Apple do anything that would betray their customers' trust. People may object to the decisions Apple makes, but I have never seen a decision that, at the end of the day, didn't at least intend to do right by the customer (even when it failed to achieve that intent).
Thus I'm highly suspicious of any claim that Apple has put in a backdoor. That kind of claim is extraordinary and out of character, and Apple has far more integrity in my eyes than these documents do.
So, I'd like to see some evidence before we accept that this is fact.
Nope, you can easily get access to your files via USB. As for storage on iCloud, any keys would need to be recoverable via your Apple ID password which would likely make it the weak point of the scheme.
Any telecommunications device in the US is subject to CALEA so expect backdoors.
Actually, from iOS 4 on, you can't get at files via USB directly without entering the passcode. The filesystem itself is encrypted (AES-256) and unlocked once the passcode is entered on the device.
However, I believe that on a device without a passcode set, you can still just use USB to access everything.
Actually CALEA mandates two things: 1. the ability to intercept, 2. that intercepts not be detectable by end users. You can implement the functionality anywhere in your stack that you'd like, including the consumer device.
True, but putting it in the end-user equipment is a bad idea (and probably opens you up to liability), because it can be detected and mitigated from that side. You have to assume that whatever code you're running on a device not under your direct control will be observed, poked, and prodded.
The jailbreak community is not infallible. It's not like they have the source to analyze (and "bugs" slip by even when the community does have the source).
End-to-end encryption means nothing if the other end has to hand over your data anyway.
This article is precisely why I have a Linux box on my desk here and support Android only. When (not if) the government decides to blacklist and whitelist ideas, actions, and clicks, I can recompile the OS and comment out those lines.
Are there layman-friendly tutorials on these OS-level steps to ensure full control of encryption? If it remains a matter of editing and recompiling an operating system, the masses have no hope of reestablishing that control.
I do believe the US and UK governments have access to cell phone traffic (Echelon), so part of this is no surprise.
But what I don't necessarily believe is that the government of India is in on it, or that they are intercepting memos written by the United States about confidential matters. I feel this whole thing is a hoax.
Has this been confirmed by anyone? I wouldn't necessarily trust some "hacked memo" posted to Pastebin. Also, the title bothers me. Memos are leaked, not hacked.