Most software already has a “golden key” backdoor: the system update (arstechnica.com)
184 points by discreditable on Feb 28, 2016 | 61 comments



But, at some point there must be trust. If you don't trust software, you can try to sandbox it, but now you have to trust the sandbox. This devolves very rapidly. Open source at least provides a facsimile of recourse - just go read the code - but how much of your currently-running open-source code have you actually read? For that matter, if you had, could you be confident that you'd understood it? The Underhanded C Contest is a thing, after all. A sufficiently paranoid individual can only run code they wrote themselves. Or, they can choose to run code without a strong understanding of what it's doing.

If Apple subverts their updates, that's mostly interesting as a signal of their trustworthiness moving forwards. The coolest thing about this is that we know it's happening at all, I think.


> Open source at least provides a facsimile of recourse - just go read the code - but how much of your currently-running open-source code have you actually read?

Wait, what? That is not the recourse that open source provides.

The great thing about open source is that you don't need every person to read the code, just one person who can either catch or verify the absence of user-abusive material.

Moreover, even if zero people read the code today, it is preserved so that state (or corporate) abuse can be revealed later, providing another disincentive to introduce abusive material.


Alright, but now you trust that person. Which might be fine, but as an exercise in paranoia is not the greatest answer. As a social proof, somewhat better - create the opportunity for many people to examine, and at least some of them will find things and talk about it. Now the trust is in the shape of the general mass of reviewers - that it contains people who will review the code and reveal their findings.


> Alright, but now you trust that person.

In a word: no. With open source code, you could use software authored by the NSA, like SELinux, or you could even hire a manifestly untrustworthy party like Hacking Team to author some code and still be able to trust the code.

In Apple's case, there is a fairly good reason to trust Apple because it would be a hell of a kabuki theatre production to have the FBI and Apple battle in a Supreme Court case while colluding in secret. But would you trust a defense contractor? A telco? Limit or ideally eliminate the need for trust. Fortunately it is possible to reduce the need for trust below having to trust groups or individuals.


Isn't that discredited by Apple's "goto fail" bug? A critical function was mistakenly circumvented in an extremely transparent way, and yet the source code sat on their website for a long time without anybody noticing. Nobody even ran Coverity on it.
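(For anyone who hasn't seen it: the snippet below is not Apple's actual code, just a minimal compilable sketch of the same control-flow mistake that sat in SecureTransport's SSLVerifySignedServerKeyExchange. A duplicated "goto fail;" runs unconditionally, so the final signature check is never reached while err still says "success".)

  /* Sketch of the "goto fail" pattern; the helpers are stubs. */
  #include <stdio.h>

  typedef int OSStatus;                            /* 0 == success      */

  static OSStatus hash_update(void) { return 0; }  /* stub: always ok   */
  static OSStatus hash_final(void)  { return 0; }  /* stub: always ok   */
  static OSStatus verify_sig(void)  { return -1; } /* stub: BAD sig     */

  static OSStatus verify_server_key_exchange(void)
  {
      OSStatus err;

      if ((err = hash_update()) != 0)
          goto fail;
          goto fail;              /* duplicated line: always taken      */
      if ((err = hash_final()) != 0)
          goto fail;
      err = verify_sig();         /* never executed                     */

  fail:
      return err;                 /* returns 0: forged signature passes */
  }

  int main(void)
  {
      printf("verify result: %d\n", verify_server_key_exchange());
      return 0;
  }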


goto fail was in OpenSSL which many organizations use, but your point still stands.


No, this was a bug in SecureTransport, Apple's custom TLS implementation.


Oops. You're right. Sorry.


It's not just reading the code -- static analysis tools can provide some guarantees that the software isn't exfiltrating information from your box, making unsolicited connections, or leaving unexpected ports open.

In theory, we can perform the same analysis on the compiled program's bytecode. As the decompilation ecosystem gets better, we may view machine code or bytecode as transparently as source code.

Of course, your Apple EULA may forbid you from decompiling the machine code -- but it can be argued that you're not 'reverse engineering', you're just doing a virus scan.


Static analysis tools are very useful for identifying accidental security defects, but they really don't guarantee the absence of a deliberate security flaw or back door. You have to assume that the attacker has access to the same static analysis tools, and can thus find tricky ways to cause false negative scan results. Or perhaps the static analysis tool itself has been compromised?


All fair points -- nothing is guaranteed, but sooner or later you have to trust your tools. Like, maybe there's a backdoor in your compiler so that certain lines of code are compiled to notify Chairman Mao when you shop for red notebooks...

The reproducible builds projects go a long way towards preventing this by producing identical bytes from different compilation chains. Ultimately it's good to have a combination of static analysis, multiple toolchains, and 'many eyes' providing checks and balances for each other.


How do you know you are running the same software that everybody else is reviewing?


One solution is reproducible builds and signatures.

The bitcoin community, for example, uses Gitian to reproducibly build bitcoind. Both Bitcoin Core and Bitcoin Classic host repositories with signed hashes for the output of those builds:

https://github.com/bitcoin/gitian.sigs
https://github.com/bitcoinclassic/gitian.sigs

(As I understand it, several Altcoins do the same as well)

Anybody can follow the published guides for how to perform such a build and compare their results with the published ones. Because the published hashes are signed, you have a reasonable degree of certainty that a variety of people are involved in the process, which also gives you a greater degree of confidence in the quality of the binary releases even if you don't want to compile it yourself. (And if you do compile it, you're free to add to the consensus that the binary build is good by PRing your own results.)
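To make the "compare their results with the published ones" step concrete: setting aside the PGP-signature checking on the sigs repos, the final comparison boils down to hashing the artifact you built and checking it against the published digest. A rough sketch (file name and expected digest are placeholders, and it assumes OpenSSL's libcrypto is available; build with cc verify.c -lcrypto):

  #include <stdio.h>
  #include <string.h>
  #include <openssl/sha.h>

  /* Hash a file with SHA-256 and write the digest as lowercase hex. */
  static int sha256_file_hex(const char *path, char out[SHA256_DIGEST_LENGTH * 2 + 1])
  {
      unsigned char buf[4096], digest[SHA256_DIGEST_LENGTH];
      SHA256_CTX ctx;
      size_t n;
      FILE *f = fopen(path, "rb");

      if (!f)
          return -1;
      SHA256_Init(&ctx);
      while ((n = fread(buf, 1, sizeof buf, f)) > 0)
          SHA256_Update(&ctx, buf, n);
      fclose(f);
      SHA256_Final(digest, &ctx);
      for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
          sprintf(out + 2 * i, "%02x", digest[i]);
      return 0;
  }

  int main(void)
  {
      const char *artifact  = "bitcoind";          /* placeholder: your local build     */
      const char *published =                      /* placeholder: digest others signed */
          "0000000000000000000000000000000000000000000000000000000000000000";
      char local[SHA256_DIGEST_LENGTH * 2 + 1];

      if (sha256_file_hex(artifact, local) != 0) {
          perror(artifact);
          return 1;
      }
      printf("local:     %s\npublished: %s\n", local, published);
      puts(strcmp(local, published) == 0 ? "MATCH" : "MISMATCH");
      return 0;
  }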


Please see the Reproducible Builds project, a vital contribution to answering this question.

https://reproducible-builds.org/


Reproducible builds do not cut it. If you have a known good binary signature, you'd also have a known good source signature and wouldn't have the problem.

Or, to put it better: where do you get the certificate to check your build from? At extreme paranoia levels, you simply can never be sure you have the same software as everybody else, so the only safe alternative is reviewing your copy yourself.

(How do you know the computer is showing you the correct contents of your files? I haven't thought that through well enough yet.)


Diverse Double Compiling is a proven solution to the Trusting Trust problem [0]. So if a package maintainer signs a package and posts that signature on an HTTPS page, I can have a high level of confidence that the software I compile and run on my machine is identical to what everyone else is reviewing and running.

Here is some advice from Schneier on running secure software against a state-level adversary [1][2]. However, even that is not immune from a black bag job [3].

[0] http://www.dwheeler.com/trusting-trust/
[1] https://www.schneier.com/blog/archives/2013/10/air_gaps.html
[2] https://www.schneier.com/blog/archives/2014/04/tails.html
[3] https://en.wikipedia.org/wiki/Black_bag_operation


Surely you aren't suggesting that a reasonable answer is to read the code yourself and compare it to a known version?

Obviously, the mainstream way is a hash-based file verification.

Which, again, not everybody needs to do - only a small number - in order to catch a bad actor in the act.

But I presume you are trying to make some bigger point. What is that?


It's not reasonable at all. But the only correct answer is reviewing the code yourself.


In Debian, with reproducible builds.


> But, at some point there must be trust.

Do you trust the developers? Okay.

Do you trust the developers, their infrastructure, AND the supply chain? Maybe a bitter pill to swallow.

Recommended reading: https://defuse.ca/triangle-of-secure-code-delivery.htm


If you trust the hardware, that is easier than it sounds. You just have to review the core and the sandboxing software - then you sandbox the rest.

The reason people mostly don't bother is that they can't also trust the hardware (in fact, our software is often more trustworthy than the hardware). Thus, the point is moot.


> A sufficiently paranoid individual can only run code they wrote themselves.

I'd say that is not sufficient because even in this case you trust someone: the manufacturer of the CPU on which the code would run.

It might surprise some people, but you can examine a piece of software to check whether it has a backdoor, even if it is closed source, by reading its disassembly. Sure, it requires some skill and is a bit time-consuming, but it's doable for an ordinary individual. Reverse engineering software is not as difficult as many think. And as a matter of fact, a large number of people are reading the disassembly of widely-used software to find vulnerabilities to sell on black markets. So I think it's unlikely for Windows or iOS to have maliciously planted backdoors.

On the other hand, it's tremendously difficult for an individual without a large budget to reverse engineer hardware, especially CPUs. So if I were them I'd choose the CPU as the place to put a backdoor, because virtually nobody reverse engineers a modern CPU, and thus it'd be very unlikely to be found.

By the way, contemporary CPUs can update themselves through microcode updates.


> But, at some point there must be trust.

We may want to reach a point where we trust the things we use, but if we're using a security-grade definition of trust and we're honest with ourselves, I think every one of us would admit that we're using something(s) that we do not trust. There just isn't enough time to properly review, test, and verify things.


Generally speaking, I trust Apple hardware. If I have adversaries powerful enough that they can convince Tim Cook to manufacture a crooked device and make sure that I get it when I buy my phone - you can write me off as dead anyway.

I don't trust Apple's iOS software (unlike their OS X software), because I am not in a position to choose whether I trust them or not. The device decides.


Trust isn't binary, and verification doesn't have to be individual - it can be community or ecosystem verification.

If 3rd parties can audit software (including analysis of binary-only software), and can observe the software's behavior, and can watch the software's network traffic, then the chance of being caught violating user trust will generally be high enough to make the liability of being caught a genuine concern.

However, if updates are automatic, encrypted, and platform DRM prevents 3rd-party audit/analysis, the chance of being caught starts to dwindle down towards zero. That entire trust ecosystem disappears, and what we're left with is absolute trust in a corruptible third party, and no mechanism with which to verify.

The latter is exactly what Apple has built. They have a backdoor: the means to push absolutely trusted software while preventing all third-party audit and analysis.


At least for some problems, you need very little trust. For example, you don't have to trust software for sorting your stuff, because checking that something is correctly sorted is very easy. There is an entire field of algorithms that produce answers that are easy to check; it's called certifying algorithms.
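A tiny sketch of what that looks like for sorting - checking the output is trivial compared to trusting the sorter (a full certifying check would also confirm the output is a permutation of the input):

  #include <stddef.h>
  #include <stdio.h>

  /* Check the claimed result of some untrusted sorter. */
  static int is_sorted(const int *a, size_t n)
  {
      for (size_t i = 1; i < n; i++)
          if (a[i - 1] > a[i])
              return 0;
      return 1;
  }

  int main(void)
  {
      int out[] = {1, 2, 3, 5, 8};   /* output we were handed */
      printf("sorted: %s\n",
             is_sorted(out, sizeof out / sizeof *out) ? "yes" : "no");
      return 0;
  }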

Unfortunately that doesn't really work if you have to trust your software _not_ to do tasks that you don't want done, like sending your personal stuff to a third party.


Even there, you have to verify that it doesn't have other side effects beyond sorting (like phoning home your data) and that the mechanisms that enforce any sandboxing meant to stop this also work, etc.


> can only run code they wrote themselves

They of course can't use compilers either [1] and have to write the machine code by hand. And if you consider that the CPU's microcode is code as well, which can be updated, what are they to do then?

So I agree, you need trust, and the fewer parties you need to trust, the less chance of getting bit.

[1]: http://c2.com/cgi/wiki?TheKenThompsonHack


Remember when software didn't need updates almost every day? It seems like we've regressed in terms of quality and the general principle of doing it right the first time. I understand that some things do change and bugs do occur, but I don't think everything needs to, nor should the rate of bugs being found be anywhere near what it is today.

Personally, I prefer stability over "new features", turn off automatic updates, and read changelists carefully. Anything which doesn't have a good description and rationale of why it needs to be changed, and how that is relevant to my usage of the software, doesn't get changed. (And software which doesn't give me the option of doing so, doesn't get used either.)


While I strongly agree with the old tailoring principle of "measure twice, cut once", I'd like to point something out. If older software had fewer known bugs, that doesn't necessarily mean it had fewer bugs. We see years-old security bugs (Heartbleed, anyone?), or even decade-old bugs, discovered constantly. Software is expanding into people's lives faster than ever. As a consequence of software being used more, both the bug-detection rate and the bug-exploit rate will inevitably increase.


>"Remember when software didn't need updates almost every day? It seems like we've regressed in terms of quality and the general principle of doing it right the first time"

Keep in mind that any PC now lives in an interconnected environment; as such, a mechanism to deliver fixes quickly is actually more a solution than a problem.

For reference, take an unpatched XP machine, connect it to the internet, and search for "how to get thin while eating", and your system will automatically install an "antivirus" for you ;)

>"I prefer stability over "new features", turn off automatic updates, and read changelists carefully. Anything which doesn't have a good description and rationale of why it needs to be changed, and how that is relevant to my usage of the software, doesn't get changed."

For that you need to become an expert on anything that gets patched or updated. Second, it may be more about how someone else would use your system than how you would... as in an exploit.


> as such a mechanism to deliver fixes quickly is actually more a solution than a problem.

It's not a solution, it's an excuse for lazy testing.

Same as how better hardware supported sloppiness in engineering, turning a 15 MB CD/DVD burning program into a 500 MB+ software package (the software starts with an "N").


> It's not a solution, it's an excuse for lazy testing.

While there does exist a pressure to push a product before enough testing is done, to say that everything that's wrong is all down to one "simple" fact is just trying to reduce a complex system into something simpler to understand, and then patting yourself on the back as having "solved" it.


"It's not a solution, it's an excuse for lazy testing."

How do you test absolutely everything? In all environments? With every possible user?

On the other hand, it can also be used the way you mention - and here I think about the software on calculators like the 12C... which probably could have a bug... that you can't fix.


I remember that time. Software quality was really bad back then. You were lucky when you hit a version that didn't crash - hence "never change a running system": it doesn't crash and does approximately what we want it to do, so let's stop updating it. At least this is what I experienced in the 90s in mid-sized companies using Unix and Windows NT4.


  > It seems like we've regressed in terms of quality and the
  > general principle of doing it right the first time.
This is what Agile ideologues have brought us. "Valuing working software over documentation" does not, in my experience, tend to result in a workable development process, and that just doesn't lead to working software.


My experience of "agile" was that every hack-and-fix shop in the universe heard a telephone-game version of "stop doing boring stuff you never liked anyway" and stapled a cardboard sign saying "WE ARE AGILE!" above the doorway.

Meanwhile, nothing changes.

The only successful agile method I've personally seen work is XP, and it requires enormous and near-universal discipline. In that respect it is no different from pre-Agile methods that worked.

Agile isn't about jettisoning good software practices. It's about making them smaller. And the signers of the Agile Manifesto weren't the first to think of it. All the ideas were in the literature, in the open, for decades before they coalesced. Even the concept of taking necessary activities and shrinking them is old -- Watts Humphrey beat everyone to it with the Personal Software Process, which is the CMMI shrunk down to the scale of a single engineer.

Disclaimer: I work for one of the most notoriously pro-agile shops of all -- Pivotal Labs. Before I came here I was skeptical. Now I'm notoriously pro-agile, with a generous dash of No True Agile Methodology.


There is no way of guaranteeing doing it right the first time. Most software development methodologies adopt an iterative process as a measure of caution, prevention, risk avoidance, and similar concerns.

In fact, CMMI mentions the importance of measuring, iterating, and fixing as soon as you can... which is the response to the premise of being unable to "just do it right".


I don't think I get where your post is going. When I hear some combination of "getting it right" and "first time" I don't presume that the development team only gets one crack at something. Instead, it just sounds like iterating internally on something until it's ready without cutting corners.


It goes to show that the poster's idea

>Agile ideologues -> just doesn't lead to working software

is a flawed one; if you look at the Agile core values, all of them are focused on actually trying to reduce errors and improve quality [0].

On the other hand, it's a very broad generalization, and it leaves open the question "what does actually lead to working software?"

[0] http://www.agilemanifesto.org/


On a small point of order, I'm slightly worried that you might have read "ideologues" as a typo for "ideologies" - very much not the case! I very much meant ideologues.

I don't take much issue with the underlying ideas of the Agile manifesto - it's the ritualisation and thoughtlessness that it can lead to that irks me. The "anti Agile manifesto", many though its faults are, rings true in a lot of places, to my mind: "backlogs are really just to do lists" seems pretty much inarguable, for one.


  > There is no way of guaranteeing doing it right 
  > the first time.
This is especially true if (and I'm not saying this is a correct implementation of Agile principles) one actively avoids defining what constitutes doing it right up front as an article of faith.


> Remember when software didn't need updates almost every day?

I'm old enough that I actually do remember those days. Back then we had "works best with Internet Explorer" and MS Office viruses. I don't want to go back to that world, rolling updates make the world better for everyone.


I think the timeframe OP alludes to predates "works best with Internet Explorer" and Office viruses by a decade.


Oh, so decades before semantic versioning, where an update was shrink-wrapped in a cardboard box on a shelf at the nearest Computer City and resulted in an entire night of floppy disk swapping instead of working?

Those really were the days...


Actually no, I don't remember any time when software was better written than today. Early PCs picked up viruses from floppies and crashed all the time. The Morris worm took out most of the Internet accidentally in 1988. We had no encryption and little memory protection. By today's standards, there was little or no security back then.

We didn't have to upgrade software all the time only because most computers weren't directly connected to the Internet and because the Internet was a much smaller, safer place.

As the security threats have increased, software has gotten better. Mobile OSes are better sandboxed than PCs or workstations ever were. A Chromebook is pretty secure too. But the threats are getting worse.

Perhaps someday we will get the point where there is some critical software that's proven correct and no longer needs to change. But I think it will require open source hardware that doesn't change much either, and software that's written in newer, safer languages that make it easier to prove correctness.

That's not where we are today. People buy a new phone every 2-3 years, and they just came out with a new USB standard. When we get to the point where new phones are built using essentially the same components as old ones and the interconnects never change, perhaps things will slow down a bit and longevity might become a thing.


The flip side of this is that software didn't get non-major-version updates unless a bug was bad enough to justify mailing out floppies with the patch. Minor bugs pretty much lived forever.


Any non-trivial piece of software takes a lot of testing, especially when it's multi-threaded, multi-language, etc. It takes a lot of testing to catch some of these corner cases. However, time to market is super important, especially in the current software landscape. So there are trade-offs, pretty much like everything in engineering. It seems quality is not as important as time to market, so that is what takes precedence.


I might have 12 apps installed (Android 6), and yet every day something or other is being updated. It is annoying. I think it'd be less annoying if there were delta updates.


The important difference between a FLOSS system like Debian versus the walled garden of Apple is the ability to choose which keys you trust. If I lose trust in one of the Debian signers I can remove that key. I can select a different repository to download packages from. Or I can stop accepting updates and apply patches manually.

As to a key being a single point of failure, PGP allows for multiply signed documents. Couldn't Debian require packages to be signed by at least two keys?
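A hedged sketch of what an "at least two keys" policy could look like - not an actual Debian/apt mechanism, just the shape of the idea: count how many detached signatures verify (here by shelling out to gpg, whose exit status is zero for a good signature from a key in the local keyring) and refuse to proceed below a threshold. File names are placeholders:

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      const char *pkg    = "package.deb";                      /* placeholder  */
      const char *sigs[] = { "maintainer.sig", "buildd.sig" }; /* placeholders */
      const int required = 2;
      int valid = 0;
      char cmd[512];

      for (size_t i = 0; i < sizeof sigs / sizeof *sigs; i++) {
          snprintf(cmd, sizeof cmd, "gpg --verify %s %s", sigs[i], pkg);
          if (system(cmd) == 0)               /* exited 0: good signature */
              valid++;
      }

      if (valid >= required) {
          puts("enough independent signatures: OK to install");
          return 0;
      }
      fprintf(stderr, "only %d valid signature(s), need %d\n", valid, required);
      return 1;
  }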


> Is it reasonable to describe these single points of failure as backdoors?

Yes. The case that Apple is making is, in part, that the FBI is forcing them to use a back door by putting Apple's stamp of approval on it with their signing key. Whether or not you agree that Apple must create the back door, they are still being asked to approve its use. Apple says this is compelled speech and violates the First Amendment. Their brief has more details about it. The Techdirt summary has the most details [1]. Here's one notable passage:

“The government asks this Court to command Apple to write software that will neutralize safety features that Apple has built into the iPhone in response to consumer privacy concerns... This amounts to compelled speech and viewpoint discrimination in violation of the First Amendment.”

> I think many people might argue that industry-standard systems for ensuring software update authenticity do not qualify as backdoors, perhaps because their existence is not secret or hidden in any way.

It's relative. That used to be somewhat hidden. Now it's very out in the open.

> Having access to a "secure golden key" could be quite dangerous if sufficiently motivated people decide that they want access to it.

Yeah. So let's not compromise Apple's existing security procedures by forcing that out in the open.

> I expect that in the not-too-distant future, for many applications at least, attackers wishing to perform targeted malicious updates will be unable to do so without compromising a multitude of keys held by many people in many different legal jurisdictions.

I hope this day comes soon. For now, let's continue fighting for our right to privacy.

[1] https://www.techdirt.com/articles/20160225/15240333713/we-re...


>"Being free of single points of failure should be a basic requirement for any new software distribution mechanisms deployed today."

As in write error-free software?


The article claims that forcing Apple to write the software "isn't a big deal, as they could pay someone to do that". If the software malfunctioned and/or erased evidence on the device, who would be liable?


Well, the article mentions that; but the main point is that update mechanisms are a single point of failure, because they are by design a way of delivering a change that will be applied and run as root.

On the other hand, it also lacks a reference or numbers on how many times this has actually been exploited.


That's why updates only happen when the user accepts the update after authenticating themselves.


Exactly, that's the part I don't understand yet...

Why doesn't Apple create a mechanism which "only allows updates to be applied after the correct PIN is entered"? Then an update created by the FBI, which would disable security mechanisms, could not be installed (without knowing the correct PIN).


This is why EFF urges that software makers (applies to makers of hardware too, if there is a software component) should always keep the end user in the loop of deciding whether to accept updates.


This is totally correct. I agree with everything he says. However, in a world where everyone uses:

Android
Chrome
Google DNS
Google Analytics
Google Authenticator
Google Certificates (issue and validate)
Hangouts
Gmail
Google Docs
much machine learning run on Google code using Google open-sourced software
Search
Chromecast
Maps
Google connect (Starbucks, other retailers)
Google domain registration
Google Chrome/WebKit
Google Wallet
Google Play Store
Google CDN for serving code/fonts/libraries
Google self-driving cars (coming)

=================================

I am not saying Google is bad. I'm not even referencing this article, because I think it was well thought out and there aren't that many people even saying this, although it is fairly intuitive. But the larger paradox is this:

Many people talk about some database being sloppy and how they would have it replicated, maybe even with a cold backup in AWS Glacier on top of the backups, and yet the world is using something like three stacks: Google, Apple, and Microsoft.

Outside of this community and a few other places, it is highly unlikely that someone runs their own OpenWrt router, has FreeBSD on their computer, fully encrypts all their email, runs their own mail server, doesn't own a cellphone, uses a Garmin to navigate, AND

has everyone in their network doing this too, so that they are sufficiently insulated.


Over the weekend, Apple pushed an update to a kernel extension exclusion list that broke Ethernet for thousands of users. There was no notification of this update; it simply got pushed and installed, and suddenly Ethernet broke - as in, not even visible to the system, because the Ethernet driver (kext) was disallowed from loading.

So... yeah, it's a mistake, and annoying. But we're in some sense expecting much more privacy, security, and reliability from our mobile devices than from our desktop systems. And I think that's an interesting shift in expectations.


Maybe it's time to ask Google to ask for permission before updating Chrome?


A system update is how Apple would install a backdoor, if they were forced to.



