This touches on what I think is one of the more interesting technological problems that we have to worry about right now. As a disclosure, most of the reason I think that is because I'm a PhD student doing research on hardware security.
Anyway, there has been a lot of interesting discussion on the topic. DARPA has had at least two programs dedicated to trying to solve this problem, IRIS and TRUST [1]. Both of them seemed to be more interested in tampering by third parties, perhaps because it's not in their best interest to accuse the people designing their ICs of attacking them.
In the long run, verifying the functionality and intentions of software and hardware are probably roughly the same problem, with no clear solution to either in the foreseeable future.
> In the long run, verifying the functionality and intentions of software and hardware are probably roughly the same problem
Both require trusting the source code (a languages problem), as well as trusting the translator. In the case of software, the translator is an end-user accessible compiler/interpreter which is itself more software, thus recursively auditable.
In the case of hardware, the translator is an entire institution, which can only be trusted if you have recourse against said institution. Since an individual end-user, above all, can then never fully trust their hardware, it makes sense to draw a line in the sand and proceed from that assumption.
(and suuure, put a picture of a PIC16F84, the chip that started the microcontroller DIY revolution, at the top of an article on dodgy hardware...)
> This touches on what I think is one of the more interesting technological problems that we have to worry about right now. As a disclosure, most of the reason I think that is because I'm a PhD student doing research on hardware security.
If you are a PhD student and aren't working on something you consider 'one of the more interesting ... problems' then you are doing it wrong. In my opinion. You seem to be doing it right.
> perhaps because it's not in their best interest to accuse the people designing their ICs of attacking them.
Would you agree that there is some value in having the capability to detect these attacks? Whether you trust the vendors or not, you need them to be aware that you can check their work, at least to encourage them to follow good security practices internally.
DARPA and others are concerned about this exact scenario and are funding research into reverse-engineering chips to detect these types of backdoors. There are two parts to this problem. One part is using electron microscopes and lasers and whatnot to go from silicon to a netlist of gates. The second part, which I'm a little more familiar with, is "decompiling" those gates into higher-level structures like ALUs and multipliers. The hope is that we can identify maybe 80% of the circuit as good/recognized using purely algorithmic techniques, and then a human can dig in and look through the remaining 20% for anything suspicious.
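To give a flavor of that second part, here's a toy sketch of my own (in Python, nothing like the real tooling) of carving recognized structures out of a flat netlist and leaving the unexplained remainder for a human. Real recognition has to match the actual wiring with subgraph isomorphism or formal checks; this just matches gate-type histograms so the flow stays visible:

    # Toy sketch of the "decompile the netlist" idea: match a known gate-level
    # template (a 1-bit full adder) against a flat netlist and report what
    # fraction of gates is accounted for. Real tools check the wiring, not a
    # histogram; this only illustrates the 80/20 flow described above.

    from collections import Counter

    # A gate is (name, type, inputs, output). This tiny invented "netlist" is
    # two full adders plus one suspicious extra gate.
    netlist = [
        ("g1", "XOR", ("a0", "b0"), "s0_t"),
        ("g2", "XOR", ("s0_t", "cin"), "sum0"),
        ("g3", "AND", ("a0", "b0"), "c0_t"),
        ("g4", "AND", ("s0_t", "cin"), "c1_t"),
        ("g5", "OR",  ("c0_t", "c1_t"), "cout0"),
        ("g6", "XOR", ("a1", "b1"), "s1_t"),
        ("g7", "XOR", ("s1_t", "cout0"), "sum1"),
        ("g8", "AND", ("a1", "b1"), "c2_t"),
        ("g9", "AND", ("s1_t", "cout0"), "c3_t"),
        ("g10", "OR", ("c2_t", "c3_t"), "cout1"),
        ("g11", "AND", ("sum1", "secret_enable"), "leak"),  # the odd one out
    ]

    # A full adder is 2 XORs, 2 ANDs, 1 OR wired the usual way.
    FULL_ADDER_PROFILE = Counter({"XOR": 2, "AND": 2, "OR": 1})

    def greedy_match(gates):
        """Greedily carve full-adder-shaped chunks out of the netlist."""
        remaining = list(gates)
        recognized = []
        while True:
            profile = Counter(g[1] for g in remaining)
            if not all(profile[t] >= n for t, n in FULL_ADDER_PROFILE.items()):
                break
            chunk, needed = [], dict(FULL_ADDER_PROFILE)
            for g in list(remaining):
                if needed.get(g[1], 0) > 0:
                    needed[g[1]] -= 1
                    chunk.append(g)
                    remaining.remove(g)
            recognized.append(chunk)
        return recognized, remaining

    blocks, leftovers = greedy_match(netlist)
    coverage = 100.0 * (len(netlist) - len(leftovers)) / len(netlist)
    print(f"recognized {len(blocks)} full adders, {coverage:.0f}% of gates")
    print("gates left for a human to inspect:", [g[0] for g in leftovers])

The hard part, of course, is making that leftover set small and meaningful rather than 20% of a billion transistors.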
They do seem to be more concerned about chips the US buys from certain other countries than about the likes of Intel/AMD building in backdoors.
EDIT: I should also mention that this is not just a concern of the American defence establishment. I'm aware of the Indian government also funding this sort of research with similar motivation. In that instance, though, the professor was trying to attack the problem through the lens of formal techniques. I think the idea was to prove that if the chip interacts with the outside world only through a limited set of channels, then you can't sneak data out through some covert channel hiding in the "regular" communication. The specific concern there was about routers, switches, and similar equipment sneaking sensitive data out of a secure network.
I'm not sure but I think you may be right because the researchers have been granted access to some IBM cell libraries. (I was wondering why IBM agreed to this, but this probably explains it.)
My understanding is that the main concern here is chips in COTS equipment bought from countries that are considered by some to be untrustworthy.
I just remembered another related case: the "Induc" virus, which infected a library file in the Delphi distribution so that every program compiled with it was infected. Several fairly popular programs were compiled on developers' infected machines and spread around the world.
Yes. Is there any reason why US intelligence services wouldn't want a backdoor at that level? They are technically feasible, they promise to offer access that's difficult to detect and difficult to thwart. So there's an obvious incentive there.
What are the downsides? Well, for one thing, it might be illegal. Is it? Merely installing a backdoor could be illegal, but I sort of doubt it is. The hardware company might be compliant, or the backdoor could be installed covertly. That might make a difference.
And of course making use of the backdoor could be illegal. US citizens or other people on US soil might enjoy (various degrees of) protection against such activities. But I get the impression that the rest of the world is pretty much fair game for US services. The fact that US citizens are protected is brought up again and again in such discussions, and it doesn't inspire me with a lot of confidence.
I have no idea why this is getting downvoted. The current political climate in the US is fucking abysmal, and bears little resemblance to a representative democracy.
Probably because the comment didn't contribute to the discussion at hand and had a high troll potential. It's unfortunate that the author decided to throw that comment into the article because it is naive sounding and seems quite out of place there as well.
I suppose it doesn't open up much possibility for discussion afterward, but I wasn't intending to troll. I just think we need to remember that we don't have to imagine some undemocratic international threat to understand why these issues are important.
I think it does strongly resemble a democracy. This reflects quite poorly on (my fellow) Americans.
Now, you could go with a proportional representation system instead of a winner-in-each-district-takes-all system, but you know who the two big parties with the power would be? The same two.
Then make the decision-making process democratic, i.e. not depending on just the two major parties. Also lose the 'president' father figure -- just keep the few hundred elected guys.
I wonder what democracy in the country has to do with trusting the hardware company? It looks to me like some otherwise reasonable people have a democracy fetish: they mention it from time to time for no apparent reason.
Never mind us. Why should Intel trust Intel? Like any good computing company, I would imagine they are mostly self-hosting. The chips they built last year are the chips they use to run simulations and design the chips they put out next year. Backdoors can be exploited by any employee who knows about them, and it would be extraordinarily damaging for Intel to allow backdoors into hardware they depend on.
Even if they built in some sort of a kill-switch, how could anyone confidently say that a rogue engineer involved in the design couldn't bypass it and use the chip against Intel? Ultimately, I think there's so much danger that I have to assume Intel is competent enough not to do something so foolish as introduce deliberate backdoors.
I'm expecting someone to produce a fully open hardware stack sooner or later - there's already a freely available sparc processor design, and I recall some open-source people working on a fully open graphics card. (Of course you still have to trust your fab, but that's not very different from trusting your compiler).
>how is trusting your fab any different than trusting Intel?
You increase the cost of an attack - it's harder to change a processor's behavior by editing the mask than by editing the VHDL. If you were super-paranoid you could source to multiple different fabs and run the chips you get back in parallel, with some sort of trap that goes off whenever you get different results from one or the other processor.
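The comparison logic for that super-paranoid option can be tiny. A rough sketch of the lockstep idea, with the two "chips" stood in by plain Python functions (a real version would compare bus-level signals cycle by cycle, not return values):

    # Lockstep sketch: feed the same inputs to parts from two fabs and trap
    # the moment their outputs diverge. Everything here is a stand-in.

    class LockstepDivergence(Exception):
        pass

    def lockstep(chip_a, chip_b, inputs):
        for step, value in enumerate(inputs):
            out_a, out_b = chip_a(value), chip_b(value)
            if out_a != out_b:
                # One of the two parts is faulty or backdoored; we can't tell
                # which from here, but we know something is wrong.
                raise LockstepDivergence(
                    f"step {step}: input={value!r} -> {out_a!r} vs {out_b!r}")
            yield out_a

    def honest_chip(x):
        return x * 2

    def trojaned_chip(x):
        # Backdoor: misbehaves only on a specific trigger value.
        return x * 2 if x != 0xDEAD else 0

    try:
        for result in lockstep(honest_chip, trojaned_chip, [1, 2, 3, 0xDEAD]):
            print(result)
    except LockstepDivergence as e:
        print("trap fired:", e)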
>if you were paranoid enough to be worrying about CPU backdoors, why would you trust your compiler?
If you don't trust your compiler, why are you even bothering worrying about CPU backdoors when you've got a much easier attack vector open?
Who's to say you're going to trigger the condition that causes the backdoor? Seems very unlikely. If you have ideas on this, though, I'd be interested.
> If you don't trust your compiler, why are you even bothering worrying about CPU backdoors when you've got a much easier attack vector open?
You may not trust your compiler, and therefore do certain things in a VM where e.g. access to network is limited. See [1].
It's hard to change the behavior of the CPU because it's in the middle of the chip and highly speed-optimized. It's much easier to add an extra chunk of silicon that affects one of the peripherals. I'd probably add something in the southbridge to spoof the BIOS flash.
Trusting Intel and trusting your fab are different problems. Intel creates a design and sends it to the fab. Intel has to trust that the fab will not alter their design, but in general that is an extremely difficult attack to carry out and would require the fab to have a much deeper understanding of the IC design than they likely have. However, Intel can put whatever they want into an IC, and it would be incredibly difficult for anyone to find it.
Also, trusting your compiler is different from trusting your CPU because one is much, much easier to check than the other. You can build GCC yourself, look at the source code, manually check the output. You could even write your own compiler. In general, we can't make our own processors or verify their internals, yet.
That's the problem: you would have to trust your assembler. How was the assembler compiled or assembled? And so on. You would go back to the first "physical" translation of a program "into" a computer.
You'd have to write an assembly program and then a hand-translated binary version that can run directly on the bare metal with no OS. And use it to compile the "real" compiler.
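To make that hand-translation step concrete, here's a toy illustration of my own: a three-instruction Linux/x86 exit(0) stub encoded by hand, with the assembly kept next to each opcode so the correspondence can be audited by eye. Python is only used to bundle and print the bytes; the trust argument rests on the table being small enough to check on paper.

    # Toy illustration of hand-translating assembly to machine code.
    # The program just calls exit(0) via the old int 0x80 interface.

    HAND_ASSEMBLED = [
        ("xor ebx, ebx", bytes([0x31, 0xDB])),                     # status = 0
        ("mov eax, 1",   bytes([0xB8, 0x01, 0x00, 0x00, 0x00])),   # sys_exit
        ("int 0x80",     bytes([0xCD, 0x80])),                     # syscall
    ]

    seed = b"".join(code for _, code in HAND_ASSEMBLED)
    print(seed.hex(" "))   # 31 db b8 01 00 00 00 cd 80

    # If you later build an assembler on top of this seed, you can check that
    # it reproduces the same bytes for the same source before trusting it
    # with anything bigger.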
I wonder if one could make that simple enough to be "somewhat reasonable," yet complex enough to compile the real compiler. Probably! Though it may take a team of people quite some time.
I wonder if government(s) have infrastructure/teams that do this already, or if there are any open source projects aimed at this kind of thing.
1. Trusting chip fabs is a non-starter. Institutional security only helps large institutions. You will not audit your fab, nor can you trust other people to do it for you. Unlike software, there is no "certificate" that is easily verified in the event of a backdoored fab.
2. There are two phases in the lifecycle of a backdoor. The first is its latent 'offline' period, which is noninteractive with respect to the attacker. For example, a backdoored compiler [RoTT] basically propagates a virus to new copies of the compiler. Solving this seems tractable by eliminating bootstrapping as the sole method for compiling a compiler, and by researching interpreted languages that are easily ported to new platforms for the stage0 compiler.
3. The second phase is when the introduced vulnerability is 'active' and ready to be exploited over the network by an interactive attacker. The case of a network-facing compiler-infected binary should be easily solved by solving the first problem. The case of backdoored hardware/microcode is much more insidious, and requires coming up with assumptions to frame the problem for even a chance at being tractable. (Also, it requires the trustable software tools from (2) for implementing the solution)
4. Secure 'offline' computing on its own would be a boon for things like maintaining the integrity of master cryptographic keys (although keep in mind one still has to guard against a backdoored CPU acting as an infected stage0, but this is much easier in a noninteractive setting). I'm less familiar with the state of the art in this area than I should be, but I'm guessing implementations are probably still in the dark ages of 'trust the company'.
I think it boils down to writing arbitrary data into memory (so that it doesn't follow the usual path programs take, as with compilers). As an anecdote, a friend of mine used to flash a "hello world" program into a microcontroller using push buttons only :)
Intel outright owns most of the fabs that produce its processors. Intel is the fab; hopefully they trust themselves.
AMD on the other hand, is going fab-less for at least some of their products. TSMC was mentioned in the past as being a partner. So you'd have to trust both AMD and TSMC.
Most people would consider OGP worthless because it's 2D. Anyway, the economics of open chips don't work: due to diseconomies of scale they cost more than proprietary chips.
The irony is that electron microscopes run on computers, too. And they are probably even networked.
So really you can only trust an analog optical microscope. Which, also ironically, is not quite good enough to resolve individual transistors (being limited to about 200 nm or so in green light).
Last but not least, our CPUs are always designed by other computers, so it's theoretically possible that a backdoor could propagate itself forever.
One thing to note is that having good perimeter security makes exploiting hardware backdoors much harder. I mean, if you are monitoring all of your internet traffic, then even if somebody with access to a hardware backdoor tried to steal data or log your activities, the traffic generated by those attempts would be caught at the perimeter.
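As a minimal sketch of what that perimeter check amounts to (the allow-list and flow records are invented for illustration; a real setup would pull flows from a tap or from firewall logs on a separate box):

    # Flag any outbound flow whose destination isn't on an explicit allow-list.

    ALLOWED_DESTINATIONS = {
        ("10.0.0.5", 443),     # internal update server
        ("192.0.2.10", 53),    # internal DNS resolver
    }

    observed_flows = [
        {"src": "10.0.0.23", "dst": ("10.0.0.5", 443),    "bytes": 4_096},
        {"src": "10.0.0.23", "dst": ("192.0.2.10", 53),   "bytes": 120},
        {"src": "10.0.0.23", "dst": ("203.0.113.7", 443), "bytes": 1_500_000},
    ]

    for flow in observed_flows:
        if flow["dst"] not in ALLOWED_DESTINATIONS:
            print(f"ALERT: unexpected egress {flow['src']} -> "
                  f"{flow['dst'][0]}:{flow['dst'][1]} ({flow['bytes']} bytes)")

Of course this only catches overt exfiltration to unexpected hosts; a covert channel hidden inside allowed traffic, as mentioned upthread, would sail right through.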
The problem is real; the solution, not so much. Hardware memory protection will isolate drivers, but it cannot protect you from backdoor logic built into the hardware itself.
It seems like you're trying to criticize the article, but I think the article is in agreement with you. As far as I understand it, the author does not present a solution to the need to trust the processor and the hardware that moderates memory access.
Yes, a driver that uses the IOMMU/VT-d for FireWire devices will prevent attacks over that bus from dumping memory and recovering keys, assuming the memory isn't reused and doesn't already contain them, etc. Combining this with a quick way to zero out whatever DMA region is being used would be about as foolproof as you could expect anything to be for protecting against this kind of attack.
Note that you also have to trust the /tools/ that generate the circuits. Nobody's going to check every single gate on the chip against the source code; it would be easy for a VHDL compiler to lay down extra stuff.
Shades of "Reflections on Trusting Trust," but in hardware. Doesn't have a complete replication loop, though, which would have the compromised hardware re-infecting the very VHDL compilers that generated the chip backdoor :-)
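One hedge against that kind of tool-level attack, loosely analogous to diverse double-compiling on the software side, is to synthesize the same design with two independent tool chains and diff the resulting netlists. A toy sketch (the gate tuples are invented, and real netlists would need heavy name and structure normalization before any diff is meaningful):

    # Diff two synthesized netlists; anything present in only one of them
    # goes to a human for review.

    netlist_tool_a = {
        ("AND", ("a", "b"), "n1"),
        ("XOR", ("n1", "c"), "out"),
    }

    netlist_tool_b = {
        ("AND", ("a", "b"), "n1"),
        ("XOR", ("n1", "c"), "out"),
        ("AND", ("out", "debug_en"), "shadow_out"),  # extra logic from tool B
    }

    only_in_a = netlist_tool_a - netlist_tool_b
    only_in_b = netlist_tool_b - netlist_tool_a

    for gate in sorted(only_in_a | only_in_b):
        print("divergent gate, needs manual review:", gate)

It only helps if the two tool chains are genuinely independent, which is its own supply-chain problem.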
At one point I remember he said it doesn't matter that we can't see the source code running on remote computers because they're not rightfully in your control. It's just something you're connecting to with something you do control. You have the potential to check on your safety because you can see everything going in and out of your computer.
Regarding open source, I think the point about security is not so much that you will read the entire source yourself, but that the reading of the source is, like its writing, a collective enterprise. If there's a backdoor, somebody at some point will see it.
[1] http://www.wired.com/dangerroom/2011/08/problem-from-hell/