Hacker News
Intel Management Engine cleaner (github.com/corna)
223 points by BuuQu9hu on Jan 17, 2017 | 71 comments



I remember the furore when Intel introduced a unique CPU identifier in the Pentium 3. Now they have a microprocessor and OS running as a computer subsystem on every Intel server and laptop which is able to do pretty much what it likes with your system and people don't bat an eyelid.

Don't get me wrong, I love some of the benefits like remote control of my servers, fan speed control, monitoring of temps and voltages, etc., but I do think we've given up a LOT of security for the sake of convenience. It wouldn't take much for a well-crafted worm, or heck, a simple script, to disrupt a whole network of ME/AMT-enabled clients and servers.


> but I do think we've given up a LOT of security for the sake of convenience.

Worse: There is no reason you couldn't have both. For a start: A simple "off" switch for Intel ME would please a lot of people.


One of the features is anti-theft, which reminds me that the war against laptop theft is ridiculous.


In theory you could add a BIOS password, turn off booting from USB, sign your GRUB bootloader, import the signing key, use full-disk LUKS (including /boot) via GRUB's LUKS-unlocking module (you also need a key for your partition in your initrd if you don't want to enter your password twice), and enable the TPM.

In that setup, even if someone stole your laptop, it would be unusable, short of opening it up and fully resetting the BIOS/UEFI to its factory state, correct?

(My laptop has full-disk LUKS, a BIOS/UEFI password and doesn't boot from USB, but I still have TPM disabled and haven't signed my Grub bootloader yet.)


It is about tracking stolen laptops I think.


Remind me of a single time the police went after a stolen device, even one with GPS tracking.

This is a clear "think of the children" excuse.


>A simple "off" switch for Intel ME would please a lot of people.

But not a corporate security team, of course. It's not that simple.


The needs of a corporate security team are the same as the needs of the free-software-loving individual user: the actual owner of the machine needs control over the computing being done on that machine, and someone with only temporary access (an authorized non-owner user with physical access, or malware that briefly gains access, perhaps even root, before being removed or before the machine is reimaged) should not be able to subvert the computing freedom of the machine's owner.

A corporate security team wants to make sure that whether ME is on or off, it stays that way and can't be flipped by someone with temporary access or by malware. That's exactly what individual end users want too for their computing freedom.


I start from the premise that a computer should not betray its user.

Users often aren't the owners of their computer. Currently that means the computer may act against the interest of its user. I'd rather have a world where users can trust their computers even at the expense of the owners' control.

While a physical ME switch is vulnerable to physical attacks, it does seem like the best approach we have today for minimizing owners' ability to subvert the user.


That's a worthy goal, but so are "A computer should not betray me if someone else is a user of the computer temporarily" and "I should have full freedom to modify a computer I own." And I think you can't satisfy all three at once.

You can satisfy "do not betray the user" and "do not betray the owner" if you have a locked-down computer like a Chromebook that only accepts signed firmware and signed OSes. That way, no user of the computer, owner or otherwise, can make permanent changes to it that will compromise other users; guests are protected from the owner just as much as the owner is protected from guests.

You can satisfy "do not betray the user" and "have full freedom" if you have a computer with no protections against OS modification (the traditional PC architecture), but that is susceptible to physical attacks, where a user can subvert the owner. And every user needs to subvert the machine afresh in order for the computer not to betray them to the previous user; this is a risky model for things like shared computer labs.

You can satisfy "do not betray the owner" and "have full freedom" with an architecture where the owner gets a password/key that unlocks management functions or BIOS reconfiguration, and physical access (short of desoldering) is insufficient to access those functions.

I'm a fan of that latter model. Of course, I do have the resources to own a computer of my own for my personal computing, which biases my preferences. There's a good argument for building a more just world for people whose only computers are shared computers (libraries, work-issued laptops, etc.), but I think smartphones are becoming ubiquitous enough that we can expect that everyone will soon have a device they own for their own computing.


> You can satisfy "do not betray the owner" and "have full freedom" with an architecture where the owner gets a password/key that unlocks management functions or BIOS reconfiguration, and physical access (short of desoldering) is insufficient to access those functions.

The problem with this model is that it destroys freedom as soon as the user isn't the owner. That is somewhat what corporate IT departments are looking for, but it's even more problematic when a person goes to the store and brings home a device that some corporation has already appointed itself "owner" of. In other words, the third model devolves into the first.

And it doesn't really buy you anything. If you use FDE and erase the decryption key from memory when not using the computer then you don't need hardware to protect your data, math is already protecting it. Past that you can only want to prevent compromise of the boot loader, but in that case detection is just as good as prevention which makes prevention an unnecessary evil.

And lab environments don't need this either. There you put padlocks on the computers after setting the flash write protect jumper, don't even install hard drives in them and boot from the network. It's the same situation -- detection is more important than prevention. Someone can cut the lock but then you know that computer is compromised.


It's a large topic and you're right, we will have to make trade-offs. For the most part I'm okay with giving up "I should have full freedom to modify a computer I own" in cases where I voluntarily allow others to use it.

One possibility I like to think about for PCs is replacing usernames with a root of trust (hash). A trusted ROMed hypervisor downloads the configuration (or loads from cache) and netboots.

But ultimately it's bigger than PCs. Embedded computing is becoming pervasive and allows easier monitoring and control of what people can and cannot do. Today people accept that owners or manufacturers can set whatever policy they like even if it's user-hostile.


Sure it is, it's just that "off" needs to be done by Intel and sold under a different SKU.


It doesn't even need to be that. Expose it on a header on the motherboard so end users can add/remove a jumper. Same as should be in place for the write enable pin on the system's firmware EEPROM. The fact that the end user has no way to prevent firmware updates on most motherboards is a fucking travesty.

Businesses can simply tell their employees to leave it enabled or find another job.


But when it gets stolen, the thief just switches the jumper.


Hardware "off" would be expensive due to low demand. Soft "off" wouldn't be trusted by those who want it "on".


It could be a physical switch on the motherboard.


Because the CPU ID came at the height of the last big media DRM push, and was seen as tying directly into that.

The IME in contrast is pitched at corporate and government systems to allow sysadmins to better manage things. Effectively LOM for the corporate desktop.


>"and people don't bat an eyelid"

There's been quite a bit of controversy over Intel ME, in discussions with those that know about it at least, so I don't think that's a fair assessment.

It should have been distributed as an optional, removable hardware dongle. There was no need to bake it into the hardware. Same for AMD PSP as well.


Can I even use it to remote-control my server? HP have iLO, but that's a completely different beast, AFAIK. I've never heard about using Intel ME for servers.


Intel AMT (~= IPMI) is part of the workstation/server ME. It is comparable to iLO, but it's used primarily in the enterprise and not on servers.

If your PC doesn't use an Intel ethernet/wifi chip you're 100% safe from remote execution attacks because AMT only works with Intel chips.


ME is where the IPMI implementation lies. It can be used over the network to monitor sensors, to access the serial port and so on. It also has an HTTP server providing a web interface to the same functionality.


This is false. IPMI is a completely different thing from the Intel ME. It is implemented on a separate chip (the BMC) and has its own NIC (usually with its own physical port).

AMT on the other hand provides its network functionality by hijacking the host's IP, and siphoning off traffic destined for its ports (16992 and 16993). This is one of the things that makes it scary, and just one of the differences between it and IPMI.

Source: I have worked extensively with AMT and built a semi-open-source Python management library for it in a previous job.
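Those well-known ports make a quick check easy. Here is a minimal sketch (the hostname is a placeholder, and an open port is only a hint, since anything else could be listening there):

```python
import socket

# Default Intel AMT web-interface ports (16992 = HTTP, 16993 = HTTPS).
AMT_PORTS = (16992, 16993)

def probe_amt(host: str, timeout: float = 2.0) -> list[int]:
    """Return which of the default AMT ports accept a TCP connection.

    On an unprovisioned machine these ports are normally closed, so any
    response is worth investigating -- though an open port alone does
    not prove AMT is listening.
    """
    open_ports = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports
```

Probing the machine's routable IP from another host is more telling than probing it locally, since AMT siphons traffic off before it ever reaches the host's network stack.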


Are you sure of that? On most of the systems I've used with IPMI functionality (Oracle and Supermicro), the BMC resides in a separate chip with its own processor and RAM.

https://www.thomas-krenn.com/en/wiki/Nuvoton_WPCM450R_IPMI_C...

Oracle uses these:

https://www.aspeedtech.com/products.php?fPath=20&rId=376


Are you aware that the newer Xeons, and only the Xeons, have such a serial number facility available?

Control over this processor identification facility is supposed to be offered as a user-accessible control in UEFI, to "enable, leave disabled but unlocked, and lock disabled". Once "lock disabled", it cannot be enabled back again until a hard reset happens.

Also, unlike the Pentium III "processor serial number cpuid", you need to read an MSR to access the new version, so it is supposed to be restricted to the O.S. kernel, which could then either make it available to regular programs or not.

It is not easy to find out about it, either: it is hidden in plain sight on the public Intel SDM (the processor's manual). You need to look over all the model-specific MSRs with a magnifying glass until you find it :-)

In the end, it boils down to whether your favorite O.S. cares about your privacy or not. It can lock the thing disabled at early boot, denying access to everything other than UEFI.


"I remember the furore when Intel introduced a unique CPU identifier in the Pentium 3. Now they have a microprocessor and OS running as a computer subsystem on every Intel server and laptop which is able to do pretty much what it likes with your system and people don't bat an eyelid."

20 years ago, most people would have sued anyone for revealing to the world what they and their kids happily share on Facebook today. Give them a few more years and they will pray for a SWAT team guarding their bedroom while they sleep.


There will be an AI to take over all these computers.


A story:

One of my coworkers told me he previously worked on the motherboard team for Intel and was regaling the office with stories about testing the motherboards against the latest games. Apparently many of the Intel engineers personally tested overclocking the processors to see how much they could take before they fried.

At some point I asked him if he knew anything about the Management Engine. He said that they were starting to become a thing right around the time he was leaving, so he didn't know too much about them. I said, "Well, there's a bunch of paranoid people that are uneasy about it because, to my understanding, it's a blackbox second processor and we really have no idea what it's doing."

Without prompting, he replied, "Oh, you mean like the FBI using it to listen in? Yeah, that happened. It happened on AT&T's when I went over there as well."*

That's my story. I apologize if it looks like I'm spreading FUD here, since I'm not going to give up my name or my coworker's to verify the story, but I sincerely doubt that Intel would acknowledge such a capability anyway, so, take it as you will. Libreboot, the Management Engine cleaner, and other such projects need to exist. My coworker's comment convinced me of that.

* I'm not sure what he's referring to here. Maybe someone else can shed some light. Does AT&T make processors?


He may be referring to the illegal / gray area tap rooms: https://en.wikipedia.org/wiki/Room_641A


I'm skeptical that the FBI has the technical capabilities to turn the IME into a Trojan horse (in the original Greek sense). More likely NSA levels of sophistication. Plus there's also the issue of doing the actual implants (via postal intercepts, physical access, or breaking into networks), which are illegal for the FBI to do but semi-legal for the NSA to do against foreign targets.


If Intel was cooperative, then all the FBI would need to do is ask them to help. That would fit with an Intel employee being aware of it happening. Pure speculation on my part, of course.


Intel wouldn't even have to be cooperative with an NSL in place.


Can't the FBI request services from the NSA?

And there have been versions of the Intel ME in the wild with known remote exploits AFAIK.


Isn't this double-edged in court? If there is a widely available backdoor for hacking Intel systems, how can any digital evidence be admitted without some measure of doubt?


That is the whole "parallel construction" debacle. Once they have the evidence, they come up with a cover story for how they could have legally obtained it.


Yes, although this was made legal only recently.


Doesn't mean it wasn't already happening at some level.


If Intel added the feature for use by groups like the FBI, they wouldn't need to reverse engineer or hack anything. Intel would hand over the keys.


Background, ie why you might want to clean Intel ME: https://recon.cx/2014/slides/Recon%202014%20Skochinsky.pdf



And an obligatory link to CoreBoot, which I believe has a bit more history than Libreboot:

https://www.coreboot.org/Intel_Management_Engine


Libreboot is just a stricter-about-freedom coreboot, from what I understand. As in, it is literally a fork/derivative of coreboot.


Ah got it. With the recent GNU drama of libreboot I wonder how viable it really is as an ecosystem.


Personally I think the FSF ideology taken too far is pretty ridiculous anyway.


Yet here we are, discussing the secret OS that runs on its own chip and can't be disabled by the end-user.

The world is a pretty ridiculous place, even with the best efforts of those ridiculous FSF zealots. I don't always agree with them and their methods and communication skills aren't perfect, and it's a shame to see coreboot/libreboot get fragmented etc... but imagine how backwards the world would be without some people taking things 'a bit too far sometimes'.


Another difference is that Coreboot is essentially a rolling release, whereas Libreboot periodically takes a snapshot of upstream and maintains it for a while. Libreboot is therefore something like a long-term service release.

Anecdotally, I also found Libreboot easier to build. There is less configuration involved.


It's a distribution of coreboot + GRUB, 100% free of any proprietary blobs. Not really a "fork".


Here's a video of him presenting these slides https://www.youtube.com/watch?v=Y2_-VXz9E-w


The ME causes problems for embedded system makers too. If you are making physical devices with Intel CPUs, you probably want to replace the Intel MAC addresses with your own. Then you have to fight with the ME to keep it from trying to talk to the internet by itself, using the Intel MAC. If it does, that will freak out some corporate switches: they see two MAC addresses on one port, assume someone has plugged in an unauthorized Ethernet switch, and disable the port.
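If you do replace the factory MAC with your own, one detail worth getting right: the replacement should have the locally-administered bit set so it can never collide with a vendor-assigned address. A small sketch (both example MACs below are made up):

```python
def is_locally_administered(mac: str) -> bool:
    """True if the MAC has the locally-administered bit set
    (bit 1 of the first octet), i.e. it is safe to self-assign
    without colliding with any vendor OUI."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# A made-up self-assigned replacement vs. a made-up factory-style MAC:
assert is_locally_administered("02:00:00:aa:bb:cc")
assert not is_locally_administered("a4:bf:01:12:34:56")
```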


Hmm, that behavior might enable a neat non-firmware mitigation: replace the Intel MAC address with your own and block all packets originating from the Intel one.


Doctorow's Law: "Anytime someone puts a lock on something you own, against your wishes, and doesn't give you the key, they're not doing it for your benefit."


> It should work both with Coreboot and with the factory BIOS.

So, basically any computer?

Also, how safe is it?

EDIT:

> Bricking is very likely to happen! Just in case you didn't hear me the first time


It looks like its wiki sheds some light on the details:

https://github.com/corna/me_cleaner/wiki



One thing I asked before but didn't get a definitive answer on: IIRC the IME images are cryptographically signed, right? If so, how can they modify the images without breaking the signature? Or can they re-sign it?


As far as I know, only the code is signed, not the partition table. This means that you can freely modify the partition table and completely remove some modules.


Please check the wiki for more detailed information: https://github.com/corna/me_cleaner/wiki/How-does-it-work%3F...


Well, what happens if you break the signature? The IME does not start up... exactly what we wanted.


No, if you break the signature, something (processor microcode, or the IME boot block?) will notice, and the system will either not start or will shut down after ~30 minutes.

There is microcode-level integration between the IME and the system processor in an Intel system. It also involves the platform TPM (which could be an IME module), on systems where Intel SGX or Intel TXT is active.

OTOH, if the non-critical IME modules are missing, the system boots and goes on working just fine. Since the IME "partition table" is not signed, this allows you to remove the undesired modules such as Intel AMT.

It is actually possible for a system integrator/motherboard factory to request from Intel a minimal IME build that lacks AMT, you know.

It is also rather trivial to disable the (documented) IME path to the network: don't use the chipset-embedded Ethernet MAC (as in media access controller, not MAC address). That requires adding a full LOM NIC chip and taking up precious PCIe lanes, instead of just adding a (much cheaper) LOM PHY.


One thing that concerns me is that it says it removes parts of the ME responsible for "silicon work-arounds". In other words, microcode patches. These are important for patching processor errata.

It would be better if there were a way to retain/update microcode patches for errata while eliminating all the other nonsense.


It is my understanding that those microcode patches can be applied by the OS as well at runtime.


Sort of. Removing the firmware-provided microcode update would likely kill Intel SGX support, though. And if UEFI is set for secure boot and its implementation happens to use SGX or TXT for that, it will probably brick the box until one restores its system FLASH to a valid state.

You can further update the processor microcode later, from UEFI or from the operating system. But the all-important initial processor microcode[1] update has to be done from a trusted path [for Intel SGX to be available].

And no, it is not UEFI early boot updating the microcode as it used to work in the past: when doing a secure Intel SGX boot path, the microcode looks up the FIT table in system FLASH, to get the address of a microcode update table in system flash. If found, the microcode then proceeds to self-update from FLASH.

This whole auto-load-microcode-from-FIT thing seems to have started with Skylake. Since Intel SGX is almost entirely implemented in microcode, it makes a lot of sense.

On Intel Skylake and later, you can tell whether the current microcode update came from the FIT by reading its revision number: if it is odd, it came from the FIT; if it is even, it came from a regular UEFI or O.S. update.

https://review.coreboot.org/cgit/coreboot.git/commit/?id=504...
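Assuming that odd/even convention holds, the check is trivial; e.g. against the microcode revision Linux reports in /proc/cpuinfo:

```python
def microcode_from_fit(revision: int) -> bool:
    """On Skylake and later, an odd microcode revision indicates the
    update was auto-loaded from the FIT; an even one came from a
    regular UEFI or OS update (per the coreboot commit above)."""
    return revision % 2 == 1

# e.g. parsing a "microcode : 0xd6" line from /proc/cpuinfo:
line = "microcode\t: 0xd6"
revision = int(line.split(":")[1].strip(), 16)
print(microcode_from_fit(revision))  # 0xd6 is even -> False
```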

As for recent Intel systems without Intel SGX (or with it disabled), you still likely need a firmware-provided initial microcode update or it will be hopelessly buggy. Chances are it won't be stable enough to actually boot the target O.S.

[1] Intel has been shipping its processors with shamelessly incomplete/buggy factory microcode for a while now. You pretty much have to install a microcode update, or what you get might not even qualify as an x86-64-compatible processor. The factory microcode doesn't need to know how to do paging or hardware-assisted virtualization correctly at all, for example, or even implement SIMD and floating-point math instructions. All it has to do is run enough of BIOS/UEFI to get the initial microcode update.


I wonder if this would work on a Mac. Looked around briefly but did not see anything indicating support.


This is admittedly unfounded speculation, but if the ME were compromised, one of the most valuable things an attacker would want would be AES keys. So _IF_ this binary blob is up to no good, we should expect one of its tasks to jot down AES keys (remember, Intel has had specialized AES instructions for a while now) and hide them somewhere for later retrieval.

Perhaps it would behoove people to run a "keyscrubber" daemon that constantly generates random numbers and AES-encrypts a few kb of data and sends it to /dev/null. _IF_ the ME is jotting down keys, and _IF_ there is only a finite amount of space for it to store them, such a daemon might be able to flush out real keys and overwrite them with junk to hamper any retrieval efforts.

Again, two big hypotheticals, but if it were stealing keys, the buffer would have to be pretty small, otherwise people could eventually detect it changing sizes or hashes in NVM. I wouldn't expect such a thing in standard readable NVM anyway; it's probably on a die somewhere and only a few kb.
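The scrubber idea above can be sketched in a few lines. This is purely illustrative: Python's stdlib has no AES, so SHA-256 stands in for the cipher here; a real scrubber would use an AES library (e.g. the third-party `cryptography` package) so that the AES-NI instructions, the thing a hypothetical key-logging ME would hook, are actually exercised.

```python
import os
import hashlib

def scrub_pass(blocks: int = 64, block_size: int = 4096) -> None:
    """One pass of the hypothetical key-scrubber: generate fresh random
    throwaway keys and run them through a cipher-like operation,
    discarding the output. If a finite FIFO key buffer exists, junk
    keys would eventually displace any real ones stored in it."""
    for _ in range(blocks):
        key = os.urandom(32)                 # fresh throwaway "AES key"
        data = os.urandom(block_size)
        hashlib.sha256(key + data).digest()  # output deliberately discarded
```

A daemon would run `scrub_pass()` in a loop; the keys and ciphertext are never used, which is the point.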


> This is admittedly unfounded speculation, but if the ME were compromised, one of the most valuable things an attacker would want would be AES keys. So _IF_ this binary blob is up to no good, we should expect one of its tasks to jot down AES keys (remember, intel has had specialized AES instructions for a while now) and hide them somewhere for later retrieval.

If you want to get into the speculation rabbit hole, it's far more fun to assume that instead of calculating something_truly_random(), RDRAND actually returns `AES-CTR(something_truly_random(), NSAKEY, counter)` as random output - which then becomes P & Q for your RSA keys. This'd be a far easier attack to mount, and wouldn't require any exfiltration of keys; just being able to see any current RNG state would expose all the previous states.

This might not be a problem for Linux, since RDRAND is mixed in, but for a while RDRAND was the exclusive entropy source for TCP sequence numbers, ASLR offsets, etc: https://cryptome.org/2013/07/intel-bed-nsa.htm

Who knows how Windows or OS X handles this.
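To make the hypothetical concrete, here is a toy model (SHA-256 stands in for AES-CTR, and the key name is obviously made up): whoever holds the secret key can regenerate every "random" output the victim ever consumed.

```python
import hashlib

def backdoored_rng(secret_key: bytes, counter: int) -> bytes:
    """Toy model of the hypothesised RDRAND backdoor: each 'random'
    output is just a keyed PRF of a monotonic counter (SHA-256 stands
    in for AES-CTR here, purely for illustration)."""
    return hashlib.sha256(secret_key + counter.to_bytes(8, "big")).digest()

# The victim consumes outputs 0..9 as entropy for keys, sequence numbers, etc.
victim_outputs = [backdoored_rng(b"hypothetical-key", i) for i in range(10)]

# Anyone holding the secret key (and able to learn the counter) can
# regenerate every one of those "random" values after the fact.
assert backdoored_rng(b"hypothetical-key", 3) == victim_outputs[3]
```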


>just being able to see any current RNG state would expose all the previous states.

wow, that's insidious. I'd never even considered something like that!


> So _IF_ this binary blob is up to no good, we should expect one of its tasks to jot down AES keys (remember, intel has had specialized AES instructions for a while now) and hide them somewhere for later retrieval.

It would have to be Intel doing this. A lot of hackers have tried to write their own modules for the ME, but to my knowledge no one has been able to run one successfully. [0]

> _IF_ the ME is jotting down keys, and _IF_ there is only a finite amount of space for it to store them, such a daemon might be able to flush out real keys and overwrite them with junk to hamper any retrieval efforts.

There is. The ME on consumer systems has to fit within 1.5MB, and on corporate systems (e.g. servers) within 5MB. The only persistent storage the ME has accessible is the SPI flash (which has the above size constraints) or the TPM (if there's one installed). To my knowledge, the TPM can only be used to authenticate a secret once stored, you can't ever read it back from the TPM.

Otherwise, you'd have to have malware running inside the OS to receive the data from the ME and write or send it somewhere else.

But really, why bother going for the ME? It has a root of trust burned into the PCH silicon, and it's obviously in Intel's interest to keep the signing keys very secret.

There's much lower-hanging fruit if someone wanted to steal AES keys: just install malware into UEFI or another controller (e.g. SSD, hard drive). This has already been documented in the wild, and the source is speculated to be state-level actors (e.g. US, Israel). UEFI is handled by the OEM, which means they put the least amount of time/money into it that they possibly can.

Just search online for BIOS/UEFI password bypassing tools if you want a sample of how terrible OEM implementations are.

[0] http://www.slideshare.net/codeblue_jp/igor-skochinsky-enpub


Sorry, I minced my words a little bit. What I mean is: if the ME were already programmed to do these nasty things by design, one way to remedy such an undesirable "feature" would be to constantly run AES with new keys each time.


> what I mean is if the ME were already programmed to do these nasty things by design

Yes, and this is possible, though I would consider the possibility remote.

This would require collaboration between Intel and three letter agencies. Additionally, since the ME firmware is stored on SPI flash, anyone who suspected their ME might be eavesdropping could contact a researcher to dump their firmware for analysis. So I highly doubt Intel would be shipping this functionality in every ME firmware release.

Deployment would likely be targeted against a specific individual/group.

I'm not saying it couldn't be done, but the ME is not entirely a black box. The CPU architecture of the ME is known, the firmware can (and has been) dumped and analysed (though not all modules as some are encrypted).

So, it's a possible attack vector, the but effort required is much higher than building persistent UEFI malware. I don't think anyone would go to the effort to build an ME rootkit when they could just do it in UEFI.


> collaboration between Intel and three letter agencies.

That's par for the course for any large US-based IT company these days.

> since the ME firmware is stored on SPI flash,

When you're Intel, you make the ICs and you can put whatever you want in there. If they were in bed with TLAs, I'd expect the pilfered keys to be stored in a sliver of NVM gates deep inside the die (in which chip, I can't say; perhaps the CPU itself), probably encrypted with their own secret key. That's why I suggest it would be easy to overwrite them just by constantly running AES with fresh random keys.



