Uncorrectable freedom and security issues on x86 platforms (2016) (decentralize.today)
289 points by shirian on March 3, 2017 | hide | past | favorite | 135 comments



Isn't it sci-fi-level incredible, and frankly both scary and shady, that every modern x86 CPU has this forced sub-ring-0 control program? And that the CPU vendors apparently go to extreme lengths in hiding its functionality? Why would even large vendors like Apple or Dell agree to this?

The 30-minute timeout is particularly mischievous. It's like they REALLY want to slow down any effort at patching out the ME.

Are we going to have to wait on an insider leak on what's the real deal here? Or have I completely missed out on a perfectly good excuse for what's going on?


NSA, 100%. Sometime around ten years ago, governments decided the internet was too "dangerous" to be free. The Arab Spring cemented that into their minds, and now a bastion of free thought has become the world's biggest spying apparatus.


Please. While the NSA no doubt takes advantage of this probably-insecure privileged processor, I seriously doubt they were behind it. Secure boot is an obvious business need, and Intel and AMD clearly implemented it in the laziest way possible. And by laziest I mean: nobody is going to argue with you in a meeting if you say "we don't need to release the source code for this".

Seriously, anyone who has actually worked in a real company knows that it is a huge amount of effort to get source code released to the public, and if any of it is licensed from third parties it is probably near-impossible.


We're not talking about secure boot, we're talking about Intel Management engine, which is a totally different story. You do not need ME at all to implement Secure Boot feature. Actually, no one really knows what Intel ME is doing, and _that_ is a huge problem.


Though I agree that there is a high likelihood of nefarious activity going on, I also agree that there are legitimate uses for extra processors and firmware.

Take microcode, for example. At one time (as I understand it) microcode was not a signed blob. However, companies wishing to hide details of their microarchitecture chose to encrypt it.

My guess is that these encrypted blobs grew first out of corporate closed source culture, which is strong in HW companies. If they are subverted with actively malicious code it was probably by secretive efforts, not the NSA simply propositioning the HW manufacturer.

Finally I'd like to point out that unless you design your CPU chip yourself and oversee the layout of it on the die, it is also possible that the semiconductor manufacturer you hire could embed their own nefarious processor within your design.

In practicality, I think running RISC V on an FPGA would have a very low risk of subversion. Though the FPGA design tools might add nefarious logic too.


The businesses which pay extra for vpro know at least some of what ME is doing.


Try using Intel ME. Then come back and tell us it's a tool for mass surveillance.

If you think there's an evil NSA front for this type of stuff -- it's Absolute Software. Their bits have been embedded in most BIOS packages since the 90s, and nobody has heard of them.


Absolute with their Computrace product is selling what amounts to a hardware rootkit, which can be enabled with a simple unprivileged exe or bash script.

You can wipe, encrypt, lock, view & kill processes, retrieve any file and view every file on machine, and view hardware & software status and licensing. It also incorporates a bunch of other features, but those are what scare me most.

This is only made worse by the fact that it is readily exploitable: https://threatpost.com/millions-of-pcs-affected-by-mysteriou...


It certainly seems unusual that this software exists, but only from a single company. You would think HP/Dell/Lenovo/etc who are desperate for services revenue, would be making a similar technology if it was valuable.

A past employer looked into the product and had a reasonably high level engagement. We never got complete answers to many questions, and the company itself didn't feel particularly large. Granted we disengaged when we couldn't make the ROI work -- we just don't lose many devices. It seems unusual that a teeny company from Vancouver that nobody has heard of can navigate the bureaucracy of massive PC vendors and Asian suppliers of motherboards and android SoCs for decades.

It also seems weird when you consider that Intel, despite having a near monopoly on x86 and the ability to get other mega corps to put Intel stickers on things, (and even push them to make Atom phones that nobody wants!) gets comparatively little love for its management layer.


Intel's ME/vPro can do a lot, but it's nowhere near the fit and finish of Computrace. They aren't very good at sales, but once you become a reseller there are a ton of features you can access and use to manage your computers.

The reason Absolute is Vancouver-based, by the way, is that the Canadian government gives massive tax breaks to software companies, which is why a ton of point-of-sale and other software companies are based just to the north.



So why not build our own network with crypto, blackjack & hookers on libre hardware? Say on an OrangePi PC2 with a bunch of high gain USB 5GHz radios attached, and throw some spinning rust on there so you can run a Nextcloud instance and/or join your local Ceph cluster/IPFS.

We have CJDNS (which Salsa20s all your data & can VPN legacy networks to ya), fully FLOSS SBCs for under $20 each, and 802.11n and AC outdoor radios can be had for cheap; this is merely a community-involvement problem.


I'm a software engineer myself, but I can't even contribute to projects such as lowRISC -- they're way beyond my abilities. Right now I'm learning to program microcontrollers, and I want to learn about FPGAs as the next step.

I also work at a company that has exactly this focus -- to sell, and eventually produce, devices that can be run with free software from top to bottom -- but I don't see us producing our own devices in the next 5 years, even if we became wildly successful.

The hope seems to lie with ARM for the moment -- the C100 / C201 even have the Embedded Controller (EC) code available -- but they do have plans to implement something similar to the ME, AFAIK.


Yeah, I'm not advocating building our own SoC, as just taping out a chip costs tens of millions; instead I'm advocating using inexpensive Arm64 chips that are already a known quantity (firmware-free, mainlined drivers) to build a fast & secure network, and scaling from there.


The issue is not widely known, silicon is very costly to manufacture, and most people frankly don't care, as long as the spying is unobtrusive (and hell, it is).

Also, most people already live with the assumption that their computers are cracked/hacked/virused the moment they are connected to the internet -- all my friends and relatives ask me to check their computers for viruses, and almost none trust their computers or phones (especially Android phones, it seems). For such people, for whom this is the natural state of the world, it's very hard to imagine they can change anything about it -- and telling them that there are backdoors from the moment the laptop is assembled doesn't help much.


Sure, but the silicon & libre drivers already exist and don't need to be developed from scratch, so at this point it's a marketing problem of selling a more secure computing box.


Anyone trusting a computer - any computer - is a giant fool in my book. Trust is a strong word, and computers suck balls fundamentally at keeping information safe.


tomesh is literally doing CJDNS+OrangePiZero+5GHz+WAP+802.11s to get the most inexpensive yet performant meshing node. Come chat via Matrix at #software:tomesh.net


Mmm, how is throughput? I know that on an OrangePi PC (same Allwinner H3) I was getting 5MB/s for a point to point transfer, didn't check relay throughput though. I assume the H5 would do better with gigabit and 2.5x better NEON performance, but I've yet to test (just got kernel 4.10 built for it & booting Debian reliably).


Seems to be good, here are some iperf3 tests over the link:

OPi with no build flags:

  [ ID] Interval         Transfer    Bandwidth       Retr
  [  4] 0.00-120.00 sec  290 MBytes  20.3 Mbits/sec  165   sender
  [  4] 0.00-120.00 sec  290 MBytes  20.3 Mbits/sec        receiver

OPi with optimal build flags:

  [ ID] Interval         Transfer    Bandwidth       Retr
  [  4] 0.00-120.00 sec  366 MBytes  25.6 Mbits/sec  141   sender
  [  4] 0.00-120.00 sec  366 MBytes  25.6 Mbits/sec        receiver


>NSA 100%

What about the other powers? China, France, and Russia have their own NSAs that would be asked to provide solutions to protect all the PCs in the service of their own governments; what are they doing about it?


Also, how can the American NSA trust the manufacturing process? Couldn't the Chinese counterpart of the NSA reprogram the code of this processor when it was manufactured? Reflections on trusting trust, that is...


Take sample processors, decap them, x-ray them, and compare the designed circuitry with the x-rayed one. That would be one way.



It's the kind of stuff the boring, slow-moving TCG group was talking about long ago for improved DRM and security. Then they advertised their backdoors publicly as vPro, while people here argued that Intel's hardware RNG was trustworthy (lol). They sold it as a management benefit for enterprises. Eventually those gradual changes got to the current point.

Although I recommend switching off x86, I don't buy the claim that we can't do another x86 without backdoors. For one, Intel or AMD might do a "semi-custom" design without one for a price. Second, Centaur has long been the third player in x86, with low-power parts sold by VIA. They'd probably do a high-performance design if incentivized. Third, many x86 players showed up over time that simply failed in the market or were acquired. Nothing is stopping another unless there's a legal restriction I'm unaware of.


There is DMP, a manufacturer of x86 CPUs (no x86_64, though): http://www.dmp.com.tw/


Never even heard of them! Thanks for the tip!


Below ring 0, and in addition to the ME, you have the lesser-known "ring -2": https://en.wikipedia.org/wiki/System_Management_Mode

There is also microcode, and many patches are issued through microcode.


More details on this http://wiki.osdev.org/SMM


Happy to see more attention to this problem. The FSF called it out back in 2014, as well:

https://www.fsf.org/blogs/community/active-management-techno...


Should point to: http://mail.fsfeurope.org/pipermail/discussion/2016-April/01...

Canonical presentation: REcon 2014 - Intel Management Engine Secrets (Igor Skochinsky) https://www.youtube.com/watch?v=4kCICUPc9_8

Decoding ME firmware in BIOS updates until Skylake (2015): http://io.netgarage.org/me/


While it would be great to liberate x86, more than 4 billion people in the world use mobile phones, and phones are beyond saving. x86 is in very healthy shape compared to the clusterfuck that is the smartphone industry. If you think that coreboot is a fringe project, you need to head over to replicant or neo900 to see what the fringe actually is.


Well, then you can go deeper and check out the baseband firmware liberation project, OsmocomBB [1], since baseband firmware is in far worse shape than application firmware (and TrustZone firmware) in the smartphone industry.

[1] https://osmocom.org/projects/baseband


I don't think baseband can ever be truly free because of regulatory issues. The only realistic way of containing it would be through isolation.


You could also change the regulatory framework.


So the revolutionary wars didn't happen?


I thought Apple was working on their own baseband chip? I assume that's what you're alluding to, yeah?


... I confess I'm very frustrated reading about how trusted computing modules hurt the cause of FOSS but no alternatives to actually try and carry out cryptography to execute trusted code.

Inevitably the complaint is, "Well, if they have physical access you're screwed anyway." And I just don't understand how anyone can maintain that farce when the last year has shown that it's a genuine challenge even for the US FBI to unlock a mobile device without the owner's say-so, and it's getting harder all the time.

If you truly believe that physical access trumps any security, then you can never trust your hardware anyway, as it is exceptionally hard to prove it conforms to a spec.


I'm not sure I understand your argument here, and I'd like to.

For physical access, I thought the case was: "If anyone has access to your device while unlocked, or locked but not disk-encrypted, consider it permanently compromised. If anyone has access to it while disk-encrypted, consider replacing it if you're very concerned." The 'permanently' bit is for unknown firmware compromise, and this position seems pretty sane.

But trusted computing modules are something else altogether. Even non-physical access can compromise them. There's some evidence that they can be compromised around a fully-encrypted disk. And checking whether they're compromised is effectively impossible.

Yes, it might be possible to execute trusted code around the module, if it never hits the machine in a vulnerable form. But that's slow, non-interactive, and virtually nonexistent at present. Right now, trusted computing modules do compromise machines at roughly ring -3, with no real recourse.


The FBI issue wasn't a technical issue. When they gave up on grandstanding, the phone was cracked in hours.


They purchased a hack for an old phone, according to the stories I read.


> And I just don't understand how anyone can maintain that farce when the last year has shown that it's a genuine challenge even for the US FBI to unlock a mobile device without the owner's say-so

Difficulty? Yes. But the FBI is not the NSA; they don't specialize in such attacks. It's like asking your plumber to do heart surgery. So they commissioned it to someone else who does, and boom, they had access.

Strong cryptographic security shouldn't have a pricetag any lower than "we feed all the hydrogen in the universe to black holes to harvest enough energy for the computations".

And phone security is orthogonal to baked-in firmware signing keys. The only change you need is allowing the user to add their own signing keys maybe with the caveat that all data in the protected keystore gets destroyed in the process. Then you have freedom and secure boot in one package. The signing keys are the issue, not the ring -1 management code.

> If you truly believe that physical access is a trump of any security then you can never trust your hardware anyways, as it is exceptionaly hard to prove it conforms to a spec.

Here are some simple steps:

  1. compel a manufacturer to create a spy firmware, signed with their signing key
  2. get access to a device for a few minutes
  3. patch in firmware that exfiltrates the data once the device is unlocked by the user
  4. return the device to the user / to where the user placed it
This is assuming strong encryption keys. If the device is only protected by an unlock code, your steps look like this:

  1. acquire device
  2. a) compel manufacturer to create a firmware that bypasses "delete on unlock failure" feature
     b) unsolder chips, apply silver needles to flash controllers so you can
        read/restore internal key storage whenever it gets wiped
  3. enumerate all N-digit pass codes until it is unlocked

As you can see, hardware security does not save you when the strong keys are on the device and the user only enters a weak key. Similarly, hardware security does not save you if a hostile entity gets access to your hardware.
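To put rough numbers on the pass-code enumeration step, here's a back-of-the-envelope sketch (mine, not from the thread) of how quickly an N-digit unlock code falls once the wipe-on-failure limit is bypassed; the guess rate is an assumed, illustrative figure:

```python
def worst_case_seconds(digits: int, guesses_per_second: float) -> float:
    """Time to enumerate the entire N-digit numeric passcode space."""
    return 10 ** digits / guesses_per_second

# Assuming the "delete on unlock failure" limit is gone and the hardware
# accepts ~100 guesses/sec (an illustrative number, not a measured one):
four_digit = worst_case_seconds(4, 100)   # 100 seconds
six_digit = worst_case_seconds(6, 100)    # 10,000 seconds, under 3 hours
```

Even a six-digit code is an afternoon's work at that rate; only a high-entropy passphrase changes the math.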


I feel like you danced around the central point of my post: there are no suggestions for how to secure and harden devices without refining these trusted computing techniques. We need to harden these devices.

Your argument is no one can be trusted to make them. But my argument is that if you believe that then you know you can't trust anyone to make anything one way or the other.

Surely rather than botch the whole thing (because we don't like the vendors) we should start to propose stronger and more consumer-centric versions of these?

A skeptical and cynical part of me notes: given the tiny number of users who can actually verify their machines are what they say they are, all the "trusted computing" rebukes do is argue against a tech that actually does mitigate real attacks for the vast majority of users.


> there are no suggestions for how to secure and harden devices without refining these trusted computing techniques. We need to harden these devices.

Because the thing you are asking for is not possible. You have a bad premise:

> And I just don't understand how anyone can maintain that farce when the last year has shown that it's a genuine challenge even for the US FBI to unlock a mobile device without the owner's say-so and it's getting harder all the time.

Which has two flaws. First, it wasn't a challenge for them, they were just using it as an excuse to whine about the second one that actually is. And second, the only real security is math (encryption), but it doesn't require any special support from the hardware.

If you have full disk encryption with a strong passphrase and the device is currently locked (i.e. the key is not in memory), the only way to get that data is to have the passphrase or break the encryption, and breaking the encryption is not expected to be possible.
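As a minimal sketch of that "the real security is math" point: turning a passphrase into a disk-encryption key needs nothing more than a standard key-derivation function, no special hardware. The parameters here (PBKDF2-SHA256, the iteration count, the salt) are illustrative; real FDE systems such as LUKS tune their own:

```python
import hashlib

def derive_disk_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a passphrase into a 256-bit disk-encryption key (illustrative params)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)

key = derive_disk_key("a long random passphrase", b"per-volume-salt!")
assert len(key) == 32  # a full 256-bit key, derived in pure software
```

With a strong passphrase, the attacker's only options are learning the passphrase or breaking the cipher itself.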

The problem is, if someone has physical access to your device and can compromise your firmware, they can record your passphrase the next time you unlock the device, and then they don't need to break the encryption.

But this is not a thing you can do anything about. If someone has physical access to your device they can just steal it and leave you with one that looks the same up to the point of you entering your passphrase and then transmits it to the attacker. Nothing about the original device can fix that because it isn't the original device.

Similarly they can install a surveillance device in your room that can record you entering your passphrase and then come back tomorrow to take your device.

The only answer to these attacks is physical security. Secure boot does nothing.


> Which has two flaws. First, it wasn't a challenge for them, they were just using it as an excuse to whine about the second one that actually is. And second, the only real security is math (encryption), but it doesn't require any special support from the hardware.

They bought a hack from another company for an old model of phone. If the case had involved the latest model of handset, the vendor claimed they had not yet hacked it (but were confident they would).

> The problem is, if someone has physical access to your device and can compromise your firmware, they can record your passphrase the next time you unlock the device, and then they don't need to break the encryption.

And if the device is tamper-resistant? I had this same conversation with a person who hated Yubikey. Nearly exactly the same. It made even less sense, because the entire point of the Yubikey is to be tamper proof.

> The only answer to these attacks is physical security. Secure boot does nothing.

I think maybe what I object to most is that essentially the only attacks considered in this discourse are attacks directly by nation-states at scale. Not only is it clear that handsets and self-built computers are subject to these (at scale), but it's also not clear that avoiding a TPM yourself protects you from attacks against TPM-backdoored devices, since your information in an online world is stored on commodity clouds that probably DO have that hardware, and if an attack exists, it'll certainly be able to ignore FDE.

Even if we ignore that, TPM mitigates real attacks we see in the real world, and increases the difficulty of those attacks. Average consumers are a case that should be considered in the discourse.

Most people can't effectively harden themselves against nation-state level attacks (if only because incarceration and interrogation exist and even physical security won't stop them), but nation-state level attacks involving a conspiracy amongst manufacturers and the NSA is the justification used to discredit the use of TPMs.

And for the dubious benefit of saying, "Well I have a spec and presumably this board is fully defined by this spec." Of course, truly verifying the board has no back doors is not made substantially easier by the absence of a TPM. So I have trouble believing that this is not an argument among different factions who want final say on a wide variety of consumer hardware.


> Most people can't effectively harden themselves against nation-state level attacks (if only because incarceration and interrogation exist and even physical security won't stop them), but nation-state level attacks involving a conspiracy amongst manufacturers and the NSA is the justification used to discredit the use of TPMs.

That is missing the point, as this is not about the security of an individual against targeted attacks, but about the reliability of our governing structure. In order to harden a democracy against subversion by minorities, it's not necessary for each individual to be able to fend off an army. That does not mean that implanting every citizen with a centrally triggered kill device would be a good idea.

Also, no conspiracy is required: if there is a remote access key, say, that is a single point of failure that no company can defend if a nation state wants access to it, even if defending it may well be their intention.


> They bought a hack from another company for an old model of phone. If the case had been involving the latest model handset, the vendor claimed they had not yet (but were confident they would) hack it.

There is actually an important distinction to make here too.

When you have something like Secure Boot, whose purpose is to make the device trustworthy to enter your passphrase into, it's completely impossible. You don't know if the device you're using is actually the same device, you don't know if someone is watching you, the thing it claims to do is not a thing it can actually accomplish.

But Apple does something separate from that. They have tamper-resistant hardware for storing keys, so that the hardware can store a strong key and enforce a maximum number of guess attempts for a weaker password/PIN.

The disadvantage of this is that it's pure attack surface compared with using a strong passphrase to begin with. If you have a strong passphrase the attacker has to break the encryption. If you have a weak PIN for hardware protecting a stronger key the attacker can break the encryption or break/backdoor the hardware or guess the weak PIN before hitting the maximum number of attempts.

The advantage is of course that it lets you use a PIN instead of a long passphrase, but there is also something else. That hardware doesn't need root. All it needs is to store a key while the device is locked and then spit it back out if you give the right PIN and erase it if you make too many bad attempts. No part of that inherently requires it to be at ring -3. It can be completely independent from all of that.
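A toy model of that storage logic -- hold a strong key, release it only for the right PIN, erase it after too many failures. The class name and attempt limit are made up for illustration; a real secure element does this in tamper-resistant hardware, not Python:

```python
class ToyKeyStore:
    """Toy software model of a PIN-gated key store (not a real secure element)."""

    def __init__(self, pin: str, key: bytes, max_attempts: int = 10):
        self._pin = pin
        self._key = key
        self._attempts_left = max_attempts

    def unlock(self, pin_guess: str):
        if self._key is None:
            raise RuntimeError("key was erased after too many bad attempts")
        if pin_guess == self._pin:
            return self._key          # correct PIN: release the strong key
        self._attempts_left -= 1
        if self._attempts_left == 0:
            self._key = None          # too many failures: erase, don't just rate-limit
        return None
```

Note that nothing here needs ring -3 access: the store only guards one key and a counter, which is exactly the parent's point about keeping it independent.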

> And if the device is tamper-resistant?

That's the problem with Secure Boot -- it doesn't matter. Stealing your passphrase by recording it is an attack that can be pulled off by a middle schooler with a nanny cam. It's easier to do that than to backdoor the firmware on a non-tamper-resistant computer, which at least requires you to know what "firmware" is. So what attack are we actually preventing at the cost of having untrusted and potentially vulnerable code at ring -3?


They don't need physical access to compromise your firmware. I thought that was one of the things the original article claimed, at least. The ME's firmware can be updated remotely if the system is plugged in (whether powered on or not).


Yes, but it would clearly be possible to create hardware without that special misfeature.

What isn't possible is to create hardware that can protect you against an attacker with unrestricted physical access.


I was struck by the following passage:

>including Secure Boot, which even now requires FOSS users to purchase a license from Microsoft to boot FOSS on affected machines that lack an appropriate Secure Boot override.

Can someone explain this to me? Would this be, for instance, Lenovo laptops making a deal with Microsoft since Windows is the default OS installed on these laptops? Is Microsoft mandating that all OEMs/hardware vendors configure secure boot with an MS signing key? Even if I order a laptop with no OS installed?


Secure boot has 4 types of keys:

The signature database (db) and forbidden signature database (dbx) contain a whitelist and a blacklist, respectively, of the keys, signatures, and hashes that are allowed (or forbidden) to run.

Updates to either of the above lists must be signed by a Key Exchange Key (KEK). Most implementations allow multiple Key Exchange Keys.

Updates to the list of Key Exchange Keys must be signed by the Platform Key (PK). Most implementations only allow 1 PK, and that PK is Microsoft's.

This means that any binary run on a secure boot machine with Microsoft's PK has a chain of trust rooted at Microsoft.

It may be possible to update the PK before transitioning the system to secure mode; but most consumer devices ship already in secure mode. This is different from simply disabling secure boot, which would still not allow you to update PK (for obvious reasons).

EDIT: It appears that it is called "user mode" and "setup mode" instead of secure mode.

Also it seems that some systems allow you to re-enter setup mode from the "bios" [0].

On an unrelated note, what do we call the firmware provided settings app now that it is no longer part of the BIOS.

[0] https://wiki.gentoo.org/wiki/Sakaki%27s_EFI_Install_Guide/Co...


Thanks for the detailed answer. In regards to:

>" Most implementations only allow 1 PK, and that PK is Microsoft's."

Isn't this a bit monopolistic and coercive, though? "If you want the Microsoft hologram on your product, the PK has to be Microsoft's and there can only be one PK." I can't believe this doesn't violate some type of anti-trust law.


How so? The OEMs want to make hardware that runs Windows. Microsoft provides a specification[0] and certification suites[1] that defines what "Windows-compatible" means, which OEMs then follow. Nothing coercive about that.

There is no open standard that defines what a PC is. Linux and other operating systems are piggybacking on the Windows PC standard. If they want OEMs to manufacture hardware to their standards, they'll have to create their own "Linux-compatible" specification and persuade OEMs to follow it.

[0] https://msdn.microsoft.com/windows/hardware/commercialize/de...

[1] https://msdn.microsoft.com/en-us/library/windows/hardware/dn...


>"The OEMs want to make hardware that runs Windows."

I disagree. I think the OEMs want to make hardware that consumers buy. I don't think they care one bit what OS consumers run on top of their hardware. In fact, I would imagine OEMs would prefer to bring their products to market without consulting Microsoft at all.

It's coercive in the sense that secure execution is predicated on there being only one PK, and the OEMs have to knuckle under to MS just to be considered a "potential" machine that Microsoft allows to run Windows.

>"Linux and other operating systems are piggybacking on the Windows PC standard."

What exactly is the "Windows PC standard"? I have never heard this term before. Linux didn't piggyback on Windows anything. Maybe you mean x86? x86 predates Windows.


They also want to make hardware that doesn't get them assassinated by the King's agents.


People are calling the old BIOS "PC BIOS" and UEFI "UEFI BIOS" nowadays. So feel free to continue to call it a BIOS.


Some UEFI implementations even call themselves BIOS :-)


I tend to think Secure Boot isn't really a grand conspiracy. You can't realistically expect end users to be in charge of keys, so MS does what's best for itself. The problem is with poor implementations by vendors that don't support self-signing, and in some cases don't even support disabling secure boot.


>"You can't realistically expect end users to be in charge of keys."

No, but the OEM vendors, not MS, should be the owners of the sole PK allowed in the engine. MS doesn't make the hardware, yet they are in charge of it. I think that's the issue; it has nothing to do with a grand conspiracy.


> Is Microsoft mandating all OEMs/hardware vendors to configure secure boot with a MS signing key?

Basically yes; it's required to get the Windows sticker. I haven't heard that MS charges money to sign bootloaders, though.


I believe that the Windows 10 logo requirements are exactly the opposite of that.

If you look at the UEFI requirements for Windows 10[1], specifically clauses 19 and 20, it says for non-ARM systems the user MUST be able to put Secure Boot into Custom signature-checking mode.

[1] https://msdn.microsoft.com/windows/hardware/commercialize/de...


They're not opposites. PCs are required to have secure boot and they're required to have MS's cert installed and they're required to be able to disable secure boot.


This was true in Windows 8 times, but with Windows 10 the requirement to be able to turn off Secure Boot vanished: https://arstechnica.com/information-technology/2015/03/windo...

The whole story around Secure Boot could be understood (even without a tinfoil hat) as a part of a slippery slope to lock out alternative OSes, highly recommended post: https://www.phoronix.com/forums/forum/phoronix/general-discu...


Thanks. I have to wonder how much of a bureaucratic headache it is to get your bootloader signed.


James Bottomley navigated the bureaucracy and lived to tell about it: https://blog.hansenpartnership.com/adventures-in-microsoft-u...


Oh interesting, thanks for the link.


> now requires FOSS users to purchase a license from Microsoft to boot FOSS

This isn't actually true, is it?


They charge a fee for a certificate that lets you submit binaries to the Microsoft signing service. If you're a user of a distribution you won't have this problem, because distributions have already paid and ship a hack called "shim" which effectively enrolls the distribution's own keys into the trust database. But if you wanted to make your own distribution you would have to pay.


Nope. It should be normally possible to disable Secure Boot.

In fact, many distributions don't support Secure Boot at all.


> Nope. It should be normally possible to disable Secure Boot.

This is only true on x86. On arm, Microsoft's requirements state that it should not be possible for users to disable Secure Boot.


But is there any interesting ARM hardware that is certified for Windows? (Honest question, I have no idea.)


For Windows RT, looks like there wasn't that many certified devices.[1]

[1]: https://en.wikipedia.org/wiki/Windows_RT#Devices


This needs more attention, particularly now that AMD may actually look into cooperating with the community on this matter somewhat. I wouldn't get my hopes up yet, though, as this was a Reddit AMA done at a time when AMD is keen to please the community. This matter must not be allowed to go away if something is to be done about it.


Are there any projects out there that throw the baby out with the bathwater and just restart computing from the ground up with freedom as a foundation? I'd love to participate in something like that, and I think it'd be a great way to respark an '80s-like hacker movement.


Libreboot and coreboot are trying to open-source the software side of things (think DD-WRT, OpenWrt, or Tomato for routers -- custom firmware, basically). With hardware it's a bit of a different story. You hear about attempts from time to time, but getting away from Intel / AMD is really hard. The suggestions from the article about alternative architectures seem to be our best bet currently.


Alternative architectures are definitely the most pragmatic thing to go for. I was going off on a bit of a tangent from the article and just wondering if anyone has tried redoing the 70's - 90's without trying to be compatible with any existing technology, but still learning from the mistakes.


The crowdfunded Open-V implementation of RISC V comes to mind.

https://www.crowdsupply.com/onchip/open-v

However, it doesn't look like they're going to come remotely close to hitting their funding goal. Fabricating chips is expensive.


Pure speculation: Any entity that obtains the (financial, other) resources necessary to help facilitate such fundamental subversion is eventually convinced that the status quo is necessary to our survival.


Their attempts will be just that. There's a SHA-256 signature required to verify the ME code: no signature, no boot (or, less commonly, boot for only 30 minutes). They won't share the keys with just anyone.
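A toy sketch of the check the boot ROM conceptually performs. This is not Intel's actual verification code, and a plain digest comparison stands in for the real RSA-signature-over-SHA-256 scheme; names here are illustrative.

```python
import hashlib

def verify_me_region(firmware: bytes, expected_digest: bytes) -> bool:
    """Return True only if the firmware hashes to the signed digest.

    In the real scheme the digest would be recovered from an RSA
    signature checked against a public key fused into the silicon;
    the bare comparison here just models the pass/fail decision.
    """
    return hashlib.sha256(firmware).digest() == expected_digest

good_fw = b"signed ME firmware image"
digest = hashlib.sha256(good_fw).digest()

assert verify_me_region(good_fw, digest)                # boots normally
assert not verify_me_region(b"patched image", digest)   # 30-minute shutdown timer
```

The point is that any single-bit patch to the ME region changes the digest, so there is no way to modify the firmware without Intel's signing key.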


It's SHA-256?

I wonder if a Bitcoin shared mining setup could co-opt some of those hashes to brute-force the keys.
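A back-of-envelope check suggests not. Even granting the (generous, assumed) premise that the whole Bitcoin network's SHA-256 throughput could be redirected at the problem, a 2^256 keyspace is hopeless, and mining ASICs compute double-SHA-256, not RSA, so they couldn't be repurposed for the signature math anyway. The hashrate figure below is a rough circa-2017 assumption.

```python
# Order-of-magnitude estimate: exhausting a 256-bit keyspace at the
# Bitcoin network's hashrate (assumed ~4e18 hashes/second).
network_hashrate = 4e18          # hashes per second (assumed)
keyspace = 2 ** 256              # ~1.16e77 candidates
seconds_per_year = 3.15e7

years = keyspace / network_hashrate / seconds_per_year
print(f"{years:.2e} years")      # roughly 1e51 years
```

For comparison, the universe is on the order of 1e10 years old.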


Here's the context, for anyone who didn't see the AMA:

https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea...

This is currently the top-voted question with almost four thousand upvotes. AMD gave a noncommittal "we'll look into it" response, but now at least they're aware that a lot of people actually do care about things like this.


RISC-V comes to mind as an open and free instruction set. The step to an actual implemented-in-silicon processor is pretty big though, and for any such chip it's hard to verify that it really is as free as it's claimed to be. You can't disassemble your CPU (or perhaps you can, but for very few values of "you").


Yeah, basically you have to have a bunch made and destructively analyze some of them (https://www.extremetech.com/extreme/141077-how-to-crack-open...).

That'll build confidence that the others aren't compromised.

The costs could be reasonable (http://electronics.stackexchange.com/questions/7042/how-much...) under a kickstarter-style campaign.


I'm trying to understand all of this and especially the threats to privacy, control of my computing hardware, and data security.

I read about some new hard/software for secure boot, etc., but don't recall all the details now.

So, for a shorter approach, suppose I just buy a processor from AMD, a motherboard from ASUS, hard disk drives from Western Digital, etc., and plug it all together for myself. So, then I'm the manufacturer or OEM of my computer.

Q. 1. For what the OP is talking about, where do I have threats to privacy, control of my machine and its data, and security?

Q. 2. To use the machine I plugged together, do I have to get some keys from Microsoft?

Q. 3. Suppose I install operating systems from Microsoft, e.g., Windows 7 64 bit Professional, Windows 10, Windows Server or the database SQL Server. Then do I have to get keys from Microsoft?

Q. 4. Will the support processor and its software, whatever they are called, on their own without my knowledge or approval use the Internet to send/receive data from/to my computer, modify the data on my hard disks, etc.?

Thanks.


1. The motherboard has an ARC chip that loads firmware included in the flash chip. That ARC chip is supposedly inside the PCH (Platform Controller Hub, a northbridge on steroids) - it's effectively in the silicon; you can't remove it.

2. Depends on the motherboard and the BIOS written in the flash chip

3. No. They are already signed.

4. If somebody controls them and asks them to do so. All that's necessary is a LAN connection (or Wi-Fi, but only with Intel chips) and power. The HDD is completely irrelevant, as is the OS.


The motherboard I have in mind is the ASUS m5a97 r2.0. Back in October, 2015 I got a PDF on that motherboard at

http://data.manualslib.com/pdf2/42/4150/414970-asus/m5a97_r2...

Just checking, the PDF does mention the Unified Extensible Firmware Interface (UEFI) but not ARC or PCH.

That ASUS manual does mention that the UEFI BIOS does offer automatic updating of the BIOS version; that feature, if enabled, does seem to raise security concerns.

Looking at the UEFI page of Wikipedia at

https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_In...

there is

> UEFI can support remote diagnostics and repair of computers, even with no operating system installed.[3]

which seems to raise some security concerns. Also it does appear that some people trying to install an operating system might encounter some mud wrestling. Maybe what I'm intending to do with Microsoft's Windows 7, 10, and Server will be easy enough.

Thanks.


Do you still work at FedEx in Memphis? If so, we're in the same vicinity. Maybe we can have lunch some time and I'll tell you all about the various subversions going on. :)


No, I'm not in Memphis or with FedEx but am in NYS and doing a startup.


Oh ok. Nvm. Guess I'll keep doing it a bit at a time here then. Others covered this topic well, though.


Want complete software freedom? How about the MIPS chips the Russian military uses[1]? Those don't have an NSA back door. Sucks you can't really buy them as they are only made for use in Russian military and government applications.

"Last year, the Russian government announced that it doesn't want to rely on Intel and AMD chips from the U.S. anymore and will focus more on using homegrown chips from Russia."

[1] http://www.tomshardware.com/news/baikal-t1-mips-cpu-omnishie...


Well, the actual CPU cores are designed by Imagination Technologies, which is based in the UK. I would not be so sure that they are fully safe.

You can make an argument that it is still an improvement, because there are no obvious binary blobs required by the system. But in this case, I would recommend going for one of the many Chinese ARM cores -- at least you can buy them easily.


There are also the Elbrus processors which I believe are entirely designed in Russia, although some models are manufactured by TSMC (and some are manufactured in Russia).

The early models implemented a proprietary VLIW architecture with enterprise-y features (like hardware-tagged pointers, probably borrowed-ish from Itanium) with a dynamic binary translation layer for x86 compatibility on top, not sure if they still do that or perhaps the other way around now.


Those probably have Russian backdoors though.


In most cases, a foreign gov spying on you is harmless.


In most cases a foreign government spying on you is trading their insights with your government in order to sidestep domestic spying restrictions.


What are the chances that the access passes from the foreign government to actors you don't want spying on you: criminals, or your own government's intel agencies?


>Both serve effectively the same purpose; to ensure that the physical owner of the machine never has full control of said machine.

That is the end-result, yes, but that wasn't the purpose: the purpose was to allow companies to keep track of their laptops--to remotely push out firmware updates, to inventory the hardware/asset list, etc. It was a convenience feature, essentially.

Of course, the end-result, as stated, is that you've got a complete black-box second processor that can do whatever it wants, even when your device is off.


Nah, that wasn't the purpose, unless I'm misremembering. It came later as a selling point. It started in initiatives such as the Trusted Computing Group, where these companies agreed on technologies to control what runs on the PC for security and DRM purposes. It featured regular, secret meetings on top of the public ones. They took lots of flak when the DRM goals got press attention. That they're just helping companies manage stuff or users fix computers sounds much more pleasant. They also made sure it was true by adding those features. ;)


>As Intel owns all rights to the x86 architecture, there will never be any new manufacturers licensed to make x86 chips ...

This strikes me as the root problem here. How can one company be granted a monopoly on what is basically an instruction set? Particularly in the case of the instruction set our civilization runs on?

Since I can't understand how something like this could happen I don't understand why any replacement architecture wouldn't end up being controlled by a single entity.


The legality of x86's ISA being copyrighted/patented aside, the x86 ISA isn't that attractive for any new endeavours. It's a very complex ISA which is incredibly difficult to decode. Internally, it is anyway converted to RISC style microops. So as a competitor, you don't really gain a whole lot by implementing the x86 ISA apart from the ability to run software without writing a new compiler. What's more important is the microarchitecture.
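To make the decode difficulty concrete: x86 instructions vary from 1 to 15 bytes, so a decoder can't know where the next instruction starts until it has parsed the current one. The encodings below are standard x86-64 machine code; the fixed-width contrast is the base RISC-V ISA.

```python
# A few real x86-64 encodings of varying length.
x86 = {
    "ret":            bytes([0xC3]),                                      # 1 byte
    "mov eax, 1":     bytes([0xB8, 0x01, 0x00, 0x00, 0x00]),              # 5 bytes
    "movabs rax, 1":  bytes([0x48, 0xB8]) + (1).to_bytes(8, "little"),    # 10 bytes
}
for name, enc in x86.items():
    print(f"{name:14} {len(enc):2} bytes")

# Base RISC-V instructions are a fixed 4 bytes, so several decoders can
# work in parallel on a fetch block with no length-parsing dependency
# between them -- one reason wide decode is cheaper on a RISC ISA.
RISCV_INSN_BYTES = 4
```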


I believe it's not the instruction set they have patents on per se, but rather on how to implement it efficiently.


Which provides strong evidence that patents are the root of all evil.


> These technologies, in turn, are used to implement various forms of remote control and Digital Rights Management (DRM) technologies, including Secure Boot, which even now requires FOSS users to purchase a license from Microsoft to boot FOSS on affected machines that lack an appropriate Secure Boot override.

I dislike the mandatory use of these features as much as the next nerd, but this is inaccurate FUD. Secure Boot is code in flash that checks the signature of whatever you try to boot against some rather complicated policy. It's regular code and would work more or less the same on any platform that runs machine code off of ROM or flash.

There's something that Intel calls, IIRC, "Verified Boot" that tries to prevent someone with an in-system programmer or desoldering skills from changing the flash, but that has nothing to do with the Management Engine either.

And FOSS users don't need to purchase any license from anyone. They can use a tool like Linux Foundation's PreLoader or Red Hat's shim (open source but awkward to modify because you need the signed binary to boot on a stock system) to boot anything they like. No negotiations, no license, no communication with MS at all.


> > These technologies, in turn, are used to implement various forms of remote control and Digital Rights Management (DRM) technologies, including Secure Boot, which even now requires FOSS users to purchase a license from Microsoft to boot FOSS on affected machines that lack an appropriate Secure Boot override.

> I dislike the mandatory use of these features as much as the next nerd, but this is inaccurate FUD. Secure Boot is a code in flash that checks the signature of whatever you try to boot against some rather complicated policy. It's regular code and would work more or less the same on any platform that runs machine code off of ROM or flash.

"Regular code" doesn't mean it's not proprietary, and doesn't mean that it's not concerning for free software users.

> And FOSS users don't need to purchase any license from anyone. They can use a tool like Linux Foundation's PreLoader or Red Hat's shim (open source but awkward to modify because you need the signed binary to boot on a stock system) to boot anything they like. No negotiations, no license, no communication with MS at all.

Those preloaders are signed by Microsoft. While it is a good hack for distributions at the moment, it doesn't mean that Microsoft is no longer in the loop. They still have an incredibly worrying amount of control over what can run on modern hardware.


> "Regular code" doesn't mean it's not proprietary, and doesn't mean that it's not concerning for free software users.

Which has essentially nothing to do with the article and isn't even Intel's fault in any meaningful sense.


Would love to see an ARM or MIPS setup get within shouting range of Intel.

I have yet to hear any explanation of the IME that makes sense without the presence of user-hostile intent.


> I have yet to hear any explanation of the IME that makes sense without the presence user-hostile intent.

The entirety of enterprise laptop management. Not because you don't want users to change their laptop. The point is to be able to run updates for the users.

Or consider the remote KVM option. Disregarding security, that is a sysadmin's wet dream. Being able to recover a system that can't boot saves a lot of boots on the ground.


I get all that - I've worked in enterprise IT my entire career.

It does not explain the 30 minute timer.


> It does not explain the 30 minute timer.

An innocent use would be: "If the ME is hung, turn it off and on again."


Sigh. If I need to spell this out:

Why is the ME watchdog mandatory?

What innocent explanation details why Intel has chosen to deny me the option to consider the ME a security risk in my environment and disable it?


Even if the CPU running Windows is bluescreened, you can still pull up a KVM remotely. Demonstrates the power that AMT possesses for good or ill.


Consumer ARM-based platforms in the market right now are even more locked down and spy-hook-infected than any x86 laptop. The baseband CPU on most phones is a far more capable device than the microcontrollers used for this on big core machines, and they routinely have access to all of DRAM without the possibility of interception by the application core.


You need a PKI infrastructure to implement AMT/IME.

I used it a few years ago to automatically build classroom computers for training classes. The trainer would pick a configuration, and a complete server and workstation environment would be installed.

You can also do KVM from a powered down state, brick the device, or validate that management engines are present.
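A quick first check for a reachable, provisioned AMT instance is probing the TCP ports AMT uses for its web UI and redirection services (16992-16995). This is a hedged sketch: open ports suggest AMT is provisioned and listening, but closed ports do not prove the ME itself is absent.

```python
import socket

AMT_PORTS = (16992, 16993, 16994, 16995)  # AMT HTTP/HTTPS and redirection

def probe_amt(host: str, timeout: float = 0.5) -> list[int]:
    """Return the AMT-associated ports accepting connections on host."""
    open_ports = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # refused, filtered, or timed out -> not reachable
    return open_ports

print(probe_amt("127.0.0.1"))  # [] unless AMT (or something else) listens there
```

Note that on Intel hardware these ports are answered by the ME itself, below the OS, so the host's own firewall never sees the traffic.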


Management of "corporate" computers: installing Windows on hundreds of machines at once, remotely wiping stolen laptops with corporate secrets. But the cancer has spread to all computers, regardless of domain of activity.


Is "uncorrectable" still a valid description? There seems to have been some progress lately in eliminating the parts of the ME that are not necessary?

https://news.ycombinator.com/item?id=13056997

https://news.ycombinator.com/item?id=13416378


> Major distributions have worked around this issue by purchasing a signing key from Microsoft for their binary packages, but the end user is unable to modify the signed software without a license from Microsoft, even though they have the source code available to them under the GPL.

Is this an accurate description of what is happening? (I don't pay much attention to desktop systems: I spend most of my time concentrating on the ever-worsening mobile arena.) Do these "major distributions" come with a recent version of bash? As someone who develops software under the GPLv3 license, I would not want my software being distributed to these machines via this hack :/.


> > Major distributions have worked around this issue by purchasing a signing key from Microsoft for their binary packages, but the end user is unable to modify the signed software without a license from Microsoft, even though they have the source code available to them under the GPL.

> Is this an accurate description of what is happening? (I don't pay much attention to desktop systems: I spend most of my time concentrating on the ever-worsening mobile arena.) Do these "major distributions" come with a recent version of bash? As someone who develops software under the GPLv3 license, I would not want my software being distributed to these machines via this hack :/.

It's not entirely accurate. Effectively what most modern distributions do is that they have a "shim" which is signed by Microsoft. That shim then enrols the distribution's own UEFI keys on the laptop. So their kernel is signed with both their own key and Microsoft's key. This means that you can modify your code without "permission" from Microsoft. openSUSE, Fedora and Debian all employ this tactic so that our distributions can boot on newer laptops.

Do I wish this wasn't necessary and that everything ran coreboot? Yes. Is there a better way of handling this problem? Not as far as I know.


I don't think GPLv3 anti-DRM clauses would kick in here. They would if 1) someone was distributing a combination of a Secure Boot-only machine with your software installed on it, and 2) the loader (which is the only thing really validated by Secure Boot) would actually try to continue the chain of trust, and validate all the other bits, such that the user cannot run a modified version of your software.

I'm pretty sure that neither of those is the case, however. Once the system boots using the signed loader, it's really just Linux, and you're free to replace the kernel and any bit of userspace as usual.

Furthermore, I seriously doubt that anyone is selling machines with Linux preinstalled that have Secure Boot which cannot be turned off - simply because that would become known pretty fast, and even aside from licensing issues, would elicit a very hostile reaction from the community (and hence many potential buyers).


I'm not 100% sure that this is what's being referred to, but in Ubuntu for example, the initial bootloader is a program called "shim". Microsoft signed shim - so computers with secureboot will run it.

Shim contains keys from Canonical or someone (I'm not sure), and verifies that GRUB has been signed by Canonical before running it. Then when GRUB runs the kernel, it calls back into shim first to verify the kernel has also been signed (Actually that last step wasn't enabled yet last time I checked).

So basically, until Microsoft changes their keys, by signing shim they've given Canonical permission to sign things. But unless you disable secureboot, you can't run a custom kernel unless you convince Canonical or Microsoft to sign it.
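The shim -> GRUB -> kernel chain described above can be modeled in a few lines. This is a toy: the keyed hash below stands in for real Authenticode signatures checked against the UEFI db and MOK key lists, and the key names are illustrative.

```python
import hashlib

def sign(key: str, blob: bytes) -> str:
    """Stand-in for a real signature: a keyed hash over the binary."""
    return hashlib.sha256(key.encode() + blob).hexdigest()

def verify(key: str, blob: bytes, sig: str) -> bool:
    return sign(key, blob) == sig

MS_KEY, CANONICAL_KEY = "microsoft", "canonical"

shim, grub, kernel = b"shim binary", b"grub binary", b"custom kernel"

# Firmware only trusts Microsoft's key; shim carries the distro's key.
assert verify(MS_KEY, shim, sign(MS_KEY, shim))            # firmware runs shim
assert verify(CANONICAL_KEY, grub, sign(CANONICAL_KEY, grub))  # shim runs GRUB
# A custom kernel signed with neither key fails the chain:
assert not verify(CANONICAL_KEY, kernel, sign("my own key", kernel))
```

The last assertion is the crux: unless you disable Secure Boot or enroll your own key (e.g. via MOK), you can't boot a binary that neither Microsoft nor the distro has signed.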


How is this not an anti-trust issue?

Of course the government wants this capability to access anyone's system, so I assume nothing will be done. This has to be one of the worst things that has happened in the history of computing.

EDIT: Handy for CBP use, I imagine.


This has nothing to do with antitrust. Selling hardware doesn't require you to provide software or the ability to run your desired software.


The article states the following about RISC-V:

>"While this architecture is extremely limited in performance, price"

Can anyone say why the performance of RISC-V is so lacking?


Major manufacturers -- like Intel, AMD or ARM -- have spent years upon years developing methodologies, technologies, micro-architectural optimizations, and testing, development and evaluation infrastructure. This is the key to understanding how to navigate and ensure good performance across the many different trade-off and optimization domains in performant-processor design.

This is not easy to acquire or build. Some is wisdom from decades of design and iteration. Some is hundreds of thousands of engineering hours. Some is big money.

RISC-V is a new open-source project. Who knows how close they will ever get.


I suppose it's true for currently available chips that implement the RISC-V ISA.

Nothing says the ISA itself is a barrier to performance on par with popular existing processors though. The RISC-V BOOM implementation is supposed to be close to an ARM Cortex A9 in performance.


> Can anyone say why the performance of RISC-V is so lacking?

"Because no one has manufactured a high performance RISC-V implementation yet" isn't answer enough for you? All that exists for purchase at the moment are microcontrollers aimed at the ARM Cortex-M market niche.

There's nothing about the ISA that says you couldn't make a deeply pipelined, six-way issue implementation with three levels of cache running at 4 GHz. But that fact doesn't make such a machine appear from nothing either.


Many good answers already, but I want to mention RISC-V Boom (superscalar, out-of-order), which seems to, in theory, beat ARM Cortex A9 quite nicely:

https://www.youtube.com/watch?v=HVPSlS2v1F0

The whole toolchain they are working with looks quite awesome tbh.

Commercially, the Freedom U500 platform seems to be really interesting:

https://dev.sifive.com/documentation/freedom-u500-platform-b...


Nobody has invested the money and work to make it happen.

You can take the RISC-V ISA specification or ready-made Verilog/VHDL and produce a working ASIC chip relatively easily, but it won't be fast.

If you want a fast general-purpose chip that competes toe-to-toe with AMD and Intel, it will take a huge amount of chip and physical design, simulation and verification work. AMD has spent tens of millions to get a new x86-compatible architecture out. Doing the same for RISC-V without enough demand would be economic suicide.


Likely because there's no major consumer devices shipping them (or at least high-end versions of them) that could help them hit a scale that brings the cost down, which makes them less viable for a general public consumer standpoint, which means there's less time spent optimizing it.

I remember a while back when Google was shopping around for Intel replacements (likely a negotiation tactic), people were saying they should buy the POWER division from IBM (IIRC). That would have been really interesting...


>"I remember a while back when Google was shopping around for Intel replacements (likely a negotiation tactic), people were saying they should buy the POWER division from IBM (IIRC)."

Funny, I was speaking to some IBM engineers a few months ago and brought up the POWER chips, and they kind of laughed and said something to the effect that the biggest use case for POWER was Google using it as a means of keeping Intel pricing in check.


Am I missing something here?

> Secure Boot, which even now requires FOSS users to purchase a license from Microsoft to boot FOSS on affected machines that lack an appropriate Secure Boot override.

I recently installed rEFInd from source using a self-signed certificate (I signed the binary with it and enrolled the key into the EFI using mokutil) and it worked. I certainly didn't have to pay MS. I do know that rEFInd provides a key of their own (using the distro's shim) that obviously has trust rooted at MS.


Good to see mention of Talos, even if it never came to fruition.


Mentioned back in 2014 iirc?


(Article is from 2016.)


Even then, he's a little late to the party.


Is there evidence these have been used to harm anyone?

Not that I wouldn't like a world with no more blobs (or at least reproducible-build signed blobs). But I use a ton of software I don't have time to review. Why is solving this more important than, say, looking for RPC holes in docker?



