In all honesty, I'm waiting for a "Trusted Computing Chip" to have a key that can be set and reset.
I'd like to be able to authoritatively say that a server I colo/rent is indeed mine. And with trusted computing applied in a different context, I could guarantee that the machine is running my code.
And with it turned on to protect the file system, I could guarantee that the computer is a bastion and that no foreign code I didn't approve is running. Turning on SELinux then provides another layer, where syscalls are blocked depending on RBAC level.
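The mechanism that would let me make that claim is remote attestation: the chip extends a hash chain of everything booted into its PCRs and signs a quote over the result. A toy sketch of just the hash-chain part (component names and the comparison are made up for illustration; a real TPM does the extending in hardware):

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # TPM PCR extend: new value = SHA-256(old PCR || measurement).
        # The register can only be extended, never rewound, so the final
        # value commits to every component measured during boot.
        return hashlib.sha256(pcr + measurement).digest()

    def boot_chain(components: list[bytes]) -> bytes:
        pcr = bytes(32)  # PCRs reset to all zeros at power-on
        for c in components:
            pcr = extend(pcr, hashlib.sha256(c).digest())
        return pcr

    # The "golden" value I compute at home for the exact stack I intend to run:
    golden = boot_chain([b"my-firmware", b"my-bootloader", b"my-kernel"])

    # The value the remote machine reports (in reality wrapped in a quote
    # signed by a key that never leaves the TPM, so the colo provider
    # can't forge it). Here we just simulate an unmodified boot:
    reported = boot_chain([b"my-firmware", b"my-bootloader", b"my-kernel"])
    print("machine is running my code:", reported == golden)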
I could certainly see a whole lot of nice high-security machines made available if AMD went the open route on this... But alas, these things usually only mean security from the user, for the benefit of media companies. Meh, is all I can say.
> In all honesty, I'm waiting for a "Trusted Computing Chip" to have a key that can be set and reset.
The "trusted" in "trusted computing" is used as it is defined in IT security (but not in the sense how an "ordinary user" understands it).
In this case "trusted" means that one can trust the computation since the user has no option to change it/have control over it. Because if they had, we could not trust the result of the computation since the user might have modified something etc.
In other words: If the user has such a freedom, we cannot trust the computation, but we have to trust the user.
Thus "trusted computing" that gives freedom to the machine owner is impossible.
If you buy a PC or laptop, this is often not what you want (at least for the HN audience). IOW: in most cases here you don't want trusted computing.
On the other hand, if you rent a server, you would love the assurance that the data center provider cannot intercept or modify what you do on the server. In this situation you consider the machine owner (the datacenter operator) potentially malicious, and you want technology that lets you trust the computation.
And I would accuse you of thinking along the narrow lines of "TPM for media companies to control your hardware". There are other ways to go about this.
Think of the ORWL. It's a heavily hardened crypto system that will delete its keys if it's tampered with (heat, cold, shock, pressure, penetration). If I hand those keys to someone else, they have control over the software installed. If they lose the keys, they're out of luck. If the device thinks it's been tampered with, they're out of luck.
Now, this may be installed in my datacenter. Awesome. If I mess with it, it's easy to detect, because everything auto-deletes on tamper. So it is my device by possession, but someone else controls it, as I chose, for as long as they hold the keys.
This would be a great way to run secure systems, as I mentioned further up. Yes, I can run it in a datacenter, but I don't have to trust the datacenter. It's not impenetrable, but it's a heck of a lot better than other solutions.
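Conceptually the tamper response is just a tight loop between the sensors and the key store. A purely illustrative toy of that logic (sensor names and thresholds are invented; the real ORWL runs this on a dedicated battery-backed secure microcontroller):

    import os

    # Hypothetical sensor limits; the real device watches temperature,
    # shock, pressure and mesh-penetration sensors even on battery.
    LIMITS = {"temp_c": (0, 70), "shock_g": (0, 10), "mesh_ohms": (90, 110)}

    key_material = bytearray(os.urandom(32))  # disk-encryption key in SRAM

    def zeroize(key: bytearray) -> None:
        # Overwrite the key in place; with the key gone, the encrypted
        # disk is unrecoverable, which is what makes tampering easy to
        # detect: a molested unit simply won't come back up.
        for i in range(len(key)):
            key[i] = 0

    def check(readings: dict) -> None:
        for name, value in readings.items():
            lo, hi = LIMITS[name]
            if not lo <= value <= hi:
                zeroize(key_material)
                raise SystemExit(f"tamper event on {name}: keys destroyed")

    check({"temp_c": 25, "shock_g": 0.1, "mesh_ohms": 100})  # normal
    check({"temp_c": 25, "shock_g": 0.1, "mesh_ohms": 30})   # probe cut the mesh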
(Yes, I'm still waiting for reasonably fast homomorphic encryption and processing. But I know that'll be a long time away. Even if Signal is doing some work in that area, the processing side is still about 7000x too slow.)
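For anyone wondering what homomorphic encryption buys you: the server computes on data it can never read. A toy additively homomorphic Paillier example, with absurdly small demo primes that no one should use for anything real:

    from math import gcd, lcm
    import random

    # Toy Paillier cryptosystem (additively homomorphic). Demo-sized
    # primes only; real deployments use 2048-bit moduli.
    p, q = 17, 19
    n, n2 = p * q, (p * q) ** 2
    lam = lcm(p - 1, q - 1)   # private key
    g = n + 1                 # standard choice of generator
    mu = pow(lam, -1, n)      # with g = n + 1, mu = lam^-1 mod n

    def encrypt(m: int) -> int:
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c: int) -> int:
        return (pow(c, lam, n2) - 1) // n * mu % n

    # The homomorphic property: multiplying ciphertexts adds plaintexts,
    # so an untrusted server can sum values it never sees in the clear.
    a, b = encrypt(20), encrypt(22)
    assert decrypt((a * b) % n2) == 42
    print("E(20) * E(22) decrypts to", decrypt((a * b) % n2))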
I don't get it. This would not increase trust, since we cannot check whether the source actually corresponds to the binary in the processor. It would need to be reproducibly buildable, with the result matching the signed default blob.
They could release a signed NO-OP firmware that is so small it would be easy to verify as benign using a disassembler. The potential for more secret code is always there, but being that malicious would be highly damaging to the company.
Anyway, surely "this would not increase trust" is too strong a statement. Even if AMD didn't release reproducible build instructions, having the source code would still make it significantly easier to detect AMD shipping different (compiled) code.
But yeah, open sourcing the PSP without reproducible builds would be a little silly.
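The check that reproducible builds would enable is embarrassingly simple. A sketch, assuming AMD published the source plus a deterministic build recipe (the paths and build command here are entirely hypothetical):

    import hashlib
    import subprocess

    def sha256_file(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Hypothetical: rebuild the PSP firmware from the released source
    # with the documented deterministic toolchain...
    subprocess.run(["make", "-C", "psp-firmware-src", "psp.bin"], check=True)

    # ...then compare against the blob actually shipped in your BIOS
    # image. Any divergence means the shipped binary does not match
    # the published source.
    built = sha256_file("psp-firmware-src/psp.bin")
    shipped = sha256_file("extracted/psp_blob.bin")
    print("shipped blob matches published source:", built == shipped)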
Releasing the PSP source code would AFAIK not change a thing. While it might be an interesting read, the PSP (i.e. AMD) would still remain in control of the platform. The PSP does NOT rely on code obfuscation. It is a much deeper architectural problem with the PSP than the fact that it ships as a binary blob.
Xiaoyu Ruan, the designer of the Intel Management Engine (the mother of the PSP), wrote in his book "Platform Embedded Security Technology Revealed", in chapter 4:
"By design, the firmware binary should not contain secrets, and hence it is not encrypted or obfuscated in any form. Note that lossless compression may be applied to the code. The firmware binary, in its compression form, is stored on SPI flash in cleartext. At runtime, the code segment is not encrypted when it is paged out to DRAM. Admittedly, advanced hackers have successfully reverse-engineered and disassembled the engine’s firmware binary. However, knowledge of source code is not deemed a harmful threat, because no secrets or keys are ever hardcoded in the code, and the architecture and robustness of the engine does not rely on security through obscurity."
Further in chapter 11:
"Hardware root of trust: Binary code and the data of firmware components are stored in the flash memory in the clear. Encryption is not used because the security architecture does not rely on security through obscurity. The concept of hardware root of trust contains two folds: first, the root of trust for integrity is a hardware ROM (read-only memory). Unlike the firmware in the flash memory, the binary of ROM by design is not available externally. Although, even if the code of ROM is leaked, the security of the engine should not be impacted; second, the EPID (enhanced privacy identification; see Chapter 5 for details) private key and other chipset keys are burned into the engine’s security fuse block in Intel’s factory. These keys comprise the root of trust for confidentiality and privacy for the engine."
So IMHO AMD should be asked to produce a chip without the PSP, or to offer the possibility to disable it. As long as there is a PSP on the system it cannot be fully controlled by the user, even if its source code is known. It is a small autonomous computer with its own CPU, RAM, ROM, clock etc. that has fully privileged access to the system's components and can load and run code at any time (so it can run code other than the published firmware source).
Nevertheless, I do see a good side to these kinds of petitions and requests: they show AMD that there is interest in the subject.
>As long as there is a PSP on the system it cannot be fully controlled by the user, even if its source code is known.
Why can't you have self-signed PSP/ME firmware? Why is this not similar to the way Android handles bootloaders? I.e., you either have the OEM's keys in the chain, or you have your own. If your employer owns the machine, they do whatever they want. If you own the machine, you do whatever you want.
Why is it an illusion if you have the source? You get the source, build it, sign it, and flash it. You can't verify that the CPU will only accept your signed firmware and not something else under special circumstances, but you can't verify those sorts of cases in hardware anyway. The CPU is a black box at the end of the day.
Edit: Disabling becomes a special case of flashing in this case. Actual hardware disabling will likely never happen, because of all the people who use the functionality. The best you can hope for is flashing whatever you want (including NOPs).
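The owner-key model I'm describing is trivial at the crypto level. A sketch with Ed25519 via the pyca/cryptography package (the firmware blob and the fuse/flash steps are stand-ins; the part you can't inspect is whether the CPU really honors only the burned-in key):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )
    from cryptography.exceptions import InvalidSignature

    # Owner generates their own signing key instead of trusting OEM keys.
    owner_key = Ed25519PrivateKey.generate()
    owner_pub = owner_key.public_key()  # a hash of this would be fused into the CPU

    # "Build it, sign it, flash it": sign the firmware you compiled yourself.
    firmware = b"\x90" * 4096  # stand-in blob; even all-NOPs would do
    signature = owner_key.sign(firmware)

    # What the boot ROM would do on every boot before running PSP code:
    try:
        owner_pub.verify(signature, firmware)
        print("firmware accepted: signed by the machine owner")
    except InvalidSignature:
        print("firmware rejected: refuse to boot")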
There's nothing stopping a PSP/IME from waking up on a magic packet, executing code that doesn't exist in flash (say, it is cleverly crafted into what looks like dummy transistors used to ensure a uniform metallization layer), downloading new firmware from a C&C server, and re-flashing itself.
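(For context, a "magic packet" is nothing exotic: it's the Wake-on-LAN frame, six 0xFF bytes followed by the target MAC repeated sixteen times, forgeable by anyone who can get a packet to the NIC:)

    import socket

    def magic_packet(mac: str) -> bytes:
        # Wake-on-LAN payload: 6 bytes of 0xFF, then the MAC repeated 16x.
        mac_bytes = bytes.fromhex(mac.replace(":", ""))
        return b"\xff" * 6 + mac_bytes * 16

    # Broadcast it over UDP; anything listening for this pattern wakes up.
    pkt = magic_packet("aa:bb:cc:dd:ee:ff")
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(pkt, ("255.255.255.255", 9))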
If you want to actually own the hardware, you'd probably need a custom chip layout, all done by hand on a planar node, so it's easily verifiable by a third party with relatively inexpensive tools; things like dopant-level attacks assume physical access to the machine, as far as I know.
If you assume that a fab could be hostile, you'd have to zero the foundry attack surface.
This hypothetical chip would be dog slow and uncompetitive, but by god, it'd be yours.
A company like AMD isn't going to be concerned with customer privacy and security unless it affects their bottom line. If the market were hostile to closed drivers and processor firmware, no one would bother selling these types of systems, because it wouldn't make sense financially.
The consumers have spoken: they're OK with trusting computer manufacturers, software vendors, telecommunications carriers and governments. To me, and perhaps other people interested in security, that's unfortunate, but device manufacturers are simply giving the market what it wants.
The customers were given no choice. Either upgrade to new hardware with closed blobs or fall behind your competitors.
AMD is in the unique position of having price competitive hardware (per core anyway) in a market where they have effectively zero market share. Big players are going to be switching anyway. If there's no technical reason not to open up the boot process then why not tack on the value add? A few million USD in engineer hours and in a few generations the server market might flip back towards the good guys.
I can see a situation where an entity demands full hardware trust, and even if Intel outperforms, the trust is worth it.
Even if AMD doesn't, and I wouldn't hold my breath, we'll likely see a smaller market emerge for open chips by way of RISC-V and the like.
> The consumers have spoken: they're OK with trusting computer manufacturers, software vendors, telecommunications carriers and governments.
Please be careful with statements like this! With the wrong speaker, they can justify all kinds of evils.
I'd be willing to wager some high-90s percentage of people buying electronics don't even consider privacy at the hardware level, because they don't have the background knowledge to even ask those questions in the first place.
That does not mean that consumers are "OK" with it, any more than consumers care about dihydrogen monoxide in their food until they're told about it.
Have they released an open source implementation of their Vulkan driver, or did I dream that? And didn't Nvidia promise something similar? Has anything come out of those promises?
No, they never released it. They have said several times that they are working on opening it, though. They already replaced their previous shader compiler with an LLVM-based toolchain, for example, but there is more work to do than that.
Meanwhile, radv is advancing pretty well. AMD never clarified what their strategy is in regard to radv, i.e. whether they plan to complement it with their own Vulkan driver, or intend to release theirs as a second open one.
> It is kinda silly that ECC support is something you have to pay extra for,
As far as I know, it is available in every Ryzen processor.
> Why not just leave it out and use it as an up-sell for corporate users?
One trivial argument might be cost (indeed, I consider it plausible that it is cheaper to just produce one version of the chip).
But I also consider it plausible that there is something deeper behind it: Intel vPro (of which AMT is a part) is also used for anti-theft (cf. [1]). Don't ask me about the details of how this is implemented - at least to my knowledge the details are not trivial to find, and Intel is rather secretive about anything that involves security. Now if only some Intel processors implemented the Intel ME, Mallory ([2]) could perhaps simply replace the processor with one that does not implement the Intel ME to circumvent the Anti-Theft Technology (or at least this is something Intel fears). I consider it rather plausible that something related to this might also be a reason why Intel does not sell processors without the Intel ME (but consider this paragraph speculative).