It absolutely could. The alternative is that Microsoft has complete control over what software can run on any Windows PC. It turns out that people who want this ("Man, I wish I could only run programs chosen by the monopoly supplier") already have an iPad, so this basically just creates a backlash.
I know what you're thinking: "Oh, well there could be an exception for when you need it, you'd just use admin to authorize it or something", and that's exactly what this is.
> The alternative is that Microsoft has complete control over what software can run on any Windows PC.
That would be the relatively little-known (and new) Windows 10 S, where only apps from the Windows Store can be installed or run. It's designed for security (?) and to compete with Chromebooks.
But the desirability is proportional to how well the store is moderated, and Microsoft's is complete shit: it's full of not-quite-malware and not-quite-scams.
> It turns out that people who want this ("Man, I wish I could only run programs chosen by the monopoly supplier")
Or people don't know/care. Or they weighed the comparative downsides of a controlled app platform versus the wild west and decided the controlled platform is less of a downside for what they want to do. Or lots of options, really.
Well, the same is true of any OS X release pre-High Sierra. Malware can disguise itself as an installer utility and make modifications to the system or install kexts after the user okays it (the keychain saves the password).
Linux makes you type in the password manually for every elevation, but that just pushes the average consumer to remove their account password or pick an unsafe one.
Sure, you can customize /etc/sudoers to your heart's content (see the sketch below), but that's not the point here. (Btw, on Windows you can use Group Policy to require a password at the UAC prompt and to limit users from making system-wide changes.)
The point is a secure and safe way for a common person to authorize an application that wants to make changes to the system, and as the defaults stand, sudo prompting for a password at every elevation attempt is the worse option in my opinion.
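To be concrete about the /etc/sudoers tuning mentioned above, a minimal sketch; the user "alice" and the updater path are hypothetical, and you'd edit this with visudo:

    # /etc/sudoers.d/alice-updater (hypothetical example; edit with visudo)
    # Let alice elevate one specific, trusted binary without retyping
    # her password; everything else still prompts as usual.
    alice ALL=(root) NOPASSWD: /usr/local/bin/updater
    # Or stretch the password cache (minutes) instead of dropping the prompt:
    Defaults timestamp_timeout=30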
Driver signing is not really about malware, but rather about Microsoft trying to enforce a bit of quality control on the shit-show that was driver development by no-name hardware manufacturers. If a driver is signed, I can look up the owner and bash him for his crap-quality code. Similarly, any manufacturer caught doing this sort of trick would be blacklisted.
This seems a better and more secure solution than buying an actual certificate.
If they had a single certificate and used it across devices, then the private key could be compromised and used to authenticate malware or pull off a man-in-the-middle attack on an HTTPS site, since your system now trusts this new CA. By generating the private key locally and throwing it away, you can be relatively confident that no one has the private key to this new root CA.
What's silly in the first place is that something not trusted to install unsigned drivers still has the perms to install a new root CA, but given that constraint, this is a better solution than what Savitech did.
I wouldn't trust anything closed-source to forget that private key.
I can't see how buying an actual cert could be riskier than installing a new root CA. The goal of signing is to ensure origin and prevent tampering: two failures in this case. So you may now have a tampered-with driver that doesn't remove the private key and uses the new CA to inspect your TLS traffic, and you wouldn't know.
> If they had an actual root CA with a private key, that private key could be compromised and used to authenticate malware or pull off a man-in-the-middle attack on an HTTPS site.
If they had an actual root CA with a private key, they'd do the signing locally (on the company machine). In no scenario would the company's private key be given to a customer (unless we're talking about Adobe).
1) Trust some company to keep a very important private key secure for a long time, with attackers knowing it's a single high-value target?
2) Or be confident that the private key was used once and destroyed forever? Even if the private key generated on your device could be recovered, it would only be good for an attack against you, making it a lower-priority target for attackers.
> Or be confident that the private key was used once and destroyed forever? Even if the private key generated on your device could be recovered, it would only be good for an attack against you, making it a lower-priority target for attackers.
Doing it that way completely undermines the reason for having a cert in the first place. You might as well not have one at all.
The difference is that with the on-the-fly cert, you blindly trust one piece of code at one point in time, and if it did not lie to you then, you will be safe from it later. A conventional cert owner, on the other hand, could theoretically turn on you at any time (e.g. when ownership turns into pwnership) once "automatic trust" for the next binary is established.
I'd still prefer the latter, given reasonable standards of key handling, but the one-time trust is not completely without merit. It would certainly be more reasonable, though, to just allow one-time blind trust without forcing the installer to create a certificate that may or may not be as private as advertised.
There's a difference. With auto-generated root certs, you can't just steal one private key, sign your malware with it, and push it to all users of the original software.
It is! If nefarius and the other ScpToolkit guys are listening (ScpToolkit is a popular toolkit for DualShock 3 and 4 controllers on Windows, see https://github.com/nefarius/ScpToolkit): this might also be an appropriate way to make their self-signed drivers work again on the latest Windows 10 update.
I think it will allow you to install the drivers, but with an extra red pop-up warning about unsigned drivers.
To get a clean, professional-looking installation, you've got to have a primary signature that chains up to a trusted root CA, and also a cross-signature: a Microsoft certificate used to sign your cert's root CA certificate.
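In practice that's one signtool invocation; a sketch, where the .pfx file, the cross-cert file, and the timestamp URL are placeholders for whatever your CA actually issued:

    signtool sign /fd sha256 /f yourkey.pfx /p yourpassword ^
        /ac your_cross_cert.cer /tr http://timestamp.example.com /td sha256 ^
        driver.sys

The /ac flag is what attaches the cross-certificate alongside your own chain.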
I'm not clear on the difference between a driver and a kernel module in the Windows world. As far as I could tell, we were talking about installing drivers, and the "Signature requirements for it to just work" seem to list just a SHA-2 cert from a trusted root CA for installing drivers, even for Windows 10.
Maybe there's something I'm misunderstanding here. I've set up Windows codesigning, but it was according to the specifications of the Windows devs; most of my own work has been in Linux.
Yeah, I can see why you're confused about the terminology. I'm not aware of anything called a "kernel module" in the Windows world (despite the name on that page); they're all just called "drivers". If they're for devices, they're called "device drivers"; if they're for filtering the file system, they're called "file system filter drivers"; etc. It seems like on that page they're referring to the actual binary as the "kernel module" and to the overall package as the "driver package" (not all of which is executable kernel-mode code).
The distinction being made across the columns is between installing the driver (i.e. putting it in the right directory and configuring everything so that it can be loaded) versus actually loading the driver (i.e. telling the kernel to execute the code immediately). They require different permissions, and you need to satisfy both for your driver to run. You can see that MCVR (the Microsoft Code Verification Root) is a requirement for loading a driver on newer Windows versions, i.e. you need trust from Microsoft, not just the user.
Now, as some people are pointing out, Microsoft also has a user-mode driver framework which doesn't seem to carry the kernel-module requirements. (On the other hand, it exposes more limited functionality.) So if you're writing a user-mode driver, you might not need trust from Microsoft. But that's not what I generally mean when I say "driver"... to me "driver" implies kernel-mode, or at least the union of the two. It certainly doesn't just refer to the user-mode kind.
> I'm not aware of anything called a "kernel module" in the Windows world
I ran into that last night. The MS documentation implied that all kernel-mode drivers in Windows were "loadable kernel modules".
Anyhow, thank you for the clarification. That's kind of the direction I was thinking, but it's nice to see a more concrete description.
I'd also suspected that there was a distinction similar to "kernel-mode driver" and "user-mode driver", but all I saw in Microsoft's documentation when I was looking last night were descriptions of the differences between bus, device, and filter drivers.
Reading with a clearer head now, I found some more clarifying material. VxD was the earliest Windows driver model, supplanted close to 20 years ago by WDM. On top of that is WDF, which sounds like it was first introduced just after WDM to complement it. And that's the one that has a separate Kernel-Mode Driver Framework and User-Mode Driver Framework.
From my background, "driver" always implied kernel-mode unless otherwise specified. I mean, I can write a user-mode driver on a Raspberry Pi (or what have you) to communicate with external hardware over the IO pins, but it's a clearly different process than writing a kernel module. For example, I could write user-mode driver code in Python and not have to worry about the internal workings of the kernel.
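For instance, a toy sketch of that kind of user-mode "driver", assuming a Linux box with the legacy sysfs GPIO interface, root privileges, and an arbitrary pin number:

    # Blink a GPIO pin from user space via sysfs: the whole "driver" is file I/O.
    import time

    GPIO = "/sys/class/gpio"

    with open(GPIO + "/export", "w") as f:
        f.write("18")                # ask the kernel to expose pin 18
    with open(GPIO + "/gpio18/direction", "w") as f:
        f.write("out")               # configure it as an output

    for _ in range(5):
        for value in ("1", "0"):     # toggle high, then low
            with open(GPIO + "/gpio18/value", "w") as f:
                f.write(value)
            time.sleep(0.5)

    with open(GPIO + "/unexport", "w") as f:
        f.write("18")                # release the pin

No kernel headers, no build step, no signing; the tradeoff is you only get whatever interfaces the kernel already exposes.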
> From my background, "driver" always implied kernel-mode unless otherwise specified.
Right, so it seems correct to say that you cannot load a driver using just a custom root certificate: you need it trusted via a root certificate from Microsoft, which (assuming that is correct) means much of this thread is wrong.
- generate a fake CA and use it to sign your driver on the fly (sketched below);
- add the generated root CA to the trusted list;
- delete the private key so that nobody else can sign anything with this CA;
- now Windows will happily consider this driver worthy of trust and install it.
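For the curious, the generate-then-discard part looks roughly like this in Python with the `cryptography` package (a sketch of the idea only: the actual Authenticode signing of the .sys file and the trusted-store insertion are done with Windows tools such as signtool and certutil, not shown here):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # 1) Generate a fresh key pair on this machine only.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # 2) Self-sign a root CA certificate with it (issuer == subject).
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Throwaway Driver CA")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    # 3) Persist only the certificate; the driver would be signed at this
    #    point, then the key simply goes out of scope, never written to disk.
    with open("throwaway_ca.cer", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.DER))
    # (On Windows: certutil -addstore root throwaway_ca.cer)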