The problem isn't that Windows requires drivers to be signed. The problem is that Windows allows drivers to have an expiration date. If Windows verifies a driver's signature at the time the driver is installed, the driver should be considered trustworthy for as long as it remains installed on that system. There's no reason to re-verify the signature every time the driver is used.
Recommended practice is to timestamp Windows drivers (and software) when they are signed. Without a timestamp, the driver is not trusted after the signing cert expires, which I guess is what happened here.
With a timestamp, as long as the signing date was within the signing cert's validity period, the signed driver continues to be trusted beyond the signing certificate's expiration.
That seems silly. Presumably a cert has an expiration date after which we might assume it's been compromised. If it has been compromised, then it could have been used to backdate a driver signed with it. In other words, if you don't trust the cert, you should not trust anything signed by it. Or is there another layer in this somewhere?
The timestamp server is a separate trusted entity that signs the signature, asserting the date and time. It's not just metadata; it's effectively a separate signature.
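In rough pseudocode, the trust rule looks something like this (a minimal sketch; `Cert` and `signature_trusted` are made-up names, not any real Windows API, and both signatures are assumed to have already been cryptographically verified):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Cert:
    not_before: datetime
    not_after: datetime

def signature_trusted(signing_cert: Cert, tsa_cert: Cert,
                      asserted_signing_time: datetime,
                      now: datetime) -> bool:
    # The timestamp authority's countersignature asserts *when* the
    # driver was signed. The signing cert only needs to have been valid
    # at that asserted time, not at load time -- this is why a
    # timestamped driver outlives its signing cert's expiration.
    signed_in_window = (signing_cert.not_before
                        <= asserted_signing_time
                        <= signing_cert.not_after)
    # The TSA cert's own validity is checked against the present,
    # since we're relying on its assertion *now*.
    tsa_ok = tsa_cert.not_before <= now <= tsa_cert.not_after
    return signed_in_window and tsa_ok
```

The key asymmetry is that the signing cert is checked against the asserted signing time, while the timestamp authority's cert is checked against the present.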
> Then you would need an internet connection just to install a driver.
If you think I'm proposing any changes to how drivers are installed, then you have misread me. I'm proposing a change to how already-installed drivers are handled: absent any new information, the code that was trusted yesterday should be trusted today, and be allowed to keep running.
Imagine a scenario where a driver is installed during a network outage and with an incorrect clock. Because you need to be able to install a network driver, the system will allow this security flaw. However, when the system knows better, it's reasonable to limit the damage by stopping the driver.
You could say that any damage has already been done, which is most likely true. But I can't fault them for mitigating it as much as possible.
I suppose you could modify the system to get external attestation of the time while the driver is installed and use that as a sticky bit, but it's a big complication and it's much better if the driver is securely timestamped in the first place.
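For what it's worth, that sticky-bit idea might look roughly like this (purely a sketch; `Driver`, `attested_good`, and `on_network_available` are hypothetical names, and the attested time would have to come from something like a signed NTS or Roughtime response, not the local clock):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Driver:
    id: str
    cert_not_before: datetime
    cert_not_after: datetime

# Hypothetical persistent store of first attested-good times: the "sticky bits".
attested_good: dict[str, datetime] = {}

def on_network_available(driver: Driver, attested_now: datetime) -> bool:
    """attested_now comes from a trusted external time source, not the
    local clock. Returns whether the driver may keep running."""
    if driver.id in attested_good:
        return True  # attested once already; trust is sticky
    if driver.cert_not_before <= attested_now <= driver.cert_not_after:
        attested_good[driver.id] = attested_now  # set the sticky bit
        return True
    return False  # cert already expired at first attestation: stop the driver
```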
> Because you need to be able to install a network driver, the system will allow this security flaw. However, when the system knows better, it's reasonable to limit the damage by stopping the driver.
The only way that the system "knows better" is by acquiring something like a certificate revocation list. The system does not know whether it was powered down for five minutes while the network outage was fixed, or for five years. When the system is powered back on with a working internet connection, it does not have any reliable way to tell whether the offline installation of the network driver occurred prior to the expiration, or after the expiration with a properly backdated driver and backdated system clock. There is no way to justify suddenly de-trusting a driver that's already been running simply by observing that you're in the future.
Even then you only need to verify that once, and can save a timestamp in case the cert is revoked afterwards. Breaking a system that has already been verified is still unjustified.
Can’t Microsoft give you an error report when they do this, to let you know what you are doing is probably very dumb?
I guess I don’t know if there’s a point when Microsoft has their code and their contact information and is doing some kind of preflight check, or if that ever actually happens, and there are already so many ways to be very dumb with drivers...
Part of what driver expiry does is to prevent attackers from trivially banking older vulnerable versions of drivers and using them to bypass kernel protections.
Checking at install time is effectively useless. The whole point of running signed code is that you can't just load some rootkit. Secure Boot only loads a signed bootloader which only loads a signed kernel which only loads signed kernel modules. You can't do what you're suggesting without fundamentally breaking this chain of trust. What's to stop a rootkit from just spoofing that it was installed months ago?
> What's to stop a rootkit from just spoofing that it was installed months ago?
The fact that if a rootkit is in a position to perform that spoofing, it doesn't need to, because it already has the power to make arbitrary modifications to the system image.
The whole point of signing everything from the bootloader on down is to make sure that even ring 0 control over the computer can't persist through a reboot. Allowing signatures to work the way that was suggested would break any hope of something like Secure Boot ever working. As it is, you're already trusting timestamping certificates to effectively live forever.
The signed kernel keeps track of when it first saw a certificate. That record is signed by the kernel, so a rootkit can’t spoof it unless the system is already compromised.
Even the kernel can't modify its own code and persist through a reboot. The kernel only loads signed code that isn't malicious, the bootloader only loads signed kernels that aren't malicious and don't allow you to run malicious code as ring 0, and the BIOS only loads signed bootloaders, etc. There's a root of trust from the hardware on down that makes sure that you cannot run unsigned code as ring 0 and if there's a compromise it can't persist through a reboot. Allowing the kernel to mark certain modules as "signed" like you're suggesting would allow a rootkit to install itself via some exploit. This would render moot the whole point of Secure Boot in the first place.
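The shape of that chain, very schematically (a toy sketch, simplified to a single root key and a hash in place of a real RSA/ECDSA verification; none of these names are real firmware APIs):

```python
import hashlib
from dataclasses import dataclass

# Toy stand-in for a real signature check; actual firmware would do an
# RSA/ECDSA verification against a key fused into hardware.
def toy_verify(key: bytes, payload: bytes, sig: bytes) -> bool:
    return hashlib.sha256(key + payload).digest() == sig

@dataclass
class Stage:
    name: str       # "bootloader", "kernel", "driver", ...
    payload: bytes  # the code image
    sig: bytes      # signature over the payload

def boot(root_key: bytes, chain: list[Stage]) -> bool:
    # Each stage is verified *before* it runs, and only an already-verified
    # stage gets to verify the next one. If any link rejects its successor,
    # boot (or driver load) stops right there.
    for stage in chain:
        if not toy_verify(root_key, stage.payload, stage.sig):
            print(f"refusing to run unsigned {stage.name}")
            return False
        print(f"verified {stage.name}, handing off")
    return True
```

The point is that no link in the chain ever takes another link's word for it after the fact: trust only flows forward, from code that was itself verified.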
I think that's better handled through a Certificate Revocation List (CRL), especially in this case where it's fairly easy to enforce and keep up to date.
CRLs are pretty difficult to scale resiliently, though, for a number of applications. Same problem that led to OCSP stapling after OCSP became a thing. With CRLs you can at least take advantage of a CDN of some kind, but there are tradeoffs with your ability to operate a CRL securely doing that, too.
A CRL check in the driver install flow implies being online (at some point) to install drivers too. As we move into the future it’s hard to imagine not having Internet access, but we also don’t design Windows. It’s definitely a case they’ve considered, though I did see mention of a timestamp server in this thread (I don’t know much about Windows signing, just X.509 PKI in general).
If by "malware defense" you mean preventing stolen expired certificates from being used to sign code, then yes. If you mean achieving that by only allowing code to be "signed" for the duration of the certificate, then no.
Is that true though? Could a malicious driver be signed with a compromised key and distributed? Seems like a useful feature to be able to mark drivers as compromised.
> There's no reason to re-verify the signature every time the driver is used.
I was replying to this part of your comment. It does seem worthwhile to validate the signature of the driver every time the driver is used if that check would reveal when a certificate has been revoked for having been compromised.
Agreed that the expiration time is not particularly useful for this purpose.
> It does seem worthwhile to validate the signature of the driver every time the driver is used if that check would reveal when a certificate has been revoked for having been compromised.
It would be much more efficient to scan the list of installed drivers every time a certificate revocation list is updated, because certificates are revoked much less often than operating systems are booted.
And there's nothing gained by just checking timestamps if you don't have a new certificate revocation list. If the driver is already installed and was trusted and running yesterday, you gain no security by deciding to not load and run that driver today, unless overnight you acquired new information that the driver is insecure or malicious. The ticking of a clock does not convey any such information.
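Concretely, the event-driven version might look like this (a sketch with made-up names; not how Windows actually wires it up):

```python
from dataclasses import dataclass

@dataclass
class InstalledDriver:
    name: str
    cert_fingerprint: str
    enabled: bool = True

def on_crl_update(installed: list[InstalledDriver], revoked: set[str]) -> None:
    # React to new information (a fresh CRL) instead of re-validating on
    # every boot: CRLs change far less often than machines reboot, so this
    # is the cheaper place to hang the check.
    for drv in installed:
        if drv.cert_fingerprint in revoked:
            drv.enabled = False  # unload / blocklist the driver
```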