The "Universal ADB Driver" for Android devices[1] also installs a root CA, however it instead generates the CA during install, signs the driver, deletes the private key, then installs the CA and driver.
When I was upgrading my HTPC from Windows 8 to 10 I also ran into that annoying protection. I used the Cmedia BitPerfect drivers on my Windows 8 machine (https://code.google.com/archive/p/cmediadrivers/wikis/Bitper...) to get almost perfect DTS throughput, but couldn't use them on Windows 10 because they weren't signed. The Leshcat people kindly provided me with a signed version.
I think kernel mode drivers have more stringent signing requirements than user mode drivers. A user-installed CA definitely cannot be used to silently install a kernel mode driver.
You need to install a user-mode driver to leverage already existing kernel code with your device.
I assume this is what ADB does, using the Microsoft-provided WinUSB kernel-mode driver and associating it with your phone's USB vendor and product ID. There's not a single line of code in such a driver, just some INF descriptors.
There might also be different forms of user mode driver, not sure how they work.
> You need to install a user-mode driver to leverage already existing kernel code with your device.
You mean a user-mode application? A user-mode driver is something you write instead of a kernel-mode driver (when it's possible), not on top of it. (?)
I'm already aware of this and I don't understand how this answers the question in the comment you replied to. Maybe you meant to reply to a different comment?
Yes, a user-mode driver is something you would write instead of a kernel-mode driver, if you can. Kernel-mode code is the most powerful, but also poses the greatest security and stability threat to the computer, so Microsoft locks it down the hardest. If you don't need the extra power, you can write a user-mode driver that uses Microsoft-provided kernel components (like Winusb.sys) and you don't have to go through the same security procedures.
It should be possible, but be like Chrome Extensions where there's an entirely different flow to install a non-WebStore extension. Though perhaps Chrome's doesn't provide enough warning...
It absolutely could. The alternative is that Microsoft has complete control over what software can run on any Windows PC. It turns out that people who want this ("Man. I wish I could only run programs chosen by the monopoly supplier") already have an iPad so this basically just creates a backlash.
I know what you're thinking "Oh, well there could be an exception for when you need it, you'd just use admin to authorize it or something" and that's exactly what this is.
> The alternative is that Microsoft has complete control over what software can run on any Windows PC.
That would be the relatively little known (and new) Windows 10 S, where only apps from the Windows Store can be installed or run. Designed for security (?) and to compete with Chromebooks.
But the desirability is proportional to the level of moderation of the store and Microsoft's is complete shit: it's full of not-quite-malware and not-quite-scams.
It turns out that people who want this ("Man. I wish I could only run programs chosen by the monopoly supplier")
Or people don't know/care. Or weighed the comparative downsides of a controlled app platform versus the wild west, and decided the controlled platform is less of a downside for what they want to do. Or lots of options, really.
Well, the same is true of any OS X release before High Sierra. Malware can disguise itself as an installer utility and make modifications to the system or install kexts after the user okays it (the keychain saves the password).
Linux makes you type in the password manually every time for elevation, but that will just make the average consumer remove passwords or use unsafe ones for their accounts.
Sure, you can customize /etc/sudoers to your heart's content, but that's not the point here. (btw, you can use group policy to enforce a password at the UAC prompt and limit users from making system-wide changes to Windows).
The point is about a secure and safe way for a common person to authorize an application that wants to make changes to the system, and as the defaults stand, sudo prompting for a password at every elevation attempt is worse, in my opinion.
Driver signing is not really about malware, but rather about Microsoft trying to enforce a bit of quality control on the shit-show that was driver development by noname hardware manufacturers. If a driver is signed I can look up the owner and bash him for his crap quality code. Similarly, any manufacturer caught doing this sort of trick would be blacklisted.
This seems a better and more secure solution than buying an actual certificate.
If they had a single certificate and used that across devices then the private key could be compromised and used to authenticate malware or pull off a man in the middle attack on an HTTPS site since your system now trusts this new CA. By generating the private key locally and throwing it away, you can be relatively confident that no one has the private key to this new root CA.
What's silly in the first place is that something not trusted to install unsigned drivers still has the perms to install a new root CA but given that constraint, this is a better solution than what Savitech did.
I wouldn't trust anything closed source to forget that private key.
I can't see how buying an actual cert could be more risky than installing a new root CA. The goal of signing is to ensure origin and prevent tampering: two fails in this case. So now you may have a tampered-with driver that doesn't remove the private key and uses the new CA to inspect your TLS traffic, and you wouldn't know.
> If they had an actual root CA with a private key, that private key could be compromised and used to authenticate malware or pull off a man-in-the-middle attack on an HTTPS site.
If they had an actual root CA with a private key, they'd sign it locally (on the company machine). In no scenario would the company's private key be given to a customer (unless we're talking about Adobe).
1) Trust some company to keep a very important private key secure for a long time? (with attackers knowing it's a single high-value target)
2) Or be confident that the private key was used once and destroyed forever? Even if the private key generated on your device could be recovered it would only be good for an attack against you making it a lower priority to attackers.
Or be confident that the private key was used once and destroyed forever? Even if the private key generated on your device could be recovered it would only be good for an attack against you making it a lower priority to attackers.
Doing it that way completely undermines the reason for having a cert in the first place. You might as well not have one at all.
The difference is that with the on-the-fly cert, you blindly trust one piece of code at one point in time, and if it did not lie to you then, you will be safe from it later. A conventional cert owner, on the other hand, could theoretically turn on you at any time (e.g. when ownership multiplies into pwnership) once "automatic trust" for the next binary is established.
I'd still prefer the latter, given reasonable standards in terms of key handling, but the one-time trust is not completely without merit. It would certainly be more reasonable though to just allow one-time blind trust without forcing the installer to create a certificate that may or may not be as private as advertised.
There's a difference. With auto-generated root certs you can't just steal one private key, sign your malware with it and push it to all users of the original software.
It is! If nefarius and the other ScpToolkit guys are listening (a popular toolkit for Dualshock 3 and 4 controllers on Windows, see https://github.com/nefarius/ScpToolkit): this might also be an appropriate way to make their self-signed drivers work again on the latest Windows 10 update.
I think that it will allow you to install the drivers, but have an extra red pop-up with a warning about unsigned drivers.
To get a clean, professional-looking installation, you've got to have a primary signature that chains up to a trusted root CA, plus a cross-signature, i.e. a Microsoft cert used to sign that root CA's certificate.
I'm not clear on the difference between a driver and a kernel module in the Windows world. As far as I could tell, we were talking about installing drivers, and the "Signature requirements for it to just work" seem to list just a SHA-2 cert from a trusted root CA for installing drivers, even for Windows 10.
Maybe there's something I'm misunderstanding here. I've set up Windows codesigning, but it was according to the specifications of the Windows devs; most of my own work has been in Linux.
Yeah I can see why you're confused about the terminology. I'm not aware of anything called a "kernel module" in the Windows world (despite the name on that page). They're all just called "drivers". If they're for devices, they're called "device drivers"; if they're for filtering the file system, they're called "file system filter drivers", etc. It seems like on that page they're referring to the actual binary as the "kernel module" and to the overall package as the "driver package" (not all of which is executable kernel-mode code).
The distinction being made across the columns is between installing the driver (i.e. putting it in the right directory and setting up the settings and everything so that it can be loaded) versus actually loading the driver (i.e. telling the kernel to execute the code immediately). They require different permissions. You need to satisfy both for your driver to run, and you can see that MCVR is a requirement for loading a driver on newer Windows versions, i.e. you need trust from Microsoft, not just the user.
Now as some people are pointing out, Microsoft also has a user-mode driver framework which doesn't seem to have the requirements of the kernel module. (On the other hand, it exposes more limited functionality.) So if you're writing a user-mode driver then you might not need trust from Microsoft. But that's not what I generally mean when I say "driver"... to me "driver" implies kernel-mode, or at least the union of the two. It certainly doesn't just refer to the user-mode kind.
> I'm not aware of anything called a "kernel module" in the Windows world
I ran into that last night. It was implied in the MS documentation that all kernel-mode drivers in Windows were "loadable kernel modules".
Anyhow, thank you for the clarification. That's kind of the direction I was thinking, but it's nice to see a more concrete description.
I'd also suspected that there was a distinction similar to "kernel-mode driver" and "user-mode driver", but all that I saw in Microsoft's documentation when I was looking last night were the descriptions of the differences between bus, device, and filter drivers.
Reading with a clearer head now, I found some more clarifying material. VxD was the earliest Windows driver model, supplanted close to 20 years ago by WDM. On top of that is WDF, which sounds like it was introduced just after WDM to complement it. And that's the one that has a separate Kernel-Mode Driver Framework and User-Mode Driver Framework.
From my background, "driver" always implied kernel-mode, unless specifically specified. I mean, I can write a user-mode driver on a Raspberry Pi (or what have you) to communicate with external hardware over the IO pins, but it's a clearly different process than writing a kernel module. For example, I could write user-mode driver code in Python, and don't have to worry about the internal workings of the kernel.
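For illustration, this is roughly what such a user-space "driver" can look like on a Pi; a minimal sketch assuming the RPi.GPIO library and an arbitrary pin, with no kernel internals involved:

    # A tiny user-space "driver" for a device wired to a Raspberry Pi GPIO pin.
    # Assumes the RPi.GPIO library; the pin number is arbitrary.
    import time

    import RPi.GPIO as GPIO

    DATA_PIN = 18  # BCM numbering; whatever pin your device is wired to

    def pulse(width_s=0.01):
        """Send one pulse to the external device -- all from user space."""
        GPIO.output(DATA_PIN, GPIO.HIGH)
        time.sleep(width_s)
        GPIO.output(DATA_PIN, GPIO.LOW)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(DATA_PIN, GPIO.OUT)
    try:
        for _ in range(5):
            pulse()
            time.sleep(0.1)
    finally:
        GPIO.cleanup()  # the stock kernel GPIO driver does the low-level work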
> From my background, "driver" always implied kernel-mode, unless specifically specified.
Right, so it seems correct to say that you cannot load a driver using just a custom root certificate. You need it trusted by a root certificate from Microsoft, which (assuming that is correct) means much of this thread is wrong.
I am not saddened by this event, but by the fact that such occurrences will only add momentum to the movement to lock down computing devices and take freedom away from their users:
Those worrying about security should remember that device drivers already run in ring 0 and can do anything they damn well please.
Thus I say: good on Savitech for not being afraid to rebel against it; and fuckings to the corporatocracy that is certificate authorities and the authoritarian security industry.
I am with you here, as I've been for many years (you link to a comment of yours that links to a comment of mine, to that effect). I'm even fond of saying, "security vs. fun - pick one". But I'm increasingly starting to understand the arguments from the other side.
Consider: what I consider an essential "fun" of computing is being able to alter software running on my machine as I see fit. If I want to make it so that Windows Notepad is pink, or supports Emacs shortcuts, I should be able to mess with both the binary on my hard drive and the running process in memory, because it's my computer and my rules. But the same mechanisms allow an evil person to make my mother's Notepad look like her e-mail account login screen and exfiltrate data from that. I dream of having an OS as malleable and tightly integrated as Lisp Machines were, but I wouldn't dare connect it to the Internet these days.
So what can one do? How to approach it? Is there even a way to create a computer that both respects the end-user as its rightful owner and can be safely used to conduct business and pleasure on-line? I honestly don't know if this is even possible in principle. If it is, I would appreciate being pointed towards possible solutions, because this - I believe - is a case worth fighting for.
I like the approach of Chromebooks. Safe and secure in its default state. Can switch to "dev mode" which gives you a less secure but much more open system. And then can open the case, remove the write-protect screw, and have a fully open system. If any problems develop, or you simply want to go back to a locked-down secure state, you can very easily reset the machine to its original configuration.
My personal approach to the problem is multiple devices. Linux laptop and Windows desktop for my open systems. iPhone and Chromebook for when I don't want to worry.
Chromebooks are close, but one should be able to keep their changes and reactivate the security screw so no further changes may be made. Simply put, manual switches to protect UEFI/BIOS and TPM changes are a better move than the all-or-nothing approach the Chromebooks take.
>So what can one do? How to approach it? Is there even a way to create a computer that both respects the end-user as its rightful owner and can be safely used to conduct business and pleasure on-line?
Take a look at Qubes. It virtualizes almost everything that is done on the system and it has a very solid security model. An example: your banking VM can be clearly marked and distinct from the (potentially one-time) VM you used to open that dodgy-looking email attachment.
The problem is that “end-user” means two very different things. Most hackers still think of “end-user” as you and me (note the answers you’re getting), and this is why Free Software is losing battles left and right, particularly in mobile. Stallman’s comments re: iPhone, for example, demonstrate that as an industry and advocacy group we still don’t really understand this.
We have to accept that those who need rigid, inflexible computing to protect them far outnumber us. People don’t care if they can rewrite Notepad or read its source code, they care that Facebook works and that they don’t get viruses or added to a botnet. The only way to develop a healthy advocacy here is to understand that the hacker ideals and customizability that we expect of a computing system really make us a vanishing minority and acknowledging that for the now-average user, those ideals make less and less sense as time goes on. We had our run, then everybody else found computers. Times change. It’s not bad.
Is there a way to create your computer? For us, probably. For them, I’m increasingly believing it isn’t. This isn’t a knock against anyone, just an acknowledgment that there are almost certainly two answers to this question and Free Software ideals and beliefs aren’t equipped to handle the much, much larger answer. Proprietary operating systems, walled gardens, Internet centralization, it plays toward all of the ideals Free Software has been holding dear for decades. We have to evolve our thinking, I’m afraid. The less we acknowledge that perhaps Free Software is wrong for the average user, the less we will have a voice at the table; eventually, nobody will listen at all.
Hell, many cars don’t even allow you to work on them any more. Look at Teslas, higher-end Audis, etc. I offered to change my neighbor’s oil in his Audi and he got scared about his warranty.
It's good you brought up cars, as they're a perfect example of conflicting needs. Cars are now computers on wheels, and while I'd love to drive a car I personally modded at firmware level, I would also be against allowing such cars on public roads. One, hackers make mistakes, and two, malicious actors would trick regular people into modding their cars to further their malicious goals. Both reasons create public hazard. Which is obvious in case of cars, but less obvious with computers connected to the Internet.
At this point I fear we might need to fork computing entirely - let the regular users live in the "hell" of proprietary, locked-down services they don't actually own, while we get the "heaven" of free software... that's pretty much not allowed to interact with regular users. I don't see how to keep the two worlds as one, because all features meant for hackers can also be used by malicious actors to pwn people.
Consider e.g. dev console in a web browser. All is cool, because regular users don't know what F12 does and wouldn't even think of pressing it. But then Facebook and others have to put Self-XSS measures into place, because a malicious actor can tell a regular user what F12 does, and how it "can" let them see who viewed their Facebook profile...
I hate the idea of split world. It means I won't be able to e.g. automate my banking or pizza delivery, because those things will have to go through "dumb" computing, to avoid self-pwnage risks. It means that eventually I won't be able to even get a general-purpose computer, because the nature of niche markets is that they generally don't get served with good stuff at accessible prices - they either get served at exorbitant prices, or don't get served at all.
So I don't want that split world. I want an alternative to fight for. But as I previously wrote, I can't see any.
I am of two minds about such a split world, as all too often it means that when a geek is called on to fix something for the rest of us, they invariably can't because they can't get the right access.
Never mind that I fear the black hats of the world will always find a way to break the sandbox of the "safe" computers, and thus we are effectively fucked. Because now they have access on a level the rest of us do not, and thus we can't counter their actions.
BTW, I do believe Cory Doctorow has done a couple of speaking tours on this under the titles "The coming war on general purpose computing" and "The coming civil war on general purpose computing".
The first being about government demands for a computer that does everything except some naughty action, and the second about well-meaning geeks locking down computers to make them "safer".
We already do split the world, if you think about it. Despite looking and behaving alike, servers and workstations/phones/etc. diverge quite dramatically. You manage them differently, interact with them differently, use them for different purposes, and so on. Every hacker can relate to this: "don't use root for day-to-day work." Well, do I need root on my workstation? Maybe not. Do I need root on my server? Yes.
I think split world is livable if that divergence is embraced. Running a Xeon server as a workstation is something a lot of developers do. As clients get more locked down, it's probably the direction we'll have to go, and really differentiating what makes a "server" from a "client".
> have to accept that those who need rigid, inflexible computing to protect them far outnumber us. People don’t care if they can rewrite Notepad or read its source code, they care that Facebook works and that they don’t get viruses or added to a botnet
And yet the epsilon between "Facebook" and "malware" continues to shrink. I'd personally rather have my phone ownt by an impersonal botnet that's just going to send out some "growth hacking" spam or look for bank passwords, than by a person-targeting adversarial application that is designed for the purpose of creating a long-term surveillance profile on me. But of course everyone is free to choose their own kinks!
What we have to accept is that "people" actually have no fucking clue what they want. Advertising tells them, and social proof reinforces it. What the Free Software Foundation doesn't "get" is money, as in enough money to carpet bomb their own popularity into public consciousness. While promoting end-user freedom creates massive economic wealth, capturing that wealth predictably is inherently impossible.
Having said that, the surveillance industrial complex will certainly continue catering to this mythical "average user", as they gain when people self-identify into this group and thus resign themselves to sharecropping. But it's foolish to buy into their disempowering least-common-denominator-database-row uniformity paradigm, any more than we need to pretend to do so to eat. The only reason the Free world actually needs to care about the next generation of GoogAppAzon middlemanning is that economics force us to piggyback onto their hardware.
I’m sorry, no. If the FSF had a billion dollar budget, you’d still lose people immediately at the first instruction to open a terminal and become root. Your comment actually illustrates the exact point I’m making: the epsilon between us and them is widening, because the loudest voices want people to look at computing the same way our heroes of the ‘70s and ‘80s did and don’t realize this is unproductive. Freedom is too often equated with the way we have done things before.
You or I buy a phone and look for a terminal. They buy a phone and go straight to the App Store to install Facebook. And yes, people know what they want. They are on Facebook because that’s what they want. Look at pictures of my newborn, group of friends. Oh, all my friends are on Facebook and it lets me do that. What’s that, GNU Social? I need to figure out what? I need a server? Mail is decentralized. How many average people don’t use a central service like Gmail? People are choosing the things Free Software doesn’t like because for them it is better. Full stop.
Free Software has no competitive answer for anything the big names are doing for the average user. Not one thing. There’s enough people interested in it that competitive answers could be developed and find a market. The problem is unrealistic ideals (decentralization, often; how much of global SMTP traffic is spam, again?) placing unrealistic requirements upon a project and focusing all of its resources on things the average user doesn’t care about. What if I wrote a Facebook that cared about privacy and, somehow, miraculously got billion MAU traction? You’d be right back banging the same drum making you irrelevant about surveillance state apparatus because it’s centralized. Except the whole point of Facebook is network effect. So.
You’re deluding yourself if you think billions of people are clueless and Free Software has all the answers, if only there were a little more budget. Just look at the worldview evident in your comment. That sort of talk will make most anybody glaze over, and you immediately lose your chance to pull them to a better way of doing things. “Only use things to which you can read source code.” What’s source code? Nobody cares about the ideals. Adapt accordingly if you want to effect meaningful change. That’s my point.
There’s literally decades of proof based on what’s successful that we don’t have a firm grasp of how to adapt software freedom to Joe Random. You and I both know it’s better. Now figure out how to tell them and design systems that are palatable to the average user.
That’s my other point: I don’t think a customizable system with Free Software ideals in mind can sufficiently protect and be useful for the average user. I want to be proven wrong.
If the FSF had a billion dollar budget, they could copy the UI and infrastructure of Faceboot, include hooks that allowed people to run their own code/servers/etc, and then pump the resulting product LA^h^hSV-style until it caught on. There would be no need to criticize it for being somewhat centralized, because the option to run your own part would exist. And rather than powerlessly complaining about unilateral corporate decisions, even normal users could share software fixes instead.
The FSF itself might specifically not be capable of this (due to an aesthetic formed from decades of waging trench war), but a generic Free software organization certainly would be. What's missing is the billion dollars to develop, refine, and endlessly promote this idea, because there is no scalable profit in actual Free software!
I don't think we're actually disagreeing on the above lead-in topic - we're just coming at this from wildly different paradigms. My comments are not trying to convince "billions" of people - I'm resigned to the fact that people will tend to choose the shiny promoted thing, even when the shininess itself is paid for by the harm being done to them!
But that does not mean that people who should know better (e.g. programmers) should find justification, or even escape scorn, for creating such things. Eroding users' ultimate control over their devices goes past the paternalism of protecting careless/ignorant users, and into the territory of malfeasance. Such things are against any basic sense of professional ethics - the same type of principal-agent violation as a sysadmin who creates a backdoor for their future self.
To bring this back to something concrete, look at the paradigm of control promulgated by contemporary bootloaders. Verifying running code is a worthwhile goal, but there is no reason this requires trust rooted in manufacturer-kept keys. Rooting the trust in symmetrical algorithms or user-loaded signing keys would accomplish the same exact goal while still preserving user freedom. And then the default scheme of only trusting code signed by the manufacturer would be implemented on top of that. But doing so would require a little more implementation complexity (giving up control inherently makes things harder), and be less profitable.
> There's literally decades of proof based on what's successful that we don’t have a firm grasp of how to adapt software freedom to Joe Random
My above points are essentially refuting your point here. The lack of mass-appeal Free software does not imply we don't know how to make it, as it can also be explained by simply not knowing how to fund it!
There are plenty of things in life that have rough edges and "average users" have ultimately learned to "not do that". Software is at an analogous stage to people putting water in the engine oil because of "one simple trick", but this does not justify welding the hood shut on all new cars!
I think what we want can be built on capability security. https://sandstorm.io/how-it-works is a more recent start on this sort of thing which saw a proof of concept in CapDesk back around the turn of the century.
How is a hardware switch going to let me (and software I write) do whatever I want with the machine while preventing software from the Internet from doing the same?
You're suggesting a hardware switch capable of turning a general-purpose computer into a completely locked down computer, with a trusted chain all the way from physical switch to the code doing "computing mode" transition?
Sounds interesting, though I fear that at that point, companies won't even bother providing the "general-purpose" mode anymore.
The first ones were released with a physical switch behind the removable (shock, horror) battery. Later ones have a magic key press that does the same thing (though one variant that Google sold directly had a flaw where the state was stored in volatile memory).
This is how people install things like Crouton and whole Linux distros.
Note btw that by default a Chromebook updates silently in the background (in a fashion that Android gained with 8.0).
It has two partitions: one is active, one is dormant.
When an update is released, it gets applied to the dormant one. And on the next boot, the two are flipped. If the newly updated partition fails to boot, the system will switch them back on the next reboot. And if it succeeds, it will become the active one while the older version goes dormant until another update is released.
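The flip logic is roughly this (a toy Python sketch; the slot names and the health check are invented for illustration):

    # Toy sketch of A/B updates: write to the dormant slot, flip on reboot,
    # flip back if the new slot fails its boot health check.
    slots = {"A": "61.0 (good)", "B": "60.0 (good)"}
    active, dormant = "A", "B"

    def apply_update(new_version):
        # The running system is untouched; only the dormant slot is rewritten.
        slots[dormant] = new_version + " (unproven)"

    def next_boot(boot_succeeded):
        global active, dormant
        active, dormant = dormant, active          # try the freshly updated slot
        if not boot_succeeded:
            active, dormant = dormant, active      # fall back to the known-good slot
        else:
            slots[active] = slots[active].replace("(unproven)", "(good)")

    apply_update("62.0")
    next_boot(boot_succeeded=True)
    print(active, slots)   # -> B {'A': '61.0 (good)', 'B': '62.0 (good)'}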
Keep in mind that little if any state is stored locally.
And frankly, if Mozilla wants to get back in the game they should consider producing a similar system that also offers a set of backend services (or collaborating with others that can offer said services) and the means of bootstrapping such services independently.
Because right now, while Chromebooks are being adopted rapidly in the USA and Canada, European nations are wary thanks to the question of where the data is stored and who can potentially access it.
It is weird, because we're not talking about a switch to turn networking off; we're talking about a switch to turn general-purpose computation off, making your computer a completely locked-down sandbox. This is unprecedented (aside from Chromebooks, it would seem).
While computer hardware is more ubiquitous than ever, actual usage of general-purpose computing is vanishingly small. I had no idea how bad it was until recently. Linux and "open source" are just buzzwords; their use should not be taken as a sign that people are computing freely. Even programmers who use "open source" software don't actually have complete control (see the furore over Microsoft changing the colour of an icon in a popular IDE).
Corporate proxies today are obscenely intrusive and if anyone even knows what a "root CA" is they have no idea what it's used for. Many places get people to install them on to personal devices and of course most people do as instructed. This is the environment in which most people do their "computing" and it's all they know.
>Those worrying about security should remember that device drivers already run in ring 0 and can do anything they damn well please.
Not in a proper microkernel, so that's fixable.
>Thus I say: Good on Savitech for not being afraid to rebel against; and fuckings to the corporatocracy that is certificate authorities and the authoritarian security industry.
"Fuck to the CAs" I get.
But there's nothing about what Savitech did that's good.
Savitech didn't rebel against control, they were just lazy in not revoking/limiting something they, by their own admission, did not need any more.
If you don't want to participate in mainstream computing with its certificate authorities and authoritarianism, there are always alternatives for you to use.
Use Linux, use hardware which focuses on freedom and privacy, these options are freely available.
Maybe it's time for desktop operating systems to adopt permissions systems like smartphones. Permission for network access, permission for non-current user files and registry, permission to install certs.
Unfortunately all "secure" or "trusted" computing efforts seems to be focused on depriving the owner of permissions and command over the computer, and instead transfer that to large copyright holders.
But I suppose the Android security model would make sense, which seems to be based on a traditional Unix security model combined with each program running as a separate user and having its own set of group memberships.
As long as I don't need to install a rootkit on my own computer.
The problem is that non-technical users have no information on which to base their decision to allow or deny.
Consider a user that has no idea what SSL, TLS, Certificates, Encryption, HTTP, drivers, program signing even mean. What do you put in the prompt that would allow the user to make an informed decision about whether a program they downloaded should be able to install a cert?
We already have such prompts when trying to connect to an HTTPS website with an invalid/expired cert. They do a good job of discouraging users from proceeding, as they should. I see no reason a similar prompt couldn't be shown when trying to install a root CA on a Windows machine. The problem with current Windows prompts is that they are all alike and shown too often, so users have simply learned to ignore them. Actions that may seriously affect safety and privacy (the category a root CA falls into) should be protected by a distinct prompt, not the boring "run as administrator" one.
If Microsoft (or any vendor) wants to sell security, they have to be responsible. Then they have to sign the drivers.
Yes, it's a whole lot of Single Point of Fuck, but that's what it takes. Hence we have the CA model. We have a "few" trusted authorities.
This could be made into a reputation market thing. So the user could buy security from a vendor. If a vendor is too strict, it'll have few users. If a vendor is too lax, we need a negative signal to penalize its reputation, maybe IP packets should contain a sort of fingerprint of the vendor. So if we see a lot of spam/DDoS from a vendor, it should cost them.
I have watched users blindly click past dialogs that must have been showing up for them daily for years without ever showing the consciousness to click the "Do not show me this again" checkbox that has always been on that dialog.
That's good. They do that because it's easy to misunderstand the question. Closing the permission request is almost always the safest response. In this case, it would be the same as "Deny".
Now we're talking about manipulative UX. I would even go as far as to call that "malware".
The original discussion was on the level of "assuming we can trust the OS that it's not trying to trick us, this dialog helps us decide whether to trust the app." As we have seen in the past, Windows no longer upholds this assumption.
The problem is how do you educate users on these prompts.
Same way you educate people on how to vote: functional literacy [0]. If people are functionally illiterate, they are going to struggle in all sorts of ways. One of the big drawbacks of our society is that its complexity seems to be growing without bound. This places ever-higher demands on people's ability to read, interpret, and act upon important information in their daily lives.
I consider myself tech savvy (by no means do I understand that much more than an average user). When I tried to enable my company's mail on my personal iPhone, I was prompted to install a certificate. In my haste I thought it was just a certificate for *.my-company.com domains, but the iPhone showed me a message along the lines of "the owner of the certificate would be able to read/modify all my network communication, monitor/install/uninstall all apps I have..." and so on. That was enough to stop me in my tracks and now I'm waiting for a company phone instead.
That's pretty common, and is one of the reasons for some of the third party exchange clients on Android devices. At least one of the early ones (Nitro?) maintained its own separate data store for email and possibly other things so that when policies were used to wipe data it could wipe only the data in that app instead of the entire device.
The counter to that is that now I believe there are a bunch of Exchange clients on Android that will simply ignore server policy or where handling of policy can be controlled in the settings, which kind of defeats the point.
The original point of all those policies was to be able to erase supposedly secure content if the device was lost or if someone left the company for whatever reason.
Edit: the Exchange client I was thinking of is TouchDown, now owned (and EOLed) by Symantec but I believe originally from NitroDesk.
Yeah, my company requires something like that if you want to access their stuff from your phone. By doing so you enable them to erase (non-company) stuff from your phone and monitor it. Most employees just say no at that point.
"If I say no, will my app work? Because I'm installing this app so I want it to work. I'll probably just say "yes" because why would it ask me if it didn't need it?"
This is ultimately the problem. The dialog could say "This will trash your computer, empty your bank account, and kill your dog. Do you want to continue? YES/NO" and users will click YES if that's the only way to install whatever software they want to work.
This issue is caused by developers making apps unavailable for installation unless you agree to every requirement.
Developers get away with this because 1) individually selecting permissions is increasingly rare, and 2) there's no pressure to explain why a permission is needed, nor are there specific contracts to agree upon for how a granted permission may be used.
Android used to show this about a crappy Samsung app on my phone in the details view:
- Using sys.whatever.whatever [redtext][Should only ever be used for debugging.]
I can't remember what app it was but it had an insane amount of unnecessary permissions even though it was just a simple app. I used to tell people it was my NSA app.
That works on phones because app developers desperately want their (typically) free app to be installed and permissions is one thing that turns people away, so they try to minimise permission requests. Additionally, apps (at least on iOS) are expected to work even when some permissions are refused (eg. camera access for a shopping list app) so the permission request is often one you can genuinely say "no" to.
But device drivers for a desktop machine? The user has paid good money for that device and are going to grant every permission they need to get it working. Asking for each permission individually is just noise.
This is exactly what Windows 10 is doing with UWP apps. People hate it and complain about it and give it a bad reputation. But UWP apps have drastically reduced ability to torch your OS without permission, and have Android like permission grants on the Store page which say what access the app has to your system.
But the problem is the user interface and programming environment is shit for anything past basic stabby finger novelty apps and no one trusts them enough to invest heavily in it. Oh and the store is a desert of turdblossoms.
Well, kind of. The issue is that Centennial also mostly overrides UWP's sandboxing. (Notice Centennial apps have "full access permission".) This is not a good solution, it is a stop gap.
UWP is more than capable of supporting advanced, quality desktop apps. The issue is just that while Windows 7 is so prevalent, developers have little reason to prioritize native UWP dev, which won't run on half the Windows userbase.
And how would they do that? There is the Mac App Store and Windows Store, but most apps are still installed without using them due to their restrictions.
System call returns an error code if you lack permission. The default handler could be to ask, but if the user says no then the application has to handle the error or crash.
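Something like this, as a minimal Python sketch (the path is a placeholder); PermissionError is how a denied permission typically surfaces to the application:

    # The app either handles the denial or crashes; nothing is silently granted.
    def read_address_book(path="/protected/contacts.db"):
        try:
            with open(path, "rb") as f:
                return f.read()
        except PermissionError:
            # The OS (or the user at the prompt) said no: degrade gracefully.
            print("Contacts unavailable; continuing without them.")
            return b""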
I was one of the people who went ape over UAC. It wasn't that UAC showed up too much; it was that the devices were not capable of it. I think I've been vindicated by the subsequent lawsuit which revealed the "Vista Capable" vs "Vista Ready" fiasco. I mean, literally the screen would go blank and there'd be no indication for up to about three minutes when a UAC prompt came up on an HP Compaq laptop...
Think about it this way. I've never seen anyone complain about full disk encryption on an iPhone 6 or later. Do the same on a Windows machine with 5400 rpm spinning rust...
I would honestly be more worried about the root CAs which are enabled by default in the most popular OSes and browsers, with root CA privileges for government of China controlled entities, Turkish government entities and unethical/shoddy root CAs such as Symantec. The Netherlands recently passed a law allowing the government specifically to use false keys and run MITM on crypto, which brings into question all .NL based CAs.
Those CAs are at least accountable to browser vendors. (Symantec, for example, is currently in the process of having their root certs distrusted by Chrome and Firefox as a result of their violations of the BRs.) The same can't be said of private root certs like this.
A good question to be asked might be why we are still conflating the certificate store used for software signing with the certificate store for domain validation. They're two entirely different problem domains using the same tool. Much like it doesn't make sense to use a drywall screw to join eyeglass frames together despite both attachment mechanisms requiring screws, it doesn't make sense to allow a system to validate a web domain as being trustworthy because an audio driver requires a certificate trust to validate itself as being legitimate.
Most of this separation is done on good-faith already, but it should be done in a more discrete manner.
There is an advisory (i.e. non-binding) referendum about this law coming up.
Specifically, the referendum is about reversing the law.
Notably, some parties in the newly minted government have declared their intention to ignore the referendum. They back this by two arguments "It is needed for security" and "We are going to remove the advisory referendum anyway, so we get to ignore this one".
That second point is kind of interesting, because the referendum is possible due to a rather new law. We had one before that went rather poorly, so now we want to get rid of it.
The actual law is here [1]; this site [2] advocates for the referendum. I'm afraid I don't know of any English sources.
Quoting from the law, and applying my own translation
>>
Article 45. Member 1
The services are authorized to:
a. (Basically, do exploratory searches of networks)
b. Use false signals, false keys, false identities or intervention by third parties to gain access to automated systems. This can be done with the help of technical tooling.
Article 45. Member 2
The authorization from member 1b above also authorizes:
a. The defeating of any security measures
b. Installing technical measures to reverse encryption on data stored or processed by automated systems.
c. (references article 40)
d. To copy data stored or processed by an automated system.
Article 45. Member 3
(summarized, the government needs to give written permission for any of the above to happen)
>>
This seems to be the referenced passage based on a preliminary search.
In this specific case (installing a root certificate) I would say it's not actually a CFAA violation because they're not "obtaining information", "defrauding", or "intentionally causing damage". You might be able to argue that installing the driver on a "government computer" produces a violation of 18 USC 1030 (a) (3), but the rules for a mere "protected computer" which covers most internet-connected personal computers are actually not strict enough to cover this.
"Whoever [...] knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer [...] shall be punished as provided in subsection (c) of this section."
"[T]he term 'protected computer' means a computer [...] which is used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States"
"[T]he term 'damage' means any impairment to the integrity or availability of data, a program, a system, or information"
That charge would depend entirely on whether or not installing a root certificate can be considered "intentional" "impairment to integrity". I can't find any case law that would provide the interpretation for this, but I think it's safe to say you probably won't be able to overcome the burden of proving that they caused the damage intentionally if it even qualifies as damage (which I find unlikely).
Law is a different language. While it may read like computer code, it isn't. Well, it is, but it's as if "goto" and ";" had different meanings depending not only on the last keyword used, but also on the mood of the computer. I wish a straightforward reading were possible, but it's often not.
Those exact ambiguous laws are the ones that are abused most easily.
There is a hard balance between being general enough to cover bad things, without being so general as to allow authorities to prosecute anyone on some charge.
If so, there will be a long drawn out investigation at taxpayer expense which results in a fine which is both a) too small to impact the revenue of the company, and b) immediately waived on the condition that they stop doing what they already can't do by law.
It seems unacceptable to me that the updated drivers do not automatically uninstall the CA. How is an ordinary user meant to navigate the certificate store and delete the CA?
Phrased differently: the Microsoft Windows operating system allows silent installation of a root certificate during installation of an unrelated USB driver, despite featuring a micro-kernel design.
Can someone explain root certificates to me and why this is an issue? I know they sign certificates with a private key at a high level, but don't get the implications of that generally.
Think of trust relationships as a forest of trees. The initial seeds of these trees are root certificates installed by browser vendors (Chrome: Google, Safari: Apple, Firefox: Mozilla, etc.). Whenever you open an https site, your browser checks whether to trust that site: it trusts it if the site's public key is signed by one of the roots shipped/installed by the browser vendor.
You can also add your own cert to the root cert list of any of the browsers; then any site key signed by your cert is going to be trusted. Next time your browser says "Don't trust this site", explore the certificate chain.
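As a rough illustration of the core check a browser effectively performs, here's a sketch with Python's pyca/cryptography; the file names are placeholders, and real validation also checks expiry, hostnames, intermediates and revocation:

    # Is this site certificate signed by a root we already trust?
    from cryptography import x509
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import padding

    with open("trusted_root.pem", "rb") as f:
        root = x509.load_pem_x509_certificate(f.read())
    with open("site_cert.pem", "rb") as f:
        site = x509.load_pem_x509_certificate(f.read())

    try:
        root.public_key().verify(          # assumes an RSA root, for simplicity
            site.signature,
            site.tbs_certificate_bytes,
            padding.PKCS1v15(),
            site.signature_hash_algorithm,
        )
        print("Chains to a trusted root -> the browser shows the padlock")
    except InvalidSignature:
        print("Not signed by anything we trust -> the browser warns the user")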
Does this mean Savitech needs to hypothetically set up some MITM attack somewhere and wait for you to send traffic, which they can then decrypt and read, or does it mean that they can do that directly from your computer by virtue of that root certificate?
They would need to MITM you. But take into account that it doesn't need to be Savitech. If Savitech was compromised, an attacker could get access to their private key.
In a sense, your security becomes dependent on the security of Savitech. I imagine their private key is not as securely stored as a real CA would store theirs. (e.g. with Superfish, Lenovo included the private key on all laptops, for anyone to grab[1])
They would need to MITM you somewhere, but that could probably happen in their audio driver that's already installed and running on the target machine.
Well you can look at the certificate store by running certmgr.msc, but it's a dangerous game - do you trust Go Daddy, COMODO or Symantec any more than you do Savitech? They have all at one point or another given reason to not even entrust them with organising a piss-up in a brewery.
Other applications like Firefox have their own independent root CA store.
Mmm. I think the brewery test is an unfair comparison. Our problem is not that the major CAs are hopeless. If they were hopeless we'd have abandoned PKIX years ago. Instead the problem is that they're good but not as good as we'd like.
We are looking for somebody who can run the aforesaid event fifty times a year for the general public without anybody falling into any of the machinery. In hindsight, drunk people in an industrial workplace was a mistake, and so we can and should demand they do their best to make it safe, but perfection just isn't to be expected.
>Microsoft provides guidance on deleting and managing certificates in the Windows certificate store
Microsoft should mark these as malicious and quarantine them using their built-in AV. If the end user needs them he can remove them from quarantine. Posting advisories no end user will ever see isn't helping much.
Malware is certainly a strong term, and generally the definition seems to include computer code, which would exclude installing a certificate.
However, installing your own root CA certificate on someone's computer means you can read all HTTPS traffic originating from that computer, and fake responses. Likely, thanks to having installed that certificate, you can read someone's emails, move money out of their bank account, and view any files they have stored online.
The effect of installing a certificate is broadly similar to the effect of installing a keylogger, and in neither case have you been given a right to do so. In both cases you have altered someone's computer in such a way that you are able to read their encrypted communications, which is certainly in the spirit of what malware means to me.
I'm sure that the intent in this case was not malicious, but we would not accept software installing a keylogger because they wish to measure your typing speed, and we should not accept this.
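To make the earlier MITM point concrete: whoever holds the private key of a root your machine trusts can mint a certificate for any hostname, and your TLS stack will accept it. A hedged sketch with pyca/cryptography (the file names and hostname are placeholders):

    # Mint a leaf certificate for an arbitrary hostname, signed by a rogue root
    # that the victim's machine already trusts.
    from datetime import datetime, timedelta

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    with open("rogue_root.key", "rb") as f:
        ca_key = serialization.load_pem_private_key(f.read(), password=None)
    with open("rogue_root.pem", "rb") as f:
        ca_cert = x509.load_pem_x509_certificate(f.read())

    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    leaf = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example-bank.com")]))
        .issuer_name(ca_cert.subject)   # chains to the root the victim trusts
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=90))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("www.example-bank.com")]),
                       critical=False)
        .sign(ca_key, hashes.SHA256())  # this signature is what the victim's browser checks
    )
    # Serve `leaf` + `leaf_key` from a MITM proxy and the victim's browser sees a
    # "valid" connection to www.example-bank.com.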
As described above, some versions of Windows require drivers to be signed proving who made them. For this to work Windows needs a list of CAs trusted to issue the certificates. Whether "I am not paying somebody £100 for a cert" constitutes a valid reason is arguable. But that seems to have been their plan here.
Yes, but CAs can charge money based on (more or less) number of CA root cert installations on the target devices for a company.
Some of your competitors have had their current root certs in device preinstalled for a lot longer than you. Entrust and GlobalSign have 2048 bit roots with Not Before before 2000.
If I'm going to go with a Johnny-come-lately root, I may as well use Let's Encrypt because it doesn't cost money. Also, audio drivers may get you desktop share, but getting into the platform store on mobile is a lot harder.
The major trust stores partition their trust of a root by purpose. So Microsoft's trust of a particular root for signing certs that are used in driver code signing is separate from not only Apple's trust of same but also Microsoft's trust of that root for signing TLS certificates. Let's Encrypt doesn't ask for any "trust bits" besides the Web PKI, ie TLS certificates and that's completely deliberate.
Purposes other than TLS server and /maybe/ S/MIME are not subject to any meaningful public oversight, you are entirely trusting Microsoft. Which for drivers, or Xbox games is probably fine but it's worth keeping in the back of your mind.
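The per-purpose trust bits live in the vendor's trust store, but certificates themselves carry the related Extended Key Usage extension, which you can inspect yourself; a small sketch with pyca/cryptography (the file name is a placeholder):

    # What purposes does this certificate claim for itself?
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    with open("some_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    try:
        eku = list(cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value)
    except x509.ExtensionNotFound:
        eku = []  # many roots omit EKU entirely; the trust store decides instead

    print("TLS server auth:", ExtendedKeyUsageOID.SERVER_AUTH in eku)
    print("Code signing:   ", ExtendedKeyUsageOID.CODE_SIGNING in eku)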
A bad actor could just as easily post a script as "Show HN" with some cool stuff in it that you install via:
curl https://example.com/some_script.sh | bash
A lot of people don't check those. Use the non-OSS Nvidia or ATI drivers? You have binary blobs (don't for ATI btw, the OSS ones are 10x better). Use Bluetooth/Wi-Fi on Linux? Congratulations, you are using closed binary blobs.
I still love Linux, but I don't hate windows. We're not in the 90s. Bill Gates isn't master of the Borg.
I think you underestimate the insidiousness of such systems, and your response is completely logically fallacious and doesn't support your conclusion whatsoever. So because I could install a random script via curl (I prefer wget), and because there is still some proprietary hardware that needs closed-source binary blobs... that suddenly means GNU/Linux and Windows are on the same footing? No, not at all, and I'm tired of hearing that trite and clichéd response. The four freedoms matter.
[1] https://github.com/koush/UniversalAdbDriver