This is a useful point of view I'll be adding to the toolbox. Looking at testing and type as TTP entities is already fun. This line:
> Making personal property functionality dependent on trusted third parties (i.e. trusted rather than forced by the protocol to keep to the agreement governing the security protocol and property) is in most cases quite unacceptable.
sums up a line of thinking I'd like to explore a bit.
> These institutions have a particular way of doing business that is highly evolved and specialized. They usually cannot "hill climb" to a substantially different way of doing business. Substantial innovations in new areas, e.g. e-commerce and digital security, must come from elsewhere.
Work I've done in finance-related shops fits neatly into the TTP model, and that model explains most of what I used to see as tech debt. As a TTP, being able to guarantee a result was a core part of our business, so processes would be set in stone once people used them to do serious work. If the initial design was written and implemented by people with a grasp on each domain it touched, you'd need to have all that under your belt before thinking about real changes.
The immutability + cost factor makes existing work look foundational, and innovations in processing are easy to spec out and calculate a return on.
Deem is such an interesting word: to unilaterally suppose, consider or judge something to be. It appears a lot in arrogant, forceful and abusive legalese. "You are deemed to have accepted..." and so on.
In the "forced trust" society people will be "deemed to trust". Asymmetrically, of course: for them (the abusers) it will be seen as a "zero trust" system, for "security" (not yours).
Could have been written yesterday; it seems pretty fresh to me.
I regret TFA's confounding of the role of CAs with the role of DNS. The bug with CAs is that the role exists at all; the bug with DNS is down to implementation.
CAs are a solution to the problem of how to securely introduce two parties that don't know one another. That's a fundamental problem, and CAs aren't the only solution, but there aren't many. IMO, CAs are a really awful solution. As TFA says, they are unreliable and expensive to operate. WoT is a better solution, but expensive to set up, which is perhaps why it's unsuccessful.
[Edit] WoT perhaps isn't a better solution; you still have to be able to find the trusted certificate, and ultimately that means you still need a trusted introducer. That means a TTP; or you meet in person, which is in conflict with the demand that this is someone you haven't met. But so what if you've met them? It comes down to bank statements and driving licences, which amounts to official government-issued ID. But suppose what I want to do or say is unrelated to my official ID, and I'd prefer to certify independently of government and officialdom? Suppose I'm stateless, and have no government? I'd like self-signed certificates to be treated by browsers as first-class citizens.
[Edit 2] Given certificates done right, everything else falls away; you can do what DNS does using your certificate. You can sign a DNS zone, or sign any other statement you want, and the certificate itself can direct people to those signed statements.
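To make that concrete, here is a minimal sketch of the idea (my sketch, not anything TFA specifies): a key pair you control signs arbitrary statements -- a DNS-style record here -- and anyone holding the public key can verify them with no CA in the loop. It assumes the Python `cryptography` package; the statement content is just a placeholder.

```python
# Minimal sketch: a self-managed key signs arbitrary statements, and anyone
# with the public key can verify them. No CA involved.
# Assumes the 'cryptography' package; the statement below is a placeholder.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()        # publish this, e.g. in a self-signed cert

statement = b"example.org. 3600 IN A 203.0.113.7"   # any statement you want to vouch for
signature = private_key.sign(statement)

try:
    public_key.verify(signature, statement)  # raises InvalidSignature on mismatch
    print("statement verified against the published key")
except InvalidSignature:
    print("signature does not match")
```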
DNS solves another problem: how to map names to [data], where [data] is usually an address of some kind. It assumes a particular naming system, and that system of names isn't fundamental. It's a layer on top of lower-level addressing systems, and could be replaced without affecting lower layers. It's dirt-cheap to operate; what's expensive and buggy is the market for names.
I agree. But the only solutions that can work are ones everyone agrees to use and, for right now anyway, everyone's settled on that model. There isn't much choice.
I disagree that the problem is a lack of choice, because what realistic alternative choices actually exist? Web of trust a la PGP key-signing parties is a ridiculous solution for people who aren't super nerds (and even for super nerds it's pretty bad). Trust on first use works OK-ish in SSH, but it's hard to see it as a realistic solution for trust as a whole if you try to scale it up to every user needing to access arbitrary servers.
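For what it's worth, the whole of TOFU fits in a few lines; something like this sketch (the pin file and function name are invented for illustration) is essentially what SSH's known_hosts does:

```python
# Trust-on-first-use sketch, roughly what SSH's known_hosts does: pin the key
# fingerprint seen on first contact and complain if it later changes.
# The pin file and function name are invented for illustration.
import hashlib
import json
import os

PIN_FILE = os.path.expanduser("~/.known_keys.json")

def check_tofu(host: str, public_key_bytes: bytes) -> bool:
    fingerprint = hashlib.sha256(public_key_bytes).hexdigest()
    pins = {}
    if os.path.exists(PIN_FILE):
        with open(PIN_FILE) as f:
            pins = json.load(f)
    if host not in pins:
        pins[host] = fingerprint             # first use: trust and pin
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return True
    return pins[host] == fingerprint         # later uses: must match the pin
```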
Maybe CAs are a bad solution, but if so it's in the sense of the quote about democracy being the worst form of government except for all the other ones. At the end of the day, the CA system has managed to deliver working cryptographic trust at a scale that galactically surpasses anything else.
CA identities only work for real names, not pseudonyms. The web of trust has scalability issues on top of that.
To plug my own work: I came up with a protocol[0] to tie identities to publications, using the DNSSEC root as trust anchor. This lets strangers verify certificates and establish trust that a message was certainly sent by the owner of the key. This allows for key exchange between strangers, assisted by a website as introducer that does not need their true identities.
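Not the protocol itself, but as a rough illustration of leaning on the DNSSEC root as trust anchor: query through a validating resolver and refuse to use the answer for key material unless it comes back with the AD (authenticated data) bit set. This assumes the `dnspython` package and a resolver that actually validates.

```python
# Rough illustration (not the protocol above): only trust DNS answers for key
# material if a validating resolver marks them DNSSEC-authenticated (AD bit).
# Assumes the 'dnspython' package and a DNSSEC-validating resolver.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["9.9.9.9"]            # example validating resolver
resolver.use_edns(0, dns.flags.DO, 1232)      # request DNSSEC records
resolver.flags = dns.flags.RD | dns.flags.AD  # ask for authenticated answers

answer = resolver.resolve("example.com", "TXT")
if answer.response.flags & dns.flags.AD:
    print("DNSSEC-validated:", [r.to_text() for r in answer])
else:
    print("not validated; do not use this for key material")
```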
> what realistic alternative choices actually exist?
Yeah, that's a problem. I certainly can't think of one. But even if there were, it would still be an enormous struggle just to shift to using it at this point. That struggle also means that there isn't a huge amount of effort being put into finding a better way.
The CA system certainly has great benefits in terms of delivering reasonable security in a relatively convenient way, but that doesn't take away from the fact that it still sucks in a ton of different ways that are inherent to the idea. It also can't be used (at least not without great struggle) in some situations.
I definitely agree that there is an inertia problem, as there is with a lot of technology. But I also think changes can happen quicker than people think if new alternatives are introduced that solve fundamental problems with the entrenched solution.
Arguably this actually has happened with the legacy CA system; people just don't think of it that way because it looks like Let's Encrypt and certificate transparency instead of something involving blockchain or whatever. LE and CT were huge disruptions in the CA space and in my view fixed a lot of the problems with the CA system, even if they didn't totally get rid of it. To borrow your excellent phrasing, they made it suck in a lot fewer ways than it used to. Maybe some complete replacement could do better, but there is a lot to be said for iterative improvement of an existing solution instead of hoping for a fantasy total replacement.
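As a small illustration of what CT buys you: anyone can audit which certificates have been logged for a domain. This sketch assumes the public crt.sh search front end and the `requests` package, neither of which is part of CT itself.

```python
# Sketch: list certificates logged (via CT) for a domain, using the public
# crt.sh search front end. Assumes 'requests'; crt.sh is just one convenient
# window onto the CT logs, not part of CT itself.
import requests

domain = "example.com"
resp = requests.get("https://crt.sh/",
                    params={"q": domain, "output": "json"},
                    timeout=30)
for entry in resp.json()[:5]:
    print(entry["not_before"], entry["issuer_name"], entry["common_name"])
```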
Interesting that you bring up LE as an improvement of the CA system. I view it as a fairly messy hack (with its own problems) around some of the faults of the CA system. I almost brought it up as an example of some of the faults of the system.
We both see the same thing, but through different lenses. I have no point here, I just found that interesting.
Yeah, I didn't mention TOFU because it's a non-starter if you're trying to get a secure introduction to someone you don't know.
The fundamental problem is that "identity" is problematic. How do you know someone is who they say they are? Perhaps you don't really know who you are. That's why I want a space for a certified identity that isn't tied to a government ID. Like, an instance of some software might claim an ID. Arguably, a domain is an ID. I don't see what's wrong with something or someone having multiple IDs that aren't linkable.
I agree with a lot of that, but arguably the current CA system nearly gives us that thanks to Let's Encrypt. Registering a domain requires some real government identity, but LE doesn't actually care about that. All they do is vouch to a potential visitor that they have verified that the TLS certificate being presented belongs to the entity that controls the domain the visitor is attempting to connect to. From LE's point of view, they don't care who owns the domain or whether the owner of domain A is the same as the owner of domain B... the domain is the identity as far as they're concerned.
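For the curious, that domain-control check is roughly the ACME HTTP-01 challenge: the CA hands out a token and the applicant proves control by serving a key authorization at a well-known path on the domain. A minimal sketch follows; the token and thumbprint values are placeholders, not real ACME values.

```python
# Minimal sketch of an ACME HTTP-01 challenge responder: serve the key
# authorization (token + "." + account-key thumbprint) at the well-known path
# so the CA can fetch it over plain HTTP and confirm control of the domain.
# TOKEN and KEY_AUTHORIZATION are placeholders, not real ACME values.
from http.server import BaseHTTPRequestHandler, HTTPServer

TOKEN = "example-token"
KEY_AUTHORIZATION = "example-token.example-account-key-thumbprint"

class ChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == f"/.well-known/acme-challenge/{TOKEN}":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(KEY_AUTHORIZATION.encode())
        else:
            self.send_response(404)
            self.end_headers()

# The CA's validator fetches http://<domain>/.well-known/acme-challenge/<token>
# and compares the response body against the expected key authorization.
HTTPServer(("", 80), ChallengeHandler).serve_forever()
```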
My only complaint with Let's Encrypt is that I'm uncomfortable with so much of the Internet trust infrastructure relying on the generosity and good will of a single organization, but it's hard to imagine a better solution to the underlying problem than what they've created (other than perhaps a bunch of competing Let's Encrypts).
Stepping outside the main thrust of the article...
> The functionality of personal property has not under normal conditions ever depended on trusted third parties.
If so, it's a sign you have a low-trust society in a more general sense. In a high-trust society it's easy to say "This is my personal property, no one is going to mess with it". Things like your car still being out front the next day, or your goats not getting stolen, and any number of other social expectations are quietly met every day. You don't have to police your property all night and can get a good night's sleep. This allows you to get a job as a specialist and make a good income, along with all your neighbors, which brings more prosperity.
Trusting and not trusting come with different costs that must be calculated.
Well, the context of the article is then-recent, buggy implementations of SSL and other security protocols that were unproven at the time, and government entities providing black-box crypto with either explicit or occult back doors. If by high-trust you mean I'm going to trust a government entity's protocol, or its implementation of a security protocol which doesn't provide decent forward secrecy or non-repudiation, well then, even in a high-trust environment that sounds like a bad idea.
Trusting is indeed probably always more efficient than not trusting -- however, the crux of the matter is what your options are when trusting is not possible. You can always choose to trust in the presence of a trustless mechanism, but it does not work the other way around.
People trusted their shiny Teslas to be fine out front with the key protected at home. Then thieves figured out that the keys were most likely just behind the front door and used signal boosters to unlock and start the car.
A paranoid owner putting the key in a small Faraday-caged box would not have that problem. With all the smart things we use today, the attack surface is not insignificant, and thinking about it is just common sense.
> If so, it's a sign you have a low-trust society in a more general sense.
Isn't that kind of orthogonal to the question? I can envision both high and low trust societies that could have problems with required 3rd parties attached to personal property.
Even the most well-intentioned and highly trusted 3rd party is still susceptible to malicious attacks, or just incompetence or plain old accidents.
Scale is the question. In some societies you can get by with trusting others; once in a while you lose, but the overall gain from being able to trust that your stuff will still be there is much greater.