Mozilla and Google Objections Overruled on “Decentralized Identifiers” by W3C (w3.org)
289 points by shiftingleft on June 30, 2022 | hide | past | favorite | 187 comments



Something that should be a bit of a warning flag is that I have two decades of identity-related experience but I still have no idea what DID even is.

For reference, I've worked with three vendors' implementations of LDAP, several versions of SAML, OAuth, JWT, Okta, Azure Active Directory, etc, etc... I've even deployed Smart Card authentication in the field several times.

I literally have no idea, not a clue, what DID is supposed to be in a practical sense, despite having read a significant volume of material on the subject.

Like, okay, it's "identity"... somehow? How? What? Where?

The documentation is impenetrable buzzword-compliant gibberish that makes SAML's documentation look like crystal-clear poetry in comparison.


I found it all pretty simple after looking at it briefly when I first learned about it.

A DID URI is a URI with a 'method' and globally unique part: did:method:somegloballyuniqueid.

The "did" part is literal; a standardized URI namespace. The method part is some symbol that specifies how the unique id resolves and its representation (JSON, whatever.) The method part is what this story is about; W3C has declined to enshrine a set of methods in the standard.

Instead, W3C is delegating to a registry of methods. This registry has already grown to a sizeable number.

The idea is that you resolve a DID URI using its method and obtain a DID 'document'. This document has claims, credentials, etc. The DID owner can cryptographically prove the document represents them, and relying parties can cryptographically verify claims in the document.

The actual workflow is more involved than described here but that's the gist of it.
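For the curious, the URI shape described above is trivial to parse. A quick sketch (illustrative names, not from any DID library):

```python
def parse_did(uri: str):
    # Split a did:method:id string into its method and
    # method-specific id, per the shape described above.
    parts = uri.split(":", 2)
    if len(parts) != 3 or parts[0] != "did" or not parts[1] or not parts[2]:
        raise ValueError(f"not a DID URI: {uri!r}")
    return parts[1], parts[2]

method, unique_id = parse_did("did:example:123456789abcdefghi")
```

The method string is then what tells a resolver which registry entry's rules to apply to the unique part.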

BTW, the identity schemes you've had experience with roughly correspond to 'method's, although they aren't 'decentralized' in the DID sense.


That sounds very much like one of those solutions that is so complicated that it covers everything in theory, and in practice everyone just ignores it and picks some other standard (or 3...) that is actually narrow enough to cleanly implement and gets the job done.


It reminds me of openid 1, with the ridiculous URLs that regular people were supposed to remember.

openid died, and openid 'connect' is just a set of standardised claims for oauth.


I’m genuinely curious as to why discussions on identity inevitably also involve claims about that identity. For authentication purposes, all that’s necessary is for a user to prove they are in control of an identifier by presenting some credential, like a nonce signed with a private key. Yes, one can share data in-band during the authentication process (as part of the certificate, for example), but it’s not necessary, and verifiable claims can be shared via a myriad of other mechanisms like API calls. So why do we spend enormous amounts of resources on trying to merge authentication and data sharing?
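For what it's worth, the challenge-response shape described here is simple. A stdlib-only sketch, substituting an HMAC over a shared secret where a real deployment would use an asymmetric signature (all names illustrative):

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    # Verifier issues a fresh random nonce per attempt, so a captured
    # response can't be replayed later.
    return secrets.token_bytes(32)

def respond(secret: bytes, nonce: bytes) -> bytes:
    # Holder proves possession of the secret without revealing it.
    return hmac.new(secret, nonce, hashlib.sha256).digest()

def verify(secret: bytes, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)
nonce = make_challenge()
assert verify(key, nonce, respond(key, nonce))
assert not verify(key, make_challenge(), respond(key, nonce))  # stale nonce fails
```

Note this is the whole authentication step; everything beyond it (claims, attestations) is data sharing bolted on top.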


Because identity and credentials aren’t the same. A credential tells a system that “I have the correct secret” and an identity tells it “this is who I am”.

Who you are is important in understanding what you can and can’t do in a system. For example, an identity might say “this person is a US citizen” and then all systems that can’t allow US citizens will respond appropriately and block your access.

Credentials are often shared, identities should never be shared. Credentials will often grant you access to a specific identity.

The issue with DID is that there isn’t a way to differentiate whether a credential holder matches the identity those credentials grant access to. The nice thing is that it allows for a set of attestations to be associated with your identity that allow you to essentially be verified once instead of per-system. You could have one financial company go through the KYC process for you and others could leverage their attestations about who you are.

You also don’t have to give control over your identity to a third party like Google, Apple, Meta, or Microsoft. I appreciate that since I’m leaving the Apple ecosystem and have way too many accounts set up which use Apple for login. It’s not fun to try and change that and recover my access to those sites.


You've described the identity of a person, which is not the only or the most common use for credentials.

Consider a public/private keypair, where the private key is the credential, the public key is the "identity", but that public key can grant access to any number of accounts despite not being anything other than a randomly generated key.


You too?

I mean, the data are immediately stale! And there is no persistent subscription to update the data to comply with GDPR. It's crazy.


What I get from your explanation: it’s a new way to deal with cases where one proves ownership of something else (but in a more indirect way than exposing or accessing the thing itself)

That reminded me of proof of domain ownership, where we publish an arbitrary value (say, a TXT record) in our domain's DNS records to prove it’s ours.
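That domain-ownership flow has three steps: the verifier hands you a random token, you publish it somewhere only the owner controls, and the verifier checks for it. A toy sketch with a dict standing in for DNS TXT records (a real flow queries DNS; all names illustrative):

```python
import secrets

def issue_challenge() -> str:
    # Verifier generates a token the claimed owner must publish.
    return secrets.token_hex(16)

def check_ownership(txt_records: dict, domain: str, token: str) -> bool:
    # Owner proves control by publishing the token under the domain;
    # here a dict lookup stands in for a DNS TXT query.
    return token in txt_records.get(domain, [])

token = issue_challenge()
dns = {"example.com": [token]}  # the owner publishes the token
assert check_ownership(dns, "example.com", token)
assert not check_ownership(dns, "other.com", token)
```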

Sadly I also don’t have enough imagination to see any practical use for that. It feels like an abstraction above every other system we have today, except any provider would still need to support both this and the underlying actual protocol…


> It feels like an abstraction above every other system we have today, except any provider would still need to support both this and the underlying actual protocol

It is an abstraction. As to the burden of 'providers', when one reads about the principles of DID design one finds a deliberate bias toward the identity side (person, thing, etc.): All the subject needs is their DID and to (somehow) verify that it represents their identity. Whatever backend providers inflict upon themselves to deal with that is their problem. This mentality is intended to make coping with identity easy for the 'I' (identity) party.


What problem does adding a common “did:” prefix to these identifiers solve over plain old URIs?

Or perhaps my question is more general. What problem does this particular common standard address that individually standardised uri-schemes would not?


What advantages does DID have over something like OpenID, which is actually decentralised, without a central registry?


This is OpenID's privacy policy: https://openid.net/foundation/members/privacy_policy

They will turn your information over to the authorities when requested. How is that decentralised?


It's decentralised because you can pick any other provider or even host your own (which I did, until OpenID declined in popularity and wasn't accepted anywhere any more).


OpenID is a standard. Openid.net is one provider not the only provider. You are free to use another provider.


And so it is as clear as is the Summer's sun.


The method part is only necessary in the case of anonymous, or untrusted, access. If the distributed access points are trusted, known to each other and verifiable, the method part becomes irrelevant because it can be addressed in the body of the message and the willingness of the distant end to process, or not, such message.

As somebody interested in solving for decentralization and previously unaware of the W3C DID effort I formed my own proven solution to this problem 3 years ago. This was one of the less challenging and trivial aspects of the problem space.


I'm gonna bet just from the requirements list quoted in that post that the implementations are supposed to be blockchains.


Spot on. The list at https://www.w3.org/TR/did-spec-registries/#did-methods tells you everyone who hopes to cash in.

Basically it's like a URN, but every sketchy blockchain startup gets its own namespace.


I think it speaks extreme volumes that the "methods" of "did" and "com" were both proposed by no-name crypto organizations; "cosmos" seems to be proposed by one guy with a template website maybe unrelated to the relatively major Cosmos blockchain (they're fighting amongst themselves lol); "ens" was proposed by some organization with no website; "evan" was picked up by literally some guy named Evan.

It's not just that they're crypto-related; it's not even the major players in the crypto space. There are physical organizations you can point to in the crypto space, and if they hopped in and said "yeah this is cool" then that's something. 80% of these organizations have nothing to them. They look like they were formed overnight as this proposal was going through its development.

The only seemingly legitimate proposal in that list, not an obvious planting-my-flag-in-the-ground, is: Baidu proposes "ccp". Lol. Looks like Workday is there proposing "work" as well, that's... something.

I need to read up more on the spec, but that list alone is an extreme embarrassment for W3. Google & Mozilla bring up a fantastic point: what are these things actually going to be used for, not hypothetically but in reality, what is the use case? It's very clear that the Web2 players don't have a good answer for that, and even the successful Web3 players don't either. Is the best response W3 has actually a bunch of nobodies who see something that kind of resembles (but not really) a new DNS, and want to plant their flag in case it gets big?


When we were designing the EU DCC we explicitly chose not to go down the path of the then-nascent W3C verifiable credentials using DIDs. There was a hard lobby from the blockchain bullshitters but they couldn't explain things without a lot of hand waving and technobabble so :)

There's a reason why our spec (EU-DCC) is a global standard: we build on proven technology, we're explicit, and we're not open to allowing anyone and their dog to become a "trust anchor". That trust is earned; there are standards you need to follow, and keep following, to earn it.

DIDs are a wild west and clearly a not-too-unsubtle attempt by the crypto bros to sneak a cash-cow into standards.

The spec reads like it's written by a group of hyperactive kids in a sugar rush.


> we're not open to allow anyone and their dog to become a "trust anchor"

Or, to put it more simply, you designed another centralised identity system.


Kudos, and thank you.


Don't forget the Korean Ministry of the Interior, who are apparently using a two-line Markdown file as their website and a random Gmail address as their only method of contact.

For an identity verification standard, you'd think they'd demand the authors have more verifiable identities.


I understand that the list presented there is more so early-stage proposals; it's not like they've been registered to manage that DID method.

That being said, it speaks some amount to the professionalism of the authors and supporters of this spec. The sane people ask: what are the real, tangible use-cases? There's no answer. Ok, well, short of that: are there at least real, tangible organizations who will be building on top of this?

Not only is the answer weak, but the meeting notes from the DID-WG indicate a high level of aversion to any known, named authority participating in a significant capacity [1]. They were rather concerned about Mastercard's proposed "id" DID method, for privacy/centralization reasons; maybe those are valid, but...

> Markus Sabadello: … Even if we don’t apply it, since in the past we haven’t, even then I think this registration should not be accepted as-is, because it’s incomplete..

> Manu Sporny: Just about every DID Method is incomplete… not a good criteria..

It really comes off as a bunch of people who are mad at the centralization of big tech, want to change it, but lack focus & expertise on how to implement that change. And they managed to drag W3C/TBL down to their level.

[1] https://www.w3.org/2019/did-wg/Meetings/Minutes/2022-01-11-d...


If you have arguments against DIDs, then raise them here for rational discussion from all POVs, which is something HN is great at.

'"evan" was picked up by literally some guy named Evan' is not an argument. It's also not factual. The Evan DID method spec [1] was written by Sebastian Wolfram and Philip Kaiser. It is for the Evan Network, which is a blockchain attempting to provide a usable decentralized market infrastructure.

"that list alone is an extreme embarrassment for W3". First, a minor point. w3.org is the domain name, which has some minor cachet because it is a two-letter domain. The organization is the World Wide Web Consortium, W3C. Second, hyperbole is present on HN at times, but should be avoided whenever possible because it lowers the SNR of your comment, the thread, and HN overall. You could have phrased it without the hyperbole as "the status and standing of the objectors may be an impediment to future DID adoption". More accurate, and signal rather than noise. Of course, if you had done any significant standards work you would likely know that objections like these are just as often motivated by market interests. I remember asking an OASIS long-time member about a rep on a standards committee I was on. The rep was from one big company whose name is a household word, and he seemed to do nothing but raise objections and build roadblocks. The long-time member told me he was known as that company's hatchet man. When they wanted to kill a spec before it got voted on they'd send him to join the TC and torpedo it from within. The work quality of the TC did not matter in what he raised, only his company's market drivers.

"I need to read up more on the spec". I would suggest you follow my approach: I do not criticize something I have not read end to end twice. The Evan statement above makes me think you didn't even read it once.

[1] https://github.com/evannetwork/evan.network-DID-method-speci...


> It is for the Evan Network, which is a blockchain attempting to provide a usable decentralized market infrastructure.

How's that going for them?


Ok; that's my mistake on the Evan one.

But here's my counter: when it comes to web standards like this, I am fully prepared and willing to delegate my opinion to Mozilla (and, to a lesser but still positive degree, Google). The W3 (ok, you want to be pedantic: W3C; talk about SNR) additionally has a ton of other extremely mature member organizations: Apple, Amazon, Meta, Microsoft, Cloudflare. If even one of these organizations had their name anywhere on this spec I'd give it the time of day. I work at one of them; I've worked for two in the past; I know the people, they're extremely smart and well-intentioned.

I don't agree with your statement that it's worth discounting the professionalism and expertise of some member organizations just because you had one third-hand experience, in an entirely unrelated organization, of some no-name member roadblocking a proposal because of... market interests. Market interests?! Of course that would be a concern! Proposals need to serve the members of the org. The members of the org are, mostly, for-profit organizations! I'm blown away at the dissonance it takes to complain about non-professional SNR, then immediately follow up with hearsay and supposition.

But, ok, maybe not Web2 Big Tech. Maybe Web3 big tech? Where's the Ethereum Foundation? They're a W3C member org! Block/Square? They're getting very deep into crypto right now; also a member org; silent. Coinbase? Just an exchange, but a member. I mean, the list keeps going on.

I've read the spec. I would not claim to fully understand it, but like Mozilla, it feels abstract and very short on even high-level use-cases. I also think TBL's response signals that's by design; and I think that's a weak response, because ultimately if the organizations who do develop tangible use-cases turn out to be fly-by-night four years from now, the spec will become an unnecessary vestige of the web, like so many before it, while the organizations who actually put in the work and deliver value to Real Humans ignore it (or worse, are forced to keep the dying-but-not-dead vestige on life support). (It's not the W3C/TBL who pays the six-figure engineer salaries that maintain this stuff; it's their member orgs, and not even the ones who proposed and approved this.)

I also feel, weakly but perhaps presciently, that while the W3C is relatively egalitarian, we can't ignore the politics. This was GOOGLE and MOZILLA who raised concerns (not to mention one anonymous org). TBL can object, and W3C can set the spec, but at the end of the day it will become a vestige even if the people involved with this spec do their best to make it happen, if Big Tech isn't on board. I'm not, then, asserting that fighting big tech is never worthwhile; I'm just asserting that the W3C probably isn't the best abstraction layer to fight the fight.

So yes: I will criticize. And I'll hyperbolize: the fact that the W3C has hundreds of member organizations, from implementors to thinkers to for-profits and non-profits, and they're willing to overrule real concerns from multiple established and respected members in-favor-of a grocery list of flag-planters, half of which DON'T EVEN HAVE LEGITIMATE WEBSITES, is an embarrassment.

But, fortunately, probably, one that everyone will soon forget about.


> Apple, Amazon, Meta, Microsoft, Cloudflare, if even one of these organizations had their name anywhere on this spec I'd give it the time of day. I work at one of them; I've worked for two in the past; I know the people, they're extremely smart and well-intentioned.

Microsoft spent years helping to develop the ION DID method. https://techcommunity.microsoft.com/t5/identity-standards-bl...

Not to mention incorporating DIDs and Verifiable Credentials into Azure AD (which services 95% of the Fortune 1000): https://docs.microsoft.com/en-us/azure/active-directory/veri...

> Block/Square? They're getting very deep into crypto right now; also a member org; silent.

This would be the exact opposite of silence: https://twitter.com/csuwildcat/status/1542598105044078595?t=...

Still feels like your intent is to rag on something you've taken little to no time to look into.


> "evan" was picked up by literally some guy named Evan.

Easy to check, and that appears not to be true at all. Authors: Sebastian Wolfram, Philip Kaiser


Fair; I just scanned the list and saw it was associated with a company named Evan GmbH; I didn't, and still don't, feel it was worthwhile to investigate further; but fine, they're a no-name crypto org with a template website promising to provide the technical and legal framework for the future of the decentralized market economy based on blockchain technology. I'm really excited about their promise that "Digital Identities turn goods into active and autonomous participants in business relationships." And Gartner named them a "Cool Business" in 2020, watch out for these guys I'm sure they'll still be around in four years.


> plant their flag in case it gets big?

You have just summarised crypto.


Do they explain anywhere whether the browser is expected to keep a bunch of multi-gigabyte blockchains on my disk?

Or, is it simply that Google, Mozilla and whoever else have to serve verification requests for their users?

Or is the whole joke in that none of this is figured out?


If I had to guess, I'd say it's the third one.

Given that Mozilla and Google have already publicly objected to this proposal, I don't expect them to implement it. The W3C's word is not the law; no one is obligated to implement every specification they put forth.


It's simply a URI standard for crypto signatures. It provides no function except an address to something else. That's why Google is asking for a few "working" integrations to prove the theory.

That way, when someone goes to implement it and figures out the standard is missing something they critically need, they can modify the standard before it becomes a 1.0 standard.


That's never been a requirement. There exists SPV to verify cryptographic signatures without downloading a blockchain.


Why are they calling these 'methods' ? These names are not verbs, they're nouns. Why not call them 'providers' or similar?


The document "Use Cases and Requirements for Decentralized Identifiers" [1] lays out the following summary of what they are trying to achieve:

"1. there should be no central issuing agency;

2. the identifier should be inherently persistent, not requiring the continued operation of an underlying organization;

3. it should be possible to prove control of the identifier cryptographically;

4. it should be possible to discover metadata about the identifier."

Additional capabilities got tacked on during discussions, and I think are handled in different specs, such as DID-Messaging, but at its core the above are the primary requirements.

[1] https://www.w3.org/TR/did-use-cases/


(Disclaimer - I work in this space, but these words are my own).

My understanding is DIDs are a unique identifier. There are a few methods that can be used regarding the construction of the identifier. It could be a unique key (did:key - https://w3c-ccg.github.io/did-method-key/). It could be using web infrastructure (did:web - https://w3c-ccg.github.io/did-method-web/). It could be using blockchain infrastructure (did:ion).

Whatever it is, it becomes an identifier used to receive credentials and send messages to. For example, your digital wallet can have a DID which can be used to store credentials. Your digital wallet can have many DIDs which can be useful to avoid correlation of identities.

The credentials (and the identities they represent) themselves are normally bundled into things like Verifiable Credentials (https://www.w3.org/TR/vc-data-model/) which have to be issued to something - like a DID.
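To make the method idea concrete, the did:web method resolves identifiers to plain HTTPS URLs. A simplified sketch of that mapping (ports and percent-encoding omitted; not production code):

```python
def did_web_to_url(did: str) -> str:
    # did:web:example.com            -> https://example.com/.well-known/did.json
    # did:web:example.com:user:alice -> https://example.com/user/alice/did.json
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError(f"not a did:web identifier: {did!r}")
    parts = did[len(prefix):].split(":")
    if len(parts) == 1:
        # Bare domain: document lives at the well-known location.
        return f"https://{parts[0]}/.well-known/did.json"
    # Extra colon-separated segments become path components.
    return f"https://{parts[0]}/{'/'.join(parts[1:])}/did.json"
```

Other methods swap this lookup for a key derivation (did:key) or a ledger query (did:ion), but the resolver-facing contract is the same: identifier in, DID document out.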


On second reading with that background knowledge, the crypto pedigree reveals itself: "decentralized", "distributed", "independently of any centralized registry", "distributed ledger", "non-registry based", etc...

It all makes sense now! It's yet another attempt at making Web 3.0 happen.

Sigh...


9 billion buzzwords for basically "url-like string that starts with a vendor prefix"


I'd argue that of those, "distributed ledger" is the only real red-flaggy one -- and even then, only because of its association with blockchain. I think when engineering web technologies, we should hope to find a lot of talk about decentralized, distributed stuff independent of central registries.


Yes.

There are did:peer: and did:git:, so I don't know why some people here are going on about blockchain scams again.


It's common courtesy not to interject in a technical discussion with identity politics.


If there was ever any mention of identity politics in jiggawatts' comment, it has now been removed.


I think reflexco meant this as a joke….


No, this is what identity politics is, reducing technical matters to which camp you're in ("your post gives me crypto vibes, scam!!"). Correct me if I'm wrong but I believe this doesn't belong on HN.


No. Broad technical camps are not identity politics, and they're a necessary simplification for making judgements about the massive amount of technical novelty being produced today. I am not going to investigate each crypto scam presented to me for all of its technical details; there are too many of them. It's enough to know it's cryptocurrency-adjacent, and therefore, probably a scam.


Honestly pretty funny that you are so critical of identity politics yet so uninformed about what that term actually means


Why sigh?

Why are Bitcoin maxis and HN Web 2.0 people so intent on keeping everyone from advancing to the next phases of the Web?

Do you like centralized VC-funded “cloud”-hosted startups incubated in Silicon Valley that get gobbled up by big tech or dumped on the public? Do you like the extreme power inequality between those who run these systems and the public? Do you think the best our systems can do is extract rents at the behest of Wall Street? People who bought the shares at $100 don’t want them to drop to $50, so Uber will take 50% of all drivers’ paychecks, while a decentralized autonomous network wouldn’t. Selling tokens is a one-time deal that makes the founders rich, and then the network belongs to the participants.

What happened to the open source, hacker ethos? You know, counterculture, hacking on something, or at the very least not buying into the corporate morass? What happened to cypherpunks and people who wrote M$ and worked on Free Software alternatives to Big Tech?

Once upon a time America Online, Compuserve and Prodigy were today’s Google, Facebook etc. People left for the open, decentralized protocols like HTTP as soon as good enough clients (browsers) appeared. Web 2.0 companies like FB or Google could never even have gotten started if they needed the permission of AOL or MSN … the permissionless nature of Web 1.0 made it possible.

Once upon a time, long distance calls cost $3 a minute. Then the decentralized file sharing network Kazaa guys made Skype, and it became so widespread that VOIP dropped the cost to zero. We can all videoconference now and the telcos are reduced to providing dumb pipes.

So why if Web 1.0 broke barriers and allowed anyone to write some HTML and serve via HTTP a website to the whole world … why is it sooooo terrible that in Web 3.0 people can write a smart contract and deploy it on some EVM compatible blockchain making the rules or payments instantly accessible to people around the world who control their own keys? Do you really think this won’t have any real applications?


Because if you actually look at what cryptocurrency is being used for, it’s overwhelmingly:

- Scams

- Risky financial structures that we regulated out of existence because they were risky and unregulated

- Money laundering

If there are real applications of the technology, they would’ve popped up by now.

Just look at the whole space of cryptocurrency lending. Regulations exist for good reason: we have stress tests https://www.federalreserve.gov/publications/large-bank-capit..., consumer protections, all kinds of safeguards, so when you put money in a bank account you don’t have to worry that somebody’s gonna run with it.

Web 3.0 is a buzzword-filled collective hallucination. People keep shouting the buzzwords, but still nobody knows what it’s supposed to mean in any concrete way. NFTs are scams and still haven’t found a real use case that’s not a ponzi, crypto lending is 2008 capital structures but turned up to 11, DAOs are useless because courts and corporate governance are things that exist; I can go on and on.

If you have a concrete proposal of how this magical Web 3.0 future is supposed to be better I’m all ears, but where I’m standing it’s all ponzi, scams, shaky capital structuring, and criminal enterprises.


Love how you think your money is safe.

You are losing 10-15% a year to inflation. Nobody has to literally steal it when they can dilute it.


Straight to criticizing something else. lol


Because you live in a privileged bubble.

You have no idea how serious the problems Bitcoin is trying to solve are.

One day you will be forced to understand it though. Hopefully it doesn't result in you losing all your savings, like it has happened to billions of people through history....


> "Do you really think this won’t have any real applications?"

So where are they? The consensus isn't denying it on principle but rather wondering what it's actually useful for. It's strange to see all the claims of opportunities and problems to solve, yet nothing seems to get produced.


Many of them are here, fully documented and explained to laypeople, with links to GitHub, feel free to take the open source software and use them: https://intercoin.org/applications

Is that not good enough for now? What more would you like?


I am philosophically opposed to "Web 3.0" because I am philosophically opposed to cryptocurrency and do not believe it decentralizes anything.

Regardless of the follies of Wall Street, blockchains and smart contracts do not fix them. There is no particular reason why a DAO would not also cut worker wages to benefit the tokenholder class - it's the exact same structure as a corporation, just without the pesky regulation getting in the way. There is nothing about decentralized finance that guarantees that the employees are also tokenholders any more than regular ol' brick-and-mortar capitalism guarantees that employees are also shareholders. If you want a worker's cooperative, you can start one right now without needing to buy Ethereum and develop a fragile smart contract.

DeFi isn't destroying power structures, nor is it making them less rigid. It's just a changing of the guard, from corporations and investors to anonymous "whales" and DAOs. This isn't actually decentralizing anything, it's just obfuscating how much the system has been corrupted.

As for open source and Free Software, well... their political opinions outside of hacker ethos are all over the map. The space is vaguely libertarian and vaguely leftist, which means there's plenty of people in the space who don't want more unregulated capitalism.

Your example about long distance calls is also wrong. Or, at least, missing some context. Skype was actually kind of late to the "cheap long-distance" party; the government had already done the hard work of breaking up the phone monopoly and ensuring that companies could place and terminate calls on other people's networks. This is because "permissionless" is not a capability, it is a policy. Even ostensibly permissionless blockchains could effectively become permissioned if miners and exchanges colluded in a way that made economic sense. I know this can happen because it's exactly the same thing that happened in Web 2.0.


It might have something to do with all the cryptobros pushing get-rich-quick schemes. Crypto enthusiasts ignore how regulations existed for good reasons before regulatory capture made a mess of things. Not everyone likes the idea of an anarcho-capitalist future.

And I’m not even talking about the outright scamming and the fact that most crypto’s primary use case is criminal. Or the environmental issues of spending energy we can’t spare on something we could solve so many other ways.


I personally take a hesitant approach to crypto/blockchain technology. I'm open to using it where it's legitimately better than other approaches, but for the vast, vast majority of applications traditional methods are always going to be better than shoe-horned decentralization.

It's very unfortunate that the grifters have given the technology such a bad name when, like any technology, it has applications it excels in and others it doesn't. We're still definitely in the phase of working out what, if anything, blockchain is better (than centralised implementations) for. And it sucks that that search is being negatively impacted by all the grifters.

In the future I wouldn't be surprised if we saw 99.99% of blockchain stuff dead, but the small percentage that survive could disrupt some industries (I'm not convinced finance is one of those industries though lol).


> like any technology, it has applications it excels in

Name five.


1. Permissionless, censorship-resistant global money transfer

2. Smart contracts

3. Append-only logs synchronised between mutually distrusting parties

4. Decentralised identities

5. Microtransactions for online games and to replace web advertising


1. Except cryptocurrencies aren't any good for that, because the transaction costs are too high, and the value of cryptocurrencies too volatile. Cryptocurrencies are not a medium of exchange.

2. Now, what's a valid use-case for a smart contract, and please explain how it functions if there's a bug in the contract?

3. Maybe. You'll need to provide a more concrete use-case. Also, you have the outside-world problem (you know the data hasn't been altered, but you don't necessarily know where it comes from).

4. All you need for decentralized identities is a public key. (Though if you want your identity to be long-lived, you'll also have to have a system of secure key rotation, and the most straightforward system is blockchain-ish in that it involves a signed append-only log. But it doesn't need a global trustless ledger.)

5. See 1, except worse, because the transaction costs dwarf the actual payment.
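For what it's worth, the signed append-only log mentioned in point 4 is easy to sketch. Here's a stdlib-only Python toy where each log entry names the next key and is authenticated by the previous one, so a verifier who trusts the genesis key can walk the chain with no global ledger. HMAC stands in for real asymmetric signatures purely to keep this self-contained (a real system would use e.g. Ed25519, so the verifier holds only public keys); all names here are mine, not from any spec.

```python
import hashlib, hmac, os

def sign(key: bytes, msg: bytes) -> bytes:
    # HMAC as a stand-in for an asymmetric signature (sketch only).
    return hmac.new(key, msg, hashlib.sha256).digest()

def rotate(log: list, old_key: bytes, new_key: bytes) -> None:
    # Each entry commits to the previous entry and names the next key,
    # authenticated by the *previous* key.
    prev_hash = hashlib.sha256(repr(log[-1]).encode()).digest() if log else b""
    entry = {"new_key": new_key, "prev": prev_hash}
    entry["sig"] = sign(old_key, new_key + prev_hash)
    log.append(entry)

def verify(log: list, genesis_key: bytes) -> bool:
    # Walk the chain from the trusted genesis key forward.
    key, prev_hash = genesis_key, b""
    for entry in log:
        expected = sign(key, entry["new_key"] + entry["prev"])
        if entry["prev"] != prev_hash or not hmac.compare_digest(entry["sig"], expected):
            return False
        prev_hash = hashlib.sha256(repr(entry).encode()).digest()
        key = entry["new_key"]
    return True

# Demo: two rotations, then verification from the genesis key.
log = []
k0, k1, k2 = os.urandom(32), os.urandom(32), os.urandom(32)
rotate(log, k0, k1)   # k0 hands off to k1
rotate(log, k1, k2)   # k1 hands off to k2
assert verify(log, k0)
```

Tampering with any entry (say, swapping in a different key) breaks the signature check for that entry, which is the whole point of the construction.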


1. If you're sending a portion of your monthly wages as a remittance to your family, spending a dollar[1] isn't too much.

2. A smart contract allows decentralised organisations to function, with democratic voting and transparency. (That's not appropriate or necessary for every organisation, but it can be an improvement on one person hosting a server and saying "Trust me"). If there's a bug in the contract, you have to vote to change the contract. Traditional contracts, businesses, and even countries fail all the time, but we haven't given up on them as concepts.

3. For a concrete use-case, I offer the example of blockchain technology being used to make the fishing industry supply chain more transparent.[3] It's true that someone could enter fake information onto the blockchain, but they could also fake signatures on paperwork, so a system can still be useful even if it doesn't prevent all possible attacks.

4. If the ledger isn't trustless, then someone is controlling it, so your identities aren't really decentralised.

5. There are better currencies than BTC if transaction costs are the main concern. The equivalent number for BCH is half a cent.[5]

[1] https://bitinfocharts.com/comparison/bitcoin-transactionfees...

[3] https://www.reutersevents.com/sustainability/using-blockchai...

[5] https://bitinfocharts.com/comparison/bitcoin%20cash-transact...


> A smart contract allows decentralised organisations to function, with democratic voting and transparency.

A smart contract is neither smart, nor a contract. It's a program, written in an esoteric language, and running in the world's most inefficient VM.

It's so bad and overcomplicated that "smart contract" authors themselves routinely make mistakes in code equivalent to the most basic of actual contracts. And since there's no avenue of recourse, these mistakes are irreversible.

"Smart contracts" also require the user to pay for any meaningful action.

As for "transparency", there's no transparency when something is enforced by code very few can read and understand (compared to actual contracts that can be read by humans).

As for "democracy", there's nothing democratic about "who has the most money has the most votes".

> Traditional contracts, businesses, and even countries fail all the time, but we haven't give up on them as concepts

Because we have thousands of years of history teaching us how to deal with those, and guess what, we've come up with multiple things like:

- regulations

- contract clauses dealing with failure

- avenues of recourse

- various methods of enforcement

Crypto bros pretend that these things are unnecessary, but then immediately turn to courts to sue scammers, or cry in cryptoforums when a "smart contract" bug wipes their wallets out.


Sorry, but you sound like tech skeptics in every generation ever, saying “the Dewey Decimal system works perfectly well, why do we need computers just to find a book”? (Yes, I have heard this exact objection raised by radio hosts to early computer pioneers who tried to explain why computers will become useful for regular people.)

Email became useful and replaced the post office

Web 1.0 became useful and replaced TV, radio, magazines

Web 2.0 became useful and allowed people to communicate but still hasn’t been truly decentralized

What makes you think that Web3 replacing trusted gatekeepers is not useful? You think “just trust me” is the best system we can possibly have for writing code that does some business logic?

For me it’s simple: if there is something that’s very valuable (some NFT, some role, some election, some large balance of USDT) then I prefer that my customers custody their own keys and deal with that themselves. Less liability for me. Rather than having a guy with keys to the database log in and potentially change the result of an election, and having to track down logs and deal with lawsuits etc. I just want smart contracts to deal with it, and each participant can only take the actions they are allowed to take - no exceptions. No central point of failure for security. No need for audits of transactions by auditors who can also be corrupted.

How do we make sure that smart contracts are correct? Audits, battle testing, and with Cardano we even have provable correctness. UniSwap likely has no exploitable bugs, for instance, or they would have been found. Every instance of UniSwap AMMs comes out of the same factory. The end result is far more reliable than any code that runs on only one machine by a “trust me” corp.

Sorry buddy, you can shill your centralized “trust me” all you want but you sound like Peter Schiff and his gold. You just don’t get it.

1. No liability for transactions, only for code

2. Open source infrastructure

3. No central entities who can corrupt the system in unlimited ways

4. People can only do what is allowed, no matter what

5. Code operates regardless of whether the central entity is around in 20-30 years

6. Different incentives (selling tokens is far more user-friendly than selling shares to a parasitic investor class that will cause you to extract rents forever and introduce dark patterns and lockin at the expense of the public).

7. Interoperability — on-chain data can be used for other smart contracts and any websites can read the data.

8. Global interoperability, no need to rely on a patchwork of currencies and money transmission legs and banks that Stripe takes care of for you. USDC is an ERC20 token and you write code, not connecting to a billion little APIs. Similarly to HTTP letting you go worldwide vs what Twilio had to do for you, or negotiating syndication by radio stations.

Of course I think blockchain is a first-gen technology but it enables this and a lot more !


Here's the problem: people don't care about even one of the eight things you listed there. None of these things matter to the common person, and they certainly don't matter to the preeminent payment infrastructure.

Nobody here is shilling for centralized services, most of us are veterans of decentralized tech giving you warnings. Many projects have encountered these same issues, and have died because they have no purpose. Blockchains are little more than nerd porn; the average banker isn't going to look at a trustless infrastructure and all of a sudden "get it". That's one of many insurmountable problems that cryptocurrency faces, and it has been successfully blocking adoption of it in the real world for more than 10 years. You can't simply shrug off decades of decentralized failure without applying the lessons you learned from watching them fall. Unfortunately, every cryptocurrency I've found is tone-deaf to these concerns, and prefers to replace genuine conversation with marketing crap.


The only concrete use case you've offered “is unlikely to deliver substantial gains to the industry when compared to alternatives” such as shared databases, which don't require any costly consensus algorithm.

https://doi.org/10.1016/j.techsoc.2020.101298


>1. Permissionless, censorship-resistant global money transfer

>5. Microtransactions for online games and to replace web advertising

How are money transfer and microtransactions different?


They are (at least) two separate use cases, even though they are both examples of sending money. (You could equally say that they are all examples of sending data).

1. Some people want to be able to send large amounts of money internationally to their family in a country which has currency controls and "official" exchange rates. Others want to be able to send funds to organisations that have been banned by traditional money transmitters, such as Wikileaks, or protest groups, or adult content, or cannabis.

5. Separate groups of people don't have a problem with their government's fiscal or censorship policies, but simply want to be able to buy an emote or a skin in an online game, or to listen to a piece of music or read an article without being tracked around the web or needing to wire 50 cents from their bank in Mongolia to the service provider's bank in Cyprus.


1. The problem there is exactly why the space is going to remain a reserve for fundamentally illegal activity. Arguably it shouldn't be. I get that. That still doesn't get me any closer to me suggesting anyone's grandma hop into Web3.

5. So you're still being tracked, because there isn't a company around that isn't monetizing viewership data. Also, if you're fine with fiscal policies, why are you hesitant to wire? Sounds to me like you're dissatisfied with your host country's fiscal controls, or service provider's offerings.

Look, control over financial networks is one of the most powerful soft control mechanisms on the planet. You will not work around that. Government is slow to catch up, but I assure you, these folks aren't stupid anywhere close to 100% of the time. The fact that regulation is crystallizing around crypto as fast as it is, without needing the multi-century learning experience trad-fi went through, is evidence enough of that.

If it comes down to "a bunch of nerds created an unregulable financial system" I can pretty much guarantee it'll get gobbled by trad-fi and re-centralized.

In fact, anyone could roll their own financial networks without using banks/Visa/you name it. No one has because we've made laws that specifically increase the barrier to entry because finance is the spine that provides support for all manner of economic activity, which includes the illegal stuff, and Government is putatively in the business of making sure that the illegal stuff doesn't see the light of day.

I just do not see the compelling argument that'll carry weight to switch someone from "financial system that makes crime hard" to "financial system that makes crime easy" and feel alright about it. You have to already accept that crime is just an endemic human phenomenon, and this is just a rebalancing of the spectrum.

Given you've got much more efficient implementations of your other use cases available, this is the sticking point for me. None of the people I've spoken to, even after I've laid out what Web3 really is with the most charitable framing, gets past that.

If I can't convince people it's a good idea with full disclosure in effect, I'm not sure it's something worth pushing forward.


I don't really think there are five applications it excels in, but there might be in the future..?

Like I said, we're still working out what it's good for. I've seen promising applications but nothing I'd say is obviously better than traditional technology. To dismiss the entire technology because of (admittedly a lot of) grifters is premature in my opinion.


You said it has applications it excels at. Turns out it's "maybe in the future" and "we're still figuring it out".

So. No applications it excels at.

> To dismiss the entire technology because of (admittedly a lot of) grifters is premature in my opinion.

It's been 13 years. When will dismissing it become mature? In another 13 years?

Or maybe the tech is just bad.


So Web2 has gotten such a bad name due to centralized bullshit that entire democracies are up in arms. And the Zucks of the world just say “Calm down. Breathe. We hear you” and proceed to do the very thing people have been mad about (Libra? Beacon?)

https://en.wikipedia.org/wiki/Facebook_Beacon

Zuck correctly described the situation early on: “I don’t know. They ‘trust me’. Dumb f#%ks”. And it’s still true today and you want to bury any alternative to that system.


>Crypto-enthusiasts ignore how regulations existed for good reasons before regulatory capture made a mess of things.

What way do you propose out of regulatory capture?


Transparency laws, stricter regulation on moving between regulatory agencies and regulated industries, in general, better democratic accountability. All of these are imperfect, but it's also the case that cryptocurrency doesn't make any of it better.


>stricter regulation on moving between regulatory agencies and regulated industries

You seem to assume the existence of a competent and non-corrupt metaregulator (some form of supervisory body that would "regulate the regulators" and somehow prevent "revolving door" scenarios).

- If it exists, why was long-term regulatory capture possible in the first place?

- If it doesn't, how would we go about instituting one?

We're on Hacker News. Exit wishful thinking, enter systems thinking.

- Feedback between regulatory agency and regulated industry: continuous.

- Feedback between regulatory agency and supervisory body: continuous.

- Feedback between supervisory body and sovereign (=the general public getting shafted by the regulatory capture): discrete, and of appallingly low resolution.

I'm told that in the world's dominant democracy, where most of the ideas that we're discussing originate, the sovereign is throttled to expressing its interest in the form of a binary decision once every ~35000 hours.

So, the boffins at the revolving door email each other and call each other on the phone all the time, but the public can only talk to the legislature at the grand rate of 1 bit per 4 years? In that case, I'm prone to applying the concept of "regulatory capture" to any and all regulation that nominally serve the public interest. They simply don't have the bandwidth to establish what the public interest is.

Even at Bitcoin's "low" speed of 7 transactions per second, on-chain voting would still support a much faster democratic process. That's why people are opposed to it. For now, people use cryptos to vote mostly on inconsequential things. That's while the quirks are being ironed out. Some crypto bros who got in for teh gainz got shafted. So what. Maybe in a fairer economic system a fool and his money would be parted even more easily.

Currently, crypto does not work... except as a public "exit"/"no confidence" vote towards the methods through which industry is organized and regulation is instituted. For one to devote time and effort to this emergent form of economic organization, no matter how uncertain its realities might be, is simply to refuse to take part in maintaining a status quo that one has had no part in establishing - and to look for alternatives, no matter how tenuous.


I mean.. as I understand it, you read a spec to understand something, but as I kept reading this one I still had absolutely zero idea what it is or is even supposed to be. What problem is it trying to solve? I dislike it, because I immediately assume it cannot be good for me.


I have a similar background, and I also know some of the people active in the DID community, and I spent a couple of years trying to get them to explain to me what problem it solved or show me a working application using the tech.

My take is that it is a) X.509 re-born with different encoding (JSON-LD vs BER or PEM) and b) a scheme to promote use of certain blockchains for a purpose that blockchains don't suit well.


Azure Active Directory is on its way to use DIDs [0]

The forces in place here seem to be:

- distributed ledgers allow a different (decentralized) paradigm for identity management, where users own their identities and service providers authorize and authenticate them through verifiable credentials

- years of blockchains and even more years of web certificates have created processes for handling cryptographic material that service providers supposedly find more secure than "username and password" for managing the identities issuing the verifiable credentials

- in realpolitik, Microsoft (Azure) is expanding in the cloud market by trying to establish a presence in niches (ie: Intel SGX, DIDs) [1]

I understand the overall skepticism about blockchain related technologies, but the intrinsic advantages that I see in them are:

- (for a service provider) having a tamper-proof log of all the auth changes for an identity

- (for a service provider/user) relying on cryptographic signatures allows for a private validation of an identity/claim

- (for a user) provided this is not EEE all over again, a greater degree of choice in how to manage your identity

I do not have as much experience as you do, so maybe there is some wheel-reinventing that I am not aware of :)

0. https://docs.microsoft.com/en-us/azure/active-directory/veri...

1. https://techcommunity.microsoft.com/t5/identity-standards-bl...


As far as I understand it (from skimming through a couple of docs and presentations), DIDs are similar to specs for assertions and/or attributes which are stored in a blockchain which functions as federation metadata datastore and IdP at the same time.


Conceptually, those solutions that you’ve worked with are about account principals and access management. When you deployed a smartcard, the human identity of the person you were assigning an account principal to was established offline, ultimately linked to proof of birth and residence.

Typically your company will validate those credentials to some level for employees. At a minimum, you establish what you need to know for payroll, in other cases you do extensive background investigations. For the public, however, we’re stuck with rudimentary solutions for ID verification (bank/credit accounts, mailing letters, etc) or unreliable and invasive solutions like ID.me.

The idea of things like DID and sovereign identity is that the human has agency and can provide or not provide credentials to establish who they are. That could include a verifiable, signed representation of your birth certificate, a professional license or some other credential. Think of it as a new iteration of 90s “web of trust” concepts.



It's a standardized version of Keybase IDs (which can also support non-key-based identity if you choose)


Poor Keybase...


I'm currently reading Self-Sovereign Identity.

Seemed to me that DIDs are a more general version of blockchain addresses.

Like, you create a DID from a public key, and everyone who handles DID-related stuff can verify that only whoever controls the matching private key is the real owner.
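That matches my reading. A rough stdlib-only sketch of the idea (note: the real did:key method encodes a multicodec-prefixed key in multibase/base58btc; base64url here is just to keep it dependency-free, and the "key" is random bytes standing in for a real Ed25519 public key):

```python
import base64, os

def did_from_pubkey(pubkey: bytes) -> str:
    # Illustrative encoding only -- real did:key uses multibase/multicodec.
    return "did:key:" + base64.urlsafe_b64encode(pubkey).decode().rstrip("=")

pubkey = os.urandom(32)          # stand-in for an Ed25519 public key
did = did_from_pubkey(pubkey)

# The DID is derived deterministically from (and so bound to) the key:
assert did == did_from_pubkey(pubkey)
assert did != did_from_pubkey(os.urandom(32))
```

Ownership is then proven by signing a challenge with the matching private key; the relying party verifies against the key embedded in the DID itself, with no registry lookup at all.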


Identity is nothing more than a contextual set of memories and attributes.


The New Zealand government used DID for their vaccine passport. (Scan to be allowed to enter a building)

They removed a core part of the DID standard by making it expire after 6 months.

I was told they requested the DID standard as it was needed for future projects.


> Something that should be a bit of a warning flag is that I have two decades of identity-related experience but I still have no idea what DID even is.

I'm not sure this is the "flex" you wanted it to be. A cursory look at the specification gave me a pretty good idea what DIDs are supposed to be, and what they're for (and I would only say I know enough identity-related stuff to implement things in my own services, not over two decades' worth). The use cases are relatively easy to understand, and there is a bunch of implementations in the wild as well.

Maybe it would also help to look at some of the proposed DID methods that are more similar to the approach you're used to. While not centralized, maybe DNS is something you're more familiar with, so you can link it together with existing knowledge?

In that case, the specification for the `did:dns` method, using DID together with DNS might be helpful for you: https://danubetech.github.io/did-method-dns/

What exactly is it you don't understand? Maybe your knowledge about centralized identity management is not helping you in this case, but making it harder to understand.
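If it helps, the URI layer itself really is tiny. Here's a rough parser for the did:&lt;method&gt;:&lt;method-specific-id&gt; shape from DID Core (simplified: the real ABNF constrains the method-specific id more tightly than `.+` does, with percent-encoding rules and so on):

```python
import re

# Shape from the DID Core spec: did:<method-name>:<method-specific-id>
# method-name is lowercase letters/digits; the id may itself contain ':'.
DID_RE = re.compile(r"^did:([a-z0-9]+):(.+)$")

def parse_did(did: str):
    m = DID_RE.match(did)
    if not m:
        raise ValueError(f"not a DID: {did!r}")
    return m.group(1), m.group(2)

assert parse_did("did:dns:danubetech.com") == ("dns", "danubetech.com")
assert parse_did("did:web:example.com:user:alice") == ("web", "example.com:user:alice")
```

Everything interesting (and everything contentious in this thread) lives in what the method does with the id after parsing.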


Reading that DNS spec and all I have to say is that I fucking hope I retire before I have to learn this bullshit.


"Mozilla:

    The DID Core spec has not demonstrated any degree of practical interoperability, instead delegating that to a registry of 50+ methods.

    The DID architectural approach appears to encourage divergence rather than convergence & interoperability. The presence of 50+ entries in the registry, without any actual interoperability, seems to imply that there are greater incentives to introduce a new method, than to attempt to interoperate with any one of a number of growing existing methods.

    The lack of restrictions on the registry are allowing methods diametrically opposed to the principles of the group & spec, and methods which are actively globally harmful to sustainability.

    [W]e believe the DID specification may not be fixable (MUST NOT become a Recommendation)."

"The Director concludes that the balance lies in favor of the DID developer community, encouraging it to continue its work and search for consensus on standard DID methods. The objections are overruled."



For those of us who aren't webdevs, what was the final fate of OAuth 2.0?


OAuth 2.0 basically killed generic identity providers, leaving us with a handful of SSO providers you can use.

Also, instead of a single generic OAuth 2.0 library, we had (and still have) separate libraries for the various OAuth SSO providers. This has converged a bit since the initial days, but those initial days were enough to cause harm to the ecosystem.

It also needs a variety of "extensions" you have to add to make it secure, which can differ slightly between SSO providers. (Not sure, but I think some of these "extensions" have been added to the spec retrospectively.)

In conclusion I would say that while OAuth 2.0 is widely used, it was also widely harmful and led to further centralization and to users being more dependent on a few mega corporations.

In this context it has fully failed some of the initial ideas people had about it when its design started.

Just because something is widely used doesn't mean it's not harmful or well designed. Adoption of technology is often not driven by what is the best technological solution, especially wrt. web technology.


You may have heard the aphorism, "All problems in computer science can be solved by another level of indirection." This, or some variation of it, is known as the fundamental theorem of software engineering, variously attributed to Andrew Koenig, Butler Lampson, and David J. Wheeler.

With oauth2, literally any sort of authorization (or, in theory, authentication) is possible, but first you have to ask some endpoint for the details. In the case of oauth2, the core concept for authorization is "scope", but nothing is prescribed in scopes. They can literally be anything the auth provider describes. In theory, it's not supposed to matter – a consumer is just supposed to be able to pass around scopes from providers and let them determine if access is allowed. In practice, there's no practical way to reason about scope+resource permissions.
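To illustrate the "scopes can be anything" point, here's a small sketch (the client_id is hypothetical; the endpoint URLs and scope strings are the providers' documented ones, to the best of my knowledge) showing how two providers express read-only-ish access in completely different vocabularies, so a generic consumer can't reason about them:

```python
from urllib.parse import urlencode

# GitHub uses short scope tokens; Google uses full URLs as scopes.
providers = {
    "github": {
        "authorize": "https://github.com/login/oauth/authorize",
        "scope": "read:user repo",
    },
    "google": {
        "authorize": "https://accounts.google.com/o/oauth2/v2/auth",
        "scope": "https://www.googleapis.com/auth/drive.readonly",
    },
}

def auth_url(provider: str, client_id: str) -> str:
    # Build the authorization-code request URL for a given provider.
    p = providers[provider]
    return p["authorize"] + "?" + urlencode(
        {"client_id": client_id, "scope": p["scope"], "response_type": "code"}
    )

print(auth_url("github", "abc123"))
print(auth_url("google", "abc123"))
```

The flow is identical; the meaning of "scope" is entirely provider-defined, which is exactly why every provider ends up with its own client library.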

Oh, and despite the name, oauth2 is not about authentication, it's only about authorization. The OIDC spec, built on oauth2, provides authentication services, but again, much is left open to the provider. Every auth provider does what it wants.

It's sometimes said that oauth2 is great for consultants, because any organization wanting to deal with it must hire (or contract with) specialist who can sort out the ill-defined problem space.


> despite the name, oauth2 is not about authentication, it's only about authorization

OAuth is short for Authorization in the first place.

https://datatracker.ietf.org/doc/html/rfc6749


What a brilliant and unambiguous abbreviation!


Because this is my specialty, I long ago learned to specify either authn or authz. The OAuth spec should have been the OAuthz spec.


Agree, or could be OAF just like the way GNAP named itself.

Btw, I'm relatively new to this field. Didn't terms like authn/authz exist at the time they named OAuth? Or was there any other reason not to use them?


It's very widely used. Most SSO providers are using it, including the biggest ones.


Although personally I don't think it's a great spec. It's a good enough spec (certainly better than SAML, shudder) - but "works acceptably" or "popular" is not the same as "good".


After extensive experience with SAML and other specs, I don't think it's better than SAML in a fundamental way. It's certainly better in that it doesn't require a mind-numbingly verbose blob of XML, but strip away the XML and all that verbosity, and you basically end up with... Kerberos.


A standard flexible enough where you can do literally anything is usually a bad standard. The point of standards is to write up some small-ish base that everyone can agree on so that people can talk to each other. A standard containing everything where each implementation implements a different incompatible subset, is a failure.


Also, too much flexibility ends up being a security nightmare.

Building so much flexibility into protocols seems like a 90s holdover.

We are realizing that the more moving parts you have, the more edge cases you have, and the more attack surface area.


I guess we'll find out if they have learned anything since the XML Signature specification. That was an adventure, trying to find a subset that actually did what it said it did.


XMLSignature is one of the worst security standards I have ever read.

Do we sign the bytes of the document? No, we canonicalize it first. How do we canonicalize? There are multiple ways. Do all documents with the same canonicalization have the same DOM? No. Which part of the document do we sign? Up to you.

It's a wonder there aren't more major SAML breaches.
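The canonicalization point is easy to demonstrate even with Python's stdlib C14N support - two byte-for-byte different serializations of the "same" element canonicalize identically, and the canonical form (not the bytes you received) is what gets signed:

```python
import xml.etree.ElementTree as ET

# Same element, different attribute order and a self-closing tag:
doc_a = '<assertion b="1" a="2"/>'
doc_b = '<assertion  a="2" b="1" ></assertion>'

# C14N sorts attributes and normalizes the serialization, so the
# canonical forms match even though the raw strings differ.
canon_a = ET.canonicalize(xml_data=doc_a)
canon_b = ET.canonicalize(xml_data=doc_b)

assert doc_a != doc_b
assert canon_a == canon_b
print(canon_a)
```

(`xml.etree.ElementTree.canonicalize` needs Python 3.8+.) That gap between "bytes on the wire" and "bytes that were signed" is where a lot of XML signature attacks live.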


The biggest problem is that getElementById doesn’t fucking work.

And then if you sign sibling documents, you essentially have most of the same problems you have with ensuring a zip file doesn’t have a malicious payload, because file canonicalization is fraught with issues.

It took me a couple pages of don’ts to nail it down, and I missed a big one that I didn’t see until pretty late in the project.


3 sentences on why USB Type-C/USB3/USB4/Thunderbolt are a failure.


Those at least have engineering & financial constraints driving them. Like, you can't mandate that everything USB-C provides 100W of power delivery; it's a combination of just not possible and just not economical (eg, your phone could then never be a USB PD source, which is a handy feature to have today for charging headphones & the like)

Also, those specs all have a degree of interoperability, which is a pretty significant difference. Like many thunderbolt 3 docks can actually also just run when connected to usb-c w/ dp-alt mode, although they lose out on the thunderbolt extras of course. Something that seems to be missing here.


And despite all the mess they are selling well.


You don't buy USB, you buy a device that has USB. If you do buy USB (e.g. for extension or multiple ports) it's also because of your devices, not because you care about USB specifically.


Sounds like you're describing ActivityPub


I've been following DID for a while and I really don't think its the right approach. The voices of concern from Mozilla and Google are spot on: the DID specs expect everyone to coordinate on finding the right structure for different types of data but the real world is messy and no "correct" structure exists.

DID in my opinion is unlikely to succeed. Real builders don't use it, because it is cumbersome and requires agreeing with (a step up from collaborating with) other opinionated developers who have different specific use cases in mind.


> but the real world is messy and no "correct" structure exists.

Then why would we expect Mozilla or Google or anyone else on that committee to ever determine one? Or to object to a free-for-all naming scheme if the problem is inherently ad-hoc?

Whether or not this standard succeeds, one can see the very real threat of standard capture here. Which has happened before, on numerous occasions, when a canonical implementation arrives that just so happens to prioritize their interests.


> DID in my opinion is unlikely to succeed.

Probably a few big tech companies will form a consortium and use their weight for setting a de-facto standard.


Or they won't use it and will be confined to a long tail of use cases?

I mean, not all of the Internet is accessed through a browser but if Chrome and Firefox don't support DIDs, even if Safari would, who would use DIDs in a context exposed to browsers?


Apple objected too I think.


Putting the centralized in decentralized.


Probably so. Centralized isn't always a bad word as you can see from the tire fire that is crypto.


Most crypto is fairly centralised, ironically.


Maybe usage will converge on a few dominant methods (be it did:key, did:web or some other one), based on successful applications. This is pretty similar to URIs, which were defined very openly, and where for instance http(s):// took over gopher:// and ftp://


> This is pretty similar to URIs, which were defined very openly, and where for instance http(s):// took over gopher:// and ftp://

URLs were defined way after all those things, and were predominantly created by the http people. I don't think it's similar at all, and regardless, compared to the actual http (or gopher or ftp) protocol, the URL syntax is the least interesting part.


A better example perhaps is `www` or `mail` or `news`. Or magnet links. Or JWT on top of JSON.

In the era of The Information Superhighway, the World Wide Web, and spinning Netscape comets, people were quite ok standardizing on `www` as a prefix that used dns to route your web traffic to the web server. Meanwhile, AOL had a concrete use case of "keywords" that monopolized tv ad time for half a decade and went nowhere.

The web is full of standards that exist on top of other standards. This standard allows people to name things. People will figure out how to make interesting use cases with names without a central authority finding pre-existing use cases.


What exactly are you saying? That DID is as useful as an optional naming convention for server names that some people liked to use in the 90s?

If that is the comparable case, what is the value add of w3c standardizing this?

Personally this standard sounds closer to SOAP/WSDL/etc to me.


As a user, I am very happy that W3C has overruled the objections. As a developer, it may be a bit of a PITA, albeit a necessary one.

For Google, it makes sense for them to request at least some "standard" methods. If the number of DID methods is sufficiently large, Google won't be able to use their network effect to dominate any of them. Surprise, that's the aim of the spec.

For Mozilla, it makes sense to support a small set of DIDs, their resources are not infinite.

As a user, I want to use the DID method that works for ME. For example, https://en.wikipedia.org/wiki/BankID is used universally in Scandinavia and I would not see why anyone would use DID for identifiers of "real world" things if a government-accepted mechanism cannot be used (eg "did:bankid:*") for signing those identifiers etc. Because I already use BankID for all important authn things in my daily life. In Estonia, ID cards have been used for legally binding signatures since forever, and I can totally see how they might want to use that to sign their DIDs: https://en.wikipedia.org/wiki/Digital_signature_in_Estonia

Regarding Web 3.0 garbage in the registry: just ignore it, nobody is going to use it seriously. Those entries are just marketing by those projects. If it was up to me, I would split the registry into two sections: registries with significant stakeholder backing (BankID and the likes) and everyone else (so that you can ignore them).


> "As a user, I am very happy that W3C has overruled the objections. As a developer, it may a bit of a PITA, albeit a necessary one."

It's not immediately clear what DIDs are, what problems they're solving (and what value they're providing to the ecosystem), why they're better than other options in the same space, and how they'll function and scale years into the future. That's a legitimate cause for skepticism.

That the objections brought on by the two largest browser vendors are dismissed entirely, without further addressing the concerns stated, is unsettling.


Is that what that rambling was about after the two statements from Google/Moz?

I thought it was just someone crying about how the big browsers just didn't understand their brilliance.

Overruling them just makes you look stupid. They won't implement it. W3C pulls this bullshit all the time.

Fighting for relevance.


I can't find BankID in the list of methods, I suppose you are saying that it could be?

Some other interesting entries in this same vein, that are already in the DID registry, are Mastercard (https://idservice.com) and SecureKey (https://securekey.com).


What is the Web 3.0 garbage? I thought DID needed some sort of blockchain like Bitcoin.


This is discussed in the intro of the spec: https://www.w3.org/TR/did-core/#introduction. My understanding is that the DID method can use a "centralized" identity provider like the Mastercard Identity Service (https://github.com/Mastercard/did-methods/blob/master/id.md), which is presumably backed by some boring database, a decentralized-but-not-blockchain one like IPID which uses IPFS to store and lookup identities (https://did-ipid.github.io/ipid-did-method), or blockchain-specific ones like ENS.

> [Verifiable data registries] include distributed ledgers, decentralized file systems, databases of any kind, peer-to-peer networks, and other forms of trusted data storage.
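All three flavors hang off the same URI shape; a toy parser (not any real DID library, just an illustration of the syntax) shows how little the core spec actually pins down beyond "did", a method name, and a method-specific id:

```python
# Toy illustration of DID URI syntax: did:<method>:<method-specific-id>.
# Everything past the method name is opaque to the core spec; the method
# decides what it means and how to resolve it.

def parse_did(did: str) -> tuple[str, str]:
    """Split a DID URI into its method name and method-specific id."""
    scheme, method, unique_id = did.split(":", 2)
    if scheme != "did":
        raise ValueError(f"not a DID URI: {did}")
    return method, unique_id
```

Resolution (fetching the DID document) is then entirely the method's business: an HTTPS fetch for did:web, an IPFS lookup for did:ipid, a chain scan for did:btcr, and so on.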


The word "blockchain" is mentioned only once in the spec, and only in one of the 12 use cases.

The garbage part is that 9 of 10 (give or take) methods registered on https://www.w3.org/TR/did-spec-registries/#did-methods are from crypto-hyper-cosmos-nonsense.

The reason is that serious organizations like MasterCard or BankID will only begin considering registering a method once the spec is a W3C Recommendation. At this stage, only insignificantly small or heavily invested parties have registered a method on a draft standard (which is what a "proposed recommendation" roughly means).

Edit: nope, I was wrong, Mastercard has one https://github.com/Mastercard/did-methods/blob/master/id.md


The only decentralized one is the one that uses Bitcoin. No Bitcoin no real DID.

Decentralization matters.


Please don’t hijack the term decentralization to solely mean blockchain &co. What you are describing is a single system that uses technical means to decentralize (quite cleverly so).

The classical D. is achieved by not having a single system like Blockchain but instead having as many as you wish - anyone can install nginx on their server. This results in a multitude of systems achieved through policy means. The spec means decentralization in this way - Sweden can rely on some DID methods, UK on other, and you can use Bitcoin :)

You may argue that the latter is flawed but please don’t make it sound like the former is the only way.


All methods that don't rely on Bitcoin are not really decentralized.

All other systems can get decisions reversed and rules changed without much trouble. Try doing that in Bitcoin :)


Isn't BankID only used in Sweden? At least I have not heard that it is used in Norway, Finland or Denmark.


Sweden and Norway both use BankID and I believe they are compatible. Finland and Denmark have very similar systems.

Edit: Norway and Sweden both use a system called "BankID" that does the same thing in a compatible way, but they seem to be developed and managed by two separate companies.


Having lived in both Sweden and Norway, I can say for sure that it's used in both places. I'm not sure though if I could've used my Swedish bankid to log into a Norwegian bank.


BankID is definitely a thing in Norway. Besides being used by banks here, I can use it to submit my taxes, view my prescriptions and lots of other things.


Finland has something similar: i can use my bank account to log in to places to see my medical info for example.


So if Google, Apple, and Mozilla all opposed this, what are the chances it ever actually becomes useful?

Just because something was given the stamp of approval by the W3C doesn’t mean they actually have to implement it.


That use case chart is kind of interesting.

Like, it's not a flat URI scheme leaving structure up to controllers. But it's also not a fully defined ontology.

Pick a use case description in section three. Any one. Each one could provoke an entire domain-specific organization into heated arguments about subrequirements and subsubrequirements.

Even if this recommendation doesn't make it, it's neat.

Any way you turn, you walk into this system, right?


IETF recommendations have to have a working implementation. A paper standard with no working implementation is just onanism, whoever promulgates it.

I think the W3C got lost in their XML dreams, and got huffed by WHATWG, whose standards I actively dislike.


I may be suffering from a deficiency of reading comprehension. Can someone please explain to me in plain terms what a DID is and what it's for? It's a "globally unique persistent identifier that does not require a centralized registration authority"[1] - great, an identifier for what exactly? Is it just supposed to be an identifier for anything at all? Local and remote resources? People? Pokemon cards?

[1] https://www.w3.org/TR/did-core/#terminology


It's an attempt to put a "standards-compliant" veneer of legitimacy on "Web 3.0" blockchain nonsense. The list of supported methods at https://www.w3.org/TR/did-spec-registries/#did-methods should make clear who this is really for.


I think this is an unfair generalization. The core of the DID spec was developed by the folks at the Internet Identity Workshop, who have been discussing identity since long before blockchain even existed.

Some of their members (e.g. Sovrin, Evernym) only very reluctantly included blockchain as a piece of the solution, and only then when they saw how certain aspects of blockchain were desirable for credentials and revocations without a central, controlling cabal.

Using Sovrin as an example, they did not create a token sale, NFT, or defi anything, which I think speaks volumes about their motives. Instead, they created a non-profit that was funded through traditional means.

IMO some of these founding members are legitimately trying to solve the very difficult problem of digital identity (including privacy, decentralization, and compatibility).


Here's a real world example of an identifier for a meme I just made using one of the linked sites in that table: https://didme.me/did:meme:1zgs0ue8me4gt3ls5uvq87n7v7ykhrvr50...

On the one hand, it seems like this is going to be (and/or is currently being) used by NFT peddlers, but on the other hand it's not like we'll be seeing fewer NFTs and cryptocurrency garbage if this spec dies.

However, I also can't think of any interesting use-cases for this that don't involve cryptocurrency/NFT garbage. The example use-cases seem to range from consumer rights and privacy nightmares to just uninteresting ideas that can be implemented without DID.

The Transferable Skills Credentials[1] case for example seems to be trying to make a case for adding an NFT/minting schemes as a middle-man for certification programs. What value could that possibly add? Certification programs are centralized by design, so the only "gain" would be that the certification authority doesn't need to maintain a database of the certifications it gives out...or something like that?

Also the "Cross-platform User-driven Sharing"[2] one reads like some sovereign citizen wet dream (in semi-broken English). Franklyn is a military war veteran with two young daughters and he is very concerned about protecting their privacy online, so he writes HIS OWN terms of service that companies need to agree with if they want to do business with him. After a long probationary period, he decides to share more information with services (like a shopping list!), but he keeps his finger on the trigger at all times (the delete my DID button) in case the company gets any ideas.

I didn't realize the W3C had become such a clown show.

1: https://w3c.github.io/did-use-cases/#vcEcosystem

2: https://w3c.github.io/did-use-cases/#crossPlatform


Baidu's did is literally "ccp:" ...


See https://w3c.github.io/did-use-cases/#onlineShopper

It's blockchain bullshit that you can ignore.


OMG, their example is pretty much an NFT.


Decentralization is a legitimate and important topic that is confused by the hype-driven blockchain bandwagon: a flock of a thousand red herrings. When searching for actual foundations to build a DApp on, you typically find libraries published by Crypto-startup-of-the-Week LTD. Why should I trust them?

Regardless of the legitimate issues raised by objectors, I am happy to have some W3C standard to build on. Methods may be underdefined, but as consensus is reached I could pivot to it. And even if no consensus is ever reached, I can at least try and reach consensus with the communities I care about.

Coupled with ActivityPub [1], this feels like a much safer foundation for building a DApp than anything else I have been able to find. The only thing that confuses me is the relationship between DID [2] and the Verifiable Credentials [3]. Can anybody explain how they relate, and how they should be used together (or not)?

[1] https://www.w3.org/TR/activitypub/

[2] https://w3c.github.io/did-core/

[3] https://www.w3.org/TR/vc-data-model/


The notion of decentralized identity has been an enchanting vision since Christopher Allen first articulated it in 2016. Since then, the DID spec has been around for years in draft form, and there are at least a dozen vendors and/or projects producing DID-compatible or DID-relevant technology.

Of course, these different packages are not (yet) compatible, but that's not the problem. The problem is that, after a good 4 or 5 years, it's hard to find a single project that uses DID protocols at scale in a worthwhile and effective manner.

There are tons of pilot projects and PoCs. A few go into production at limited scale, languish for a while, and then do a slow fade.

I agree with other commenters that DID does not seem to address real-world pain points. I also think that the spec appears murky, abstract, overly complex and hard for developers to work with. I have tried to use DID in projects a couple of times, and found myself sidelining or pushing it into a corner of the system, because it did not seem to serve a useful purpose.

There's a recent alternative to DID, which is narrower in scope and more pragmatic. That is "login with Metamask" or "sign-in with Ethereum" (or something similar in the case of other blockchain platforms).


The SIWE movement probably will win out, imo. Brave already supports it natively, and services like ENS are growing in popularity and use cases. And MetaMask itself is already at 50 million installs. Most other smart contract blockchains are Ethereum compatible, so you can use the same account (via MetaMask or wallet of your choice) across blockchains pretty easily. It's also dead simple to implement as a web developer, and it's a pleasure to use as a user.
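For context, the message a SIWE-supporting site asks the wallet to sign is plain text, roughly following the EIP-4361 template (all field values below are made up for illustration):

```
example.com wants you to sign in with your Ethereum account:
0x1234567890abcdef1234567890abcdef12345678

I accept the example.com Terms of Service.

URI: https://example.com/login
Version: 1
Chain ID: 1
Nonce: 32891756
Issued At: 2022-06-30T16:25:24Z
```

The server verifies the signature over this message against the stated address, so "identity" here is simply control of an Ethereum keypair.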


> Of course, these different packages are not (yet) compatible

You say this is not a problem, and immediately say this:

> it's hard to find a single project that uses DID protocols at scale in a worthwhile and effective manner.

Of course it's hard. For a protocol to operate at scale, its implementations have to be compatible.


Wow, the person who wrote that text has some talent for bureaucratese. It's comparatively rare to see that in English, since the language tends toward clear verbs and the active voice. But here I had to re-read a bunch of sentences to figure out what refers to what, while wondering if I needed a coffee break. I would say the author probably moonlights as a writer for the NYT or something, if the dryness of the document weren't so far beyond what is still considered fit for consumption.


> It is not questioned that any single DID method might fail to achieve one or more of these properties. The consideration here is whether the proposed DID identifier syntax and associated mechanisms has been sufficiently shown to have defined an extensible class of identifiers that has these properties.

This paragraph gave me temporary brain fog. I think it's saying that so long as the proposed syntax is flexible enough that one or more DID methods can satisfy... and then I'm lost again.


Maybe like saying any HTML element alone can’t build you say… Slack, but with a bunch of them you can.

Tldr: it is lego?


The objections by Mozilla & Google make sense if you assume the methods should be defined prior to moving forward, but to me, the core as defined is more than enough to move forward, and the next step is to define the methods.

Worst case, the core is flawed and the specs are revised to align with what has been learned fleshing out the methods, or it is abandoned for whatever reason. Mozilla & Google saying they object based on everything not being defined sounds like the opposite of progress to me.

The core already lays out specs for methods and clearly they’re good enough for other workgroups to already be moving forward refining methods for specific use cases. Here’s an example:

https://identity.foundation/peer-did-method-spec/

If there’s a significant issue with moving forward, I am not understanding it.


There is validity in the W3C position as well as the objections raised by the various parties. However, the respective positions of the parties are on different axes.

It is helpful to look at the DID-core as the WHAT with the methods of the DID to specify HOW.

The method set is left open by W3C (a method is just an item in the method registry), and rightfully so. The objectors want it to be a defined, possibly closed set before it moves to Recommendation track.

To see why this makes sense, suppose I am a service provider and I need identity services to authenticate and authorize. I define the data elements that constitute identity (e.g. name+phone, email address, national ID) in my application. DID then allows the server and the client to agree on identity by exchanging the DID document and verifying the claims in it using the methods named in the document.

If we need flexibility in the set of data elements that constitute identity, then the methods MUST be kept open. The method is only an interface contract that specifies how to validate a specific DID.

Suppose there is a method that relies on a national ID; then any future service that also supports that method should be able to interoperate. Whether a service implements that interface or not is a choice that the service can make.

By decoupling the WHAT from the HOW, I could have a fully decentralized identity system (perhaps with services provided by the OS or apps) and sharing only zero-knowledge-proofs with the counterparties without sharing underlying information (or only information necessary for the transaction).

I think this makes sense and is a step in the right direction.
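One way to make the WHAT/HOW split concrete: some method families are self-certifying, deriving the identifier from the controller's key so that checking the DID-to-key binding needs no registry at all. A simplified sketch (stdlib only; this is NOT the real did:key encoding, which uses multibase/multicodec, and "did:demo" is a made-up method name):

```python
import hashlib

def make_did(pubkey: bytes) -> str:
    # The identifier is derived from the key itself, so any party can
    # check that the DID and the key belong together by recomputation.
    return "did:demo:" + hashlib.sha256(pubkey).hexdigest()[:32]

def key_matches_did(did: str, pubkey: bytes) -> bool:
    """Verify the self-certifying binding between a DID and a public key."""
    return did == make_did(pubkey)
```

Other methods make the HOW a lookup instead (a bank's database, a ledger, IPFS), but the interface contract toward the relying service stays the same.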


I don’t understand the point of having a specification when 2 out of the 3 major browsers have objected. Who will implement it? Why bother with this?


The W3C doesn't produce only browser-related specifications. Service providers will implement it; e.g., instead of login with a user/password they will support some of the DID methods.

I would say that the process worked as intended. There was disagreement among members, things got discussed (see https://www.w3.org/2022/03/did-fo-report.html for details) and a decision was made according to the W3C process. All good!


Having read through the abstract and the first example of the specification I'm highly skeptical that the W3C process worked to produce something that is actually useful, in the sense that it ends up widely used.

DID kind of strikes me as ASN.1 "with crypto/distributed ledger" stuff tacked on top of it.


Because expecting standardization of DID methods at this point is unreasonable. A premature attempt to standardize DID methods would be both futile and likely harmful.

It's futile because the future universe of DID methods can't be anticipated now, so whatever set of DID methods W3C promulgated would include poor choices and omit good ones. It's harmful because whatever future methods might emerge will be relegated to a second class for having failed to 'get in' on the initial standard.

Better to avoid prematurely enshrining some arbitrary set of methods and allow a consensus to emerge via practice and exposure. At some point, as the inevitable shake-out of bad ideas and nefarious actors occurs, DID methods can be standardized in a useful way. Yes, the lack of a simple list of SHALLs will impede the immediate adoption of DID for all conceivable purposes, but better that struggle than the next to impossible task of loosening the grip of the parties that benefit from having a standards document to wave around.

It's almost like they've learned something over the last quarter century.


If you have literally no idea what a good or bad answer even looks like, it's not time to standardize it at this sort of formal level yet.

Beyond that, some of this is just odd: "It's harmful because whatever future methods might emerge will be relegated to a second class for having failed to 'get in' on the initial standard."

Good. We can deal with that in v2.0.

Trying to design this kind of thing to anticipate every possible future good thing that might come along is a folly. If you can't standardize them yet because you don't even know, then I go back to the first sentence I wrote :)

Beyond that, your optimism in what will happen (shakeout of bad ideas and then harmonious replacement with standards) seems ... mostly misplaced.

Assume it takes off - what will instead happen is that you will be stuck supporting tons of non-standard methods developed between now and when anyone standardizes them forever. It will likely hamstring your future development as well. I cite as evidence - literally the history of everything :)

There was 100% no reason to standardize this now other than wanting to feel good about themselves. It isn't needed to push forward. It should have waited until someone had any idea what good looks like.


> Good. We can deal with that in v2.0.

There are many, many examples of v2.0 standards that, although better in every respect, never actually supplant v1. v1 has immense inertia.

It's not all as willy nilly as it might seem. W3C standardized the parts they could without painting themselves into a corner and wisely factored out the part they couldn't. There is an official W3C hosted registry of methods, each with a specification. They vary hugely in purpose, quality and applicability, all of which will eventually determine their viability. There are at least 3 based on the Bitcoin ('btcr', 'ion' and 'stack') block chain, for example. Which, if any, of these should W3C prefer?

The fact of this Cambrian explosion supports the W3C argument. Not selecting some arbitrary subset of these as SHALLs in the v1 standard affords room to discover the benefits and pitfalls given time and iteration.

I can understand the viewpoint of Mozilla, Google, et al. I'm sure they'd love to throw a room full of coders at a simple specification and knock out a planetary scale identity system, collect a few billion names and sell all that info to the highest bidders before the next quarterly report. It seems that W3C has other ideas about how this should work.


Yes, v1 often has immense inertia. This is true, but it will actually not make you any better at predicting the future, and as I said elsewhere, we are extremely bad at that already.

Step into the shoes of a developer 10 years ago, and look at the world now. Look how much has changed in ways that would affect this stuff.

Trying to have successfully anticipated it simply won't work. Instead, you build standards to be easy to transition from/to.

That is what enables you to upgrade standards and get reasonable adoption of new versions.

While painful, that is the only path that ever works.


> Trying to design this kind of thing to anticipate every possible future good thing that might come along is a folly. If you can't standardize them yet because you don't even know

Whatever you design for the web stays on the web for decades. So yes, you have to look into the future and anticipate things. It's not "move fast and break things"


I'm not at all suggesting move fast and break things.

Yes, things stay for decades. So don't standardize something when you have literally no idea what good looks like.

There is a limit to anticipation. Trying to guess what the world will look like 10 years from now is dumb. Nobody does it well.

So just because v1 will have inertia doesn't mean you will actually predict the future more successfully. Instead, you have to figure out how you deal with that inertia.

In practice, the reason things have tremendous inertia is that they are often not built to be migrated from or to in a relatively seamless way.

Those standards that allow for easy migration tend to see much faster adoption of new versions.

Those that don't, the old standards linger even longer.


> It's futile because the future universe of DID methods can't be anticipated now, so whatever wrong set of DID methods W3 promulgated would include both poor choices and omit good choices

Or maybe it's just too soon to try to carve a "standard" into the w3c process stone. A half-baked protocol is worse than no protocol at all.


> A half-baked protocol is worse than no protocol at all.

I think the literal opposite is true, no?


If there is no protocol nobody expects interoperability.

If there is a half-baked protocol everyone expects interoperability but it never works as it should.


> never works as it should.

Not only that, by exploiting ambiguities and gaps, protocols can be made to work as they shouldn't. At least that keeps the security consultants in business.


> I don’t understand the point of having a specification when 2 out of the 3 major browsers have objected.

That's how standard bodies are supposed to work: by finding consensus. If there are many objections, the spec should be adjusted until people agree.

However, if you have enough clout, you can try and still ram it through. See hardware APIs. 2 out of 3 major browsers have objected.


Caveat: I know several people involved in the DID standards development and consider them friends.

Decentralized Identifiers (DIDs) are important because a decentralized global network with fully decentralized versions of things like Facebook, with users controlling their own data, may not be possible without them.

There is a lot in the DID specs. They are perhaps best viewed as an abstraction layer for decentralized authentication, authorization, rights management, and messaging. There are many ways of implementing these standards, and this is accomplished by allowing many different DID methods. Some methods, like 'peer', do not use public sources of truth like blockchains. Many of the various methods use some given blockchain as a public source of truth. Some use a distributed file system like IPFS. The abstraction in the DID specs should allow all of these methods to interoperate (e.g., have an IPFS DID document with a btcr controller, btcr being one of several DID methods using the Bitcoin blockchain).

DID methods are not a wild west however, despite the picture painted by some. There are registries for recognized DID methods that impose controls on DID method specs before they can be listed in the registry [1].

Also, I think any DID support will likely require a plugin mechanism. The app implements the DID abstraction layer and DID common functionality, then offloads DID method specific functionality to an available plugin for that method or signals an error if no plugin for that method is available. It is ironic to me that Mozilla raised the objection here, because in my mind the plugin system that made people aware of plugins is the Firefox plugin system.

[1] https://www.w3.org/TR/did-spec-registries/
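The plugin idea sketched above might look something like this (class and method names are illustrative, not from any real library): the app owns the abstraction layer and delegates method-specific resolution to whatever plugins are installed, signaling an error otherwise.

```python
class DIDResolver:
    """App-side abstraction layer; method-specific work lives in plugins."""

    def __init__(self):
        self._plugins = {}

    def register(self, method: str, plugin):
        """Install a plugin that knows how to resolve one DID method."""
        self._plugins[method] = plugin

    def resolve(self, did: str) -> dict:
        _, method, unique_id = did.split(":", 2)
        plugin = self._plugins.get(method)
        if plugin is None:
            # No plugin available for this method: signal an error,
            # rather than guessing.
            raise NotImplementedError(f"no plugin for did:{method}")
        return plugin(unique_id)

resolver = DIDResolver()
resolver.register("peer", lambda uid: {"id": f"did:peer:{uid}"})
```

A browser or wallet could ship a few built-in plugins and let users add others, much like protocol handlers today.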


My take is that the Director of the DID working group is tired and wants this to be over. They want to enjoy a few months break and some future Working Group can deal with the problems.

We've all been there. Coding is complete, but QA comes back with some fundamental issue that might require us to redo the design of the program. We'd rather not.

Winners ship.


My TLDR: DID is already a registered URI scheme [1]. The method on a DID [2] is more or less a URI sub-scheme / protocol. It's for the blockchain / web3 crowd, for something like the definition of an NFT. Most of their startups will shut down in a year or two anyways. Won't really matter. No one is going to manually type these in, or understand them by reading them. I'd agree that there really isn't a point in declaring it a standard, as the DID scheme is already registered.

[1] https://www.iana.org/assignments/uri-schemes/prov/did

[2] https://www.w3.org/TR/did-core/#methods


An "Explain Like I'm Five" of what DID is (for those who don't know, like me):

"Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identity. A DID refers to any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) as determined by the controller of the DID. In contrast to typical, federated identifiers, DIDs have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities. Specifically, while other parties might be used to help enable the discovery of information related to a DID, the design enables the controller of a DID to prove control over it without requiring permission from any other party."

https://www.w3.org/TR/did-core/
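Concretely, resolving a DID yields a small JSON(-LD) document; the core spec's examples look roughly like this (the identifier and key material below are illustrative placeholders, not real values):

```json
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:example:123456789abcdefghi",
  "verificationMethod": [{
    "id": "did:example:123456789abcdefghi#key-1",
    "type": "Ed25519VerificationKey2020",
    "controller": "did:example:123456789abcdefghi",
    "publicKeyMultibase": "zExampleMultibaseEncodedKey"
  }],
  "authentication": ["did:example:123456789abcdefghi#key-1"]
}
```

The `verificationMethod` entries are what let a relying party check that whoever presents the DID actually controls it.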


So, it's like business cards?

They represent your identity, have a relatively standardized format, but anyone can issue them, decide what to put on them and where the contact methods point, and you can have as many different versions as you wish.


How does one prove they own the DID?


That's up to the 'method' part, which is what all the fuss is about.


Ok, so if I show my DID to someone, they're going to reach out to the method and ask for verification. How does the method place know that the person using the DID is the right person? I assume it's some kind of federation? They send me over to the method place, I authenticate, and get kicked back?


Every method currently has different answers for every single one of those questions, directly leading to Google and Mozilla's complaints here that there's already a "registry" of 50+ methods, no actual standardization among them, and it's all punted to "ask the method".
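That said, the usual shape, whatever the method, is a challenge-response against key material from the resolved DID document. A toy sketch (stdlib HMAC with a demo key stands in for the asymmetric signature a real method would check against the document's public key):

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    # Verifier generates a fresh nonce to prevent replay.
    return secrets.token_bytes(16)

def holder_respond(challenge: bytes, key: bytes) -> bytes:
    # The DID controller proves possession of the key by "signing"
    # the challenge (HMAC here; a real method would use e.g. Ed25519).
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

So there is no "kick back" federation dance required in principle: the relying party verifies the proof locally against the resolved document. But as noted, each registered method specifies its own variant of this.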


W3C has been on the sidelines for a while now. WHATWG--mostly Google people--calls the shots.


Seems it's going to flop as hard as the Semantic Web and RDF, if not harder, given Google and Mozilla are not going to implement it, and Apple never implements anything anyway.


Hopefully this heaping festering pile of droppings called Decentralized Identifiers will go away.


Great post


Sure W3C, let's just add 30 new APIs to track users mouse activity, proximity, fonts, etc.


From Mozilla's objection: "The lack of restrictions on the registry are allowing methods ... which are actively globally harmful to sustainability."

That seems to me like Mozilla trying to push their social justice goals down into tech standards now.


That's a rather flamebaity opinion, but you have a point.

It's like Mozilla are saying "We have centrally planned the number of carbon credits that may be spent by each technology, and determined that blockchains are a forbidden technology. We will therefore undermine any efforts of other people to implement standards where using blockchains is even a possibility."



