I don't really have a good answer for you. Invite-based services are a good way to scale out without getting a huge spike all at once (even Gmail was introduced using invites), but I don't know how long Keybase intends to stick with the invite model or when they'll expand it.
As for services, each supported service presumably needs custom code, and also needs a publicly-auditable way to post a proof that cannot be posted by anyone other than the owner of the account (e.g. for Twitter you need to actually tweet something, for GitHub you need to post a gist, etc). I don't have a LinkedIn account or use GitLab, so I don't know if those services have an effective way to handle such a proof.
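To illustrate why each service needs custom code: the verifier must fetch a statement from a location only the account owner controls, and each service exposes that differently. A loose sketch, where `fetch_tweet` and `fetch_gist` are hypothetical stand-ins for the real service APIs:

```python
# Hypothetical sketch: one fetcher per service, then a generic check that
# the fetched post vouches for the expected key fingerprint. The fetchers
# here return canned text; real ones would call the Twitter/GitHub APIs.

def fetch_tweet(handle: str) -> str:          # stand-in for a Twitter API call
    return "Verifying myself: my key fingerprint is AAAA1111"

def fetch_gist(user: str) -> str:             # stand-in for a GitHub API call
    return "Verifying myself: my key fingerprint is AAAA1111"

FETCHERS = {"twitter": fetch_tweet, "github": fetch_gist}

def check_proof(service: str, account: str, fingerprint: str) -> bool:
    # Only the account owner could have posted at this location, so
    # finding the statement there binds the account to the fingerprint.
    text = FETCHERS[service](account)
    return f"my key fingerprint is {fingerprint}" in text

assert check_proof("twitter", "@alice", "AAAA1111")
assert not check_proof("github", "alice", "BBBB2222")
```

Supporting a new service then mostly means writing a new fetcher (and deciding where on that service a proof can live).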
This is awesome! I was at a bar with a group of security people / cryptographers during the Real World Crypto conference last week. We were discussing what we thought some of the most important security research papers from the last five years were. Everyone agreed that CONIKS (which is what Key Transparency is based on) will likely have a huge impact in the next five years or so. I'm excited to see Google finally open-sourcing their efforts in Key Transparency.
Though I guess most have just moved to walled garden encryption models instead of GPG - Moxie talks a bit about that here: https://moxie.org/blog/gpg-and-me/
Looking at the documentation in the repo, it sure looks to me like it's just a keyserver with a provable append-only model, so you can audit every change. My cursory glance did not turn up any way to prove that the keys for a given account were actually added by someone with the authority to do so (there was even a screenshot showing that users can see when new unrecognized keys are added - https://raw.githubusercontent.com/google/key-transparency/ma... - which suggests that it relies on the users themselves to notice something going wrong).
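For anyone unfamiliar with the "provable append-only" part: the idea is that each new entry commits to everything before it, so an auditor can replay the history and detect any retroactive edit. A minimal hash-chain sketch (my own illustrative names, not the project's actual structure, which uses Merkle trees):

```python
import hashlib

class AppendOnlyLog:
    def __init__(self):
        self.entries = []          # (data, head_at_that_point) pairs
        self.head = b"\x00" * 32   # hash chained over all entries

    def append(self, data: bytes) -> bytes:
        # The new head commits to the old head and the new entry, so any
        # retroactive edit changes every subsequent head.
        self.head = hashlib.sha256(self.head + data).digest()
        self.entries.append((data, self.head))
        return self.head

    def audit(self) -> bool:
        # An auditor replays the chain and checks every recorded head.
        h = b"\x00" * 32
        for data, recorded in self.entries:
            h = hashlib.sha256(h + data).digest()
            if h != recorded:
                return False
        return True

log = AppendOnlyLog()
log.append(b"alice: key1")
log.append(b"alice: key2 (rotation)")
assert log.audit()

# Tampering with history is detectable:
log.entries[0] = (b"alice: evil-key", log.entries[0][1])
assert not log.audit()
```

Note that this proves the history wasn't rewritten; it says nothing about whether the appended key was *authorized*, which is exactly the gap I'm pointing at above.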
The Design Overview does reference the fact that a user would have account credentials, and suggests that an attacker would have to compromise those credentials in order to change the keys on the user's account. So that's good. But that's still a single point of failure.
Compare this with keybase.io, where if an attacker compromises your account, they can't associate a new key with e.g. your Twitter account and thereby trick anyone who knows you via Twitter into using the new key (they'd have to remove your Twitter account from your profile in order to change the key). So keybase.io uses a variety of social networks as proof that your key really is owned by you, whereas Key Transparency seems to rely entirely on other people noticing, via the key transparency history, that the keys on the account all changed at some point (which doesn't necessarily even mean the account was compromised; it's possible the user did that deliberately because e.g. they lost control of their private key).
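The mechanism I mean can be sketched in a few lines (these are my own illustrative names, not Keybase's actual data model): each social proof commits to a specific key fingerprint, so a verifier who trusts the Twitter proof rejects a profile whose key changed underneath it.

```python
import hashlib

def fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()[:16]

def verify_profile(profile: dict) -> bool:
    # Every listed social proof must vouch for the *current* key; a stale
    # proof (pointing at an old fingerprint) fails verification.
    fpr = fingerprint(profile["pubkey"])
    return all(p["fpr"] == fpr for p in profile["proofs"])

alice_key = b"alice-real-key"
profile = {
    "pubkey": alice_key,
    "proofs": [{"service": "twitter", "fpr": fingerprint(alice_key)}],
}
assert verify_profile(profile)

# An attacker who swaps in a new key but can't rewrite the tweet is caught:
profile["pubkey"] = b"attacker-key"
assert not verify_profile(profile)
```

That's why the attacker has to *remove* the Twitter proof to rotate the key, which is itself a visible, suspicious action.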
Note: I must repeat that I gave this only a cursory glance, and I didn't pay attention to some of the potentially-interesting properties here such as privacy protection. It's very plausible that Key Transparency does have valuable properties that keybase.io doesn't provide.
Is this something more than a public key server?
Unlike a simple public key server, this provides privacy-protecting elements. The project utilizes Zero Knowledge Proofs (ZKPs) to limit your access to keys associated with users you already know.
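The gist of the privacy property: entries are stored under an opaque index derived from the identifier, so you can't enumerate the directory; you can only look up identifiers you already know. A rough sketch - Key Transparency actually derives the index with a VRF, and the keyed hash below is only a stand-in for that (all names here are illustrative):

```python
import hashlib, hmac

SERVER_SECRET = b"vrf-key-standin"   # illustrative stand-in for a VRF key

def index_for(user_id: str) -> bytes:
    # Opaque, deterministic index: same input -> same index, but the
    # index itself reveals nothing about the identifier.
    return hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256).digest()

directory = {index_for("alice@example.com"): b"alice-public-key"}

# Someone who already knows the address can look it up...
assert directory[index_for("alice@example.com")] == b"alice-public-key"
# ...but the stored index is not the address itself, so dumping the
# directory doesn't reveal who is in it.
assert index_for("alice@example.com") != b"alice@example.com"
```

(A real VRF additionally lets the server prove the index was derived correctly without revealing its secret; a plain keyed hash can't do that part.)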
How is this different than KeyBase?
It is similar. Some differences: the use of ZKPs; it is a generic key store (it could be used for PGP or E2E messaging); it is designed to scale to trillions of entries; anyone can run a log; and architecturally it should scale significantly better. Keybase is doing great work, though.
Keybase does not need a PGP setup. Keybase has moved on from that and uses NaCl to solve the multi-device problem. GPG is just one more node in your trust chain.
In Certificate Transparency, a CA is responsible for submitting its issued certificates to CT logs. In Key Transparency, anyone can register a key for an email address; they just have to arrive first. Is there any provision for preventing squatting?
One of the differences between Key Transparency and other solutions is that the roles of certifying and logging have been separated. In other words, being in the directory does not mean the identity has been verified. Verifying control of an email address is the role of the certifier; your requirements for the certifier are an application-specific decision.
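To make the separation concrete, here's a hedged sketch of the two roles (names are mine, and HMAC stands in for a real signature from the certifier): the log records whatever it is given, while a separate certifier attests that the submitter controls the email address.

```python
import hashlib, hmac

CERTIFIER_KEY = b"certifier-secret"   # illustrative

def certify(email: str, pubkey: bytes) -> bytes:
    # Issued only after the certifier verifies control of the address
    # (e.g. via a confirmation email) -- that check is out of band here.
    return hmac.new(CERTIFIER_KEY, email.encode() + pubkey,
                    hashlib.sha256).digest()

log = []  # the transparency log: it stores entries, certified or not

def log_append(email, pubkey, cert=None):
    log.append({"email": email, "pubkey": pubkey, "cert": cert})

def is_certified(entry) -> bool:
    # Relying parties, not the log, decide whether to demand a cert.
    if entry["cert"] is None:
        return False
    expected = certify(entry["email"], entry["pubkey"])
    return hmac.compare_digest(entry["cert"], expected)

log_append("alice@example.com", b"k1", certify("alice@example.com", b"k1"))
log_append("squatter@example.com", b"k2")   # in the log, but unverified

assert is_certified(log[0])
assert not is_certified(log[1])
```

So the squatter's entry can sit in the log, but an application that requires certification simply won't trust it.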
But isn't "certifying" the part that actually matters (making sure I have the right key for a persona / the right persona)? AFAICT, Key Transparency is focused on the logging aspect.
This is what I'm really not seeing in the blog post / GitHub repo; how does this actually establish trust in the received key?
(While it is true that the owner of the key can audit their account, that doesn't help a sender. Also, if watching people "verify" SSH & GPG keys has taught me anything, it's that even engineers who should know better are way too lazy.)
OK, I see: KT provides a visible log of all changes happening to a given key, but my initial question still stands: I can squat, say, eschmidt@google.com today and wait for someone to pay me a huge amount of money to give up that name?
I'm not 100% sure, but it sounds like that's outside the scope of key transparency, and Google is envisioning that the ["certification authority that the system [represents]"][1] would verify that you indeed own `eschmidt@google.com` before letting you register an account using that address.
Do I understand correctly that using Key Transparency I'm trusting Google to be the authority that verifies a key is indeed 'me'? Why would I want to do that? Why would anyone want to do that?
This description sure sounds like exactly what https://keybase.io provides.