That's not true. It also has old, deprecated crypto. It still works, but not all installations support the same things. I'm not well versed enough to restate what other cryptographers here and elsewhere have said about it, but there are lots of known problems.
That entirely works. Any sort of realistic crypto standard is going to have to support older systems right up to the point where any problems with those systems actually reduce security in some way. To do otherwise would be irresponsible to your users.
tl;dr: we seem to disagree on what users should want from age—confidentiality or also authentication—and that would lead not only to different design choices, but to drastically different UXs.
> ”Unfortunately, the age spec doesn’t document its threat model or the security goals it is intended to achieve ... Most importantly, the spec should define its security goals.”
I'm wondering if this feedback has been shared with age's author. To me at least it seems like a better place for this would be age's GitHub issues instead of the author's blog.
I have a lot of trouble understanding the issue in the comparison with JOSE. The problem with JOSE's header was that many libraries offered an API along the lines of "validate that this is a valid token using one of these keys", but the key was only one part of the unauthenticated header: an attacker could craft a token that used one of the whitelisted keys with an algorithm that key didn't belong to, and it would still verify. The solution was to make sure all the libraries exposed an API where the user whitelists allowed values of the entire header: "validate that this is a valid token using one of these headers (alg+key)".
I'm not sure what the corresponding version of the first issue would be in age, nor why the answer wouldn't be for age to check the entire header against a whitelist instead of considering just a subset of it (a rough sketch of that whitelist-the-whole-thing API shape is below).
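For concreteness, here is a minimal sketch of that fixed API shape, assuming the PyJWT library; the key and claims are purely illustrative:

```python
import jwt  # PyJWT

secret = b"demo-shared-secret"  # illustrative key, not a real secret

token = jwt.encode({"sub": "alice"}, secret, algorithm="HS256")

# Vulnerable pattern in older libraries: trust whatever "alg" the token's own
# header claims, so a key could end up used with an algorithm it was never
# meant for.
#
# Fixed pattern: the verifier pins the exact algorithm(s) allowed for this
# key, i.e. it whitelists the (alg, key) pair rather than the key alone.
claims = jwt.decode(token, secret, algorithms=["HS256"])
print(claims)
```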
Yes, but no such tool exists (yet) as far as I’m aware. You also need to tie the chunks together so that they cannot be rearranged, duplicated, or deleted.
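To make that concrete, here is a rough sketch of the idea, loosely modelled on the STREAM-style chunking age uses: each chunk's nonce encodes its position plus a last-chunk flag, so chunks can't be reordered, duplicated, or dropped without decryption failing. This uses Python's cryptography package, and the helper name and nonce layout are illustrative, not age's actual format.

```python
import struct
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def encrypt_chunks(key: bytes, chunks: list[bytes]) -> list[bytes]:
    aead = ChaCha20Poly1305(key)
    out = []
    for i, chunk in enumerate(chunks):
        last = i == len(chunks) - 1
        # 12-byte nonce: 11-byte big-endian chunk counter || 1-byte last-chunk flag
        nonce = struct.pack(">Q", i).rjust(11, b"\x00") + bytes([1 if last else 0])
        out.append(aead.encrypt(nonce, chunk, None))
    return out

key = ChaCha20Poly1305.generate_key()
ciphertexts = encrypt_chunks(key, [b"chunk one", b"chunk two", b"final chunk"])
```

Swapping, repeating, or dropping a ciphertext changes the nonce expected at that position, so the Poly1305 tag check fails when the receiver decrypts in order.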
You also need to be careful about what you think this tells you. For example, suppose you ran a competition where the winner is the first person to upload an encrypted+signed correct answer to a shared folder. An attacker can wait for somebody to upload the winning answer and then simply strip the real winner’s signature off and sign the encrypted blob (which they can’t decrypt) with their own private key - hurray, the attacker has now won the prize!
If you reverse the order of signing and encryption you can run into bugs like [1].
You can securely combine public key encryption and signatures by including extra metadata fields inside each layer. Or you can use a function that provides public key authenticated encryption like NaCl’s crypto_box or the mode I’ve proposed for JOSE [2].
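As an example of the second option, here is a minimal sketch of the crypto_box approach using the PyNaCl bindings (the message and key names are just illustrative): the recipient gets confidentiality and sender authentication from a single primitive, with no separate signature to strip and replace.

```python
from nacl.public import PrivateKey, Box

sender = PrivateKey.generate()
recipient = PrivateKey.generate()

# Sender encrypts to the recipient's public key, authenticating with their own
# private key in the same operation.
ciphertext = Box(sender, recipient.public_key).encrypt(b"the winning answer")

# Recipient decrypts and implicitly verifies the sender in one step.
plaintext = Box(recipient, sender.public_key).decrypt(ciphertext)
assert plaintext == b"the winning answer"
```

In the competition example above, an attacker who can't decrypt the blob also can't re-author it under their own identity, because the authentication is bound into the encryption itself rather than bolted on as an outer signature.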
As a user, if I have a key and an encrypted file, then aside from actually decrypting the file, all I care about is that the software can confirm the file was encrypted using that same key. Why should this be complicated?