> I think BLAKE2 is going to become the de jure SHA2 successor at least until SHA3 hardware acceleration becomes ubiquitous.
I think you mean "de facto", unless you think NIST is going to amend SHA-3 at this point to designate BLAKE over Keccak.
As someone on the sidelines who has read the linked Twitter discussion you had with Marc Stevens, I have to say your summary of it honestly seems a bit disingenuous.
You mention speed prominently, but fail to mention his counterargument that raw hash speed isn't relevant for most applications.
In practice hashing speed is drowned out by other things: you're not going to have "Android/iOS devices" (as you bring up) hashing GBs of data as a common use-case, and even if you did, the cycles/byte spent on the hash would be nothing compared to everything else going on.
For applications where hashing speed does matter (e.g. some server needing to batch-hash things) you have the option of buying hardware-accelerated SHA-256 to get on the order of 30% faster than BLAKE: https://bench.cr.yp.to/results-hash.html
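If you want to sanity-check the "raw hash speed rarely matters" claim on your own machine, both algorithms ship in Python's standard `hashlib`, so a rough throughput comparison takes a few lines. This is a minimal sketch, not a rigorous benchmark (the function name `throughput` and the buffer size are just illustrative choices):

```python
import hashlib
import time

def throughput(name, data, rounds=20):
    """Rough MB/s estimate for a hashlib algorithm over `data`."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(name, data).digest()
    elapsed = time.perf_counter() - start
    return (len(data) * rounds / elapsed) / 1e6

data = b"\x00" * (16 * 1024 * 1024)  # 16 MiB test buffer
for name in ("sha256", "blake2b"):
    print(f"{name}: {throughput(name, data):.0f} MB/s")
```

The relative numbers will vary wildly depending on whether your CPU has SHA extensions, which is exactly the point of the bench.cr.yp.to results linked above.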
Then, as you note downthread, your criterion of "at least as secure" only takes into account "known attacks". The absurd logical conclusion of that criterion, taken at face value, is that we'd all be better off if we each used our own bespoke hash function, since cryptanalysis would never be able to keep up.
Or, in other words, if the algorithm that became SHA-1 hadn't been picked by NIST in 1995 it would be a viable 160-bit hash function today, since there would likely be no known attacks against it, as it would have been obscure enough that nobody would have bothered with it.
So the criterion for a "secure" hash must weigh some balance of its algorithm and (more importantly) the cumulative amount of attention cryptographers have spent on it.
> The absurd logical conclusion of that criteria taken at face value is that we'd all be better off if we each used our own bespoke hash function, since cryptanalysis would never be able to keep up.
The reason this "logical conclusion" sounds so absurd is that your reasoning here is absurd.
What matters isn't "cryptanalysts haven't published an attack", what matters is "cryptanalysts tried and failed to attack the design".
If you conflate the two, you will end up confused.
A reductio ad absurdum is always going to be absurd if taken at face value, but that doesn't mean the underlying point it's making isn't valid.
Obviously I'm not saying that BLAKE hasn't gotten any analysis. It was a SHA-3 finalist, it's been looked at by a lot of smart people.
I am saying it's a continuum, and it's safe to assume SHA-2 has gotten a lot more eyeballs and man-hours by now, both due to its age and due to its widespread production use as a NIST standard.
Is that a trite point? Yes; it's trite because it should go without saying. But it's also one you managed to selectively omit in your long summary of a Twitter exchange with a SHA-1 researcher, which is why I'm pointing it out. You're seemingly treating "analyzed" as a boolean property. That makes you seem disingenuous.
It's clear from anyone who reads that Twitter discussion that the thrust of Marc's point is not that we should be "reinforcing public trust in standards". To summarize the discussion in those terms amounts to attacking a strawman.
"Marc Stevens wants to reinforce the public trust in standards, especially among non-experts" isn't an uncharitable summary of this conversation.
> That makes you seem disingenuous.
If I'm mistaken about the point he's defending, it's not an act of dishonesty.
Given that he at multiple points agreed with my arguments that standards bodies make dumb mistakes and yet continued to double down on "we should just follow standards anyway" without caveat, there aren't many alternative interpretations that I'm aware of.
If I still seem disingenuous, it might be that you want me to seem that way. In which case, there's no point in either of us continuing to participate because it has ceased to be about security and instead has become a discussion of ego, which I'm frankly uninterested in.
Actually, I did add caveats and do recognize that not all standards are equally important. In the tweets you link I'm talking about secure standards, as in standards that are actually considered secure.
Rather, my point is that it is better to work through standards as a community, as opposed to your approach, where individuals promote their pet primitives.
Standards serve a role as a focal point: they have the benefit of being big targets that attract a lot of scrutiny, and they make it easier for non-experts to find out whether somebody has found security issues.