> H0(H1(m)) has the security of just H0. Hashes are not made to protect the content of m, but instead made to test the integrity of m. As such, a flaw in H0 will break the security guarantee, no matter how secure H1 is.
But this isn't true for all flaws. For example, even with the collision attacks against SHA-1, I don't think they're even remotely close to enabling a collision for SHA-1(some_other_hash(M)).
Similarly, HMAC-SHA-1 is still considered secure, as it's effectively SHA-1-X(SHA-1-Y(M)), where SHA-1-X and SHA-1-Y are just SHA-1 with different starting states.
So there's some value to be found in nesting hashes.
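As a minimal sketch of the construction under discussion (Python's hashlib, with SHA-3-256 standing in for `some_other_hash`): a collision here would require two messages whose SHA-3 digests collide under SHA-1, which the known SHA-1 attacks don't give you, since you can't control the inner digests.

```python
import hashlib

def nested_hash(message: bytes) -> str:
    """SHA-1 over a SHA-3-256 digest, i.e. SHA-1(some_other_hash(M))."""
    inner = hashlib.sha3_256(message).digest()  # inner (stronger) hash
    return hashlib.sha1(inner).hexdigest()      # outer SHA-1 over the 32-byte digest

print(nested_hash(b"hello world"))
```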
We are saying the same thing. H0 is SHA-1 in your example.
The strength of an HMAC depends on the strength of the underlying hash function; however, since it uses two derived keys, the outer hash protects the inner hash (computed with the same hash function), which in turn protects against length extension attacks.
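For reference, a minimal by-hand sketch of that structure (two passes of SHA-1 with keys derived via the standard ipad/opad constants); it should match Python's stdlib `hmac.new(key, msg, hashlib.sha1)`:

```python
import hashlib, hmac

BLOCK_SIZE = 64  # SHA-1 block size in bytes

def hmac_sha1(key: bytes, message: bytes) -> str:
    """HMAC-SHA-1: H((K xor opad) || H((K xor ipad) || m))."""
    if len(key) > BLOCK_SIZE:                 # overlong keys are hashed first
        key = hashlib.sha1(key).digest()
    key = key.ljust(BLOCK_SIZE, b"\x00")      # pad key to the block size
    ipad = bytes(b ^ 0x36 for b in key)       # inner derived key
    opad = bytes(b ^ 0x5C for b in key)       # outer derived key
    inner = hashlib.sha1(ipad + message).digest()
    return hashlib.sha1(opad + inner).hexdigest()

assert hmac_sha1(b"key", b"msg") == hmac.new(b"key", b"msg", hashlib.sha1).hexdigest()
```

The outer keyed pass is what blocks length extension: an attacker who can extend SHA-1(ipad_key || m) still can't produce the corresponding outer hash without the key.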
The case I was making is that weakhash(stronghash(m)) has the security of weakhash, no matter how strong stronghash is.
> The case I was making is that weakhash(stronghash(m)) has the security of weakhash, no matter how strong stronghash is.
I'll have to disagree. There are no known collision attacks against SHA-1(SHA-3(M)), so in the applied case, a combination can be more secure for some properties, even if it isn't in the theoretical case.
Once you change the IV, the hash becomes entirely insecure and can be broken in seconds: you just need to overwrite the first IV word with 0 and it's broken. It's a very weak and fragile hash function. They demonstrated it with the internal constants K1-K4, but the IV is external and may be abused as a random seed.
It's because of the way most companies build their status dashboards. There are usually at least two dashboards: an internal one and an external one. The internal dashboard is the actual monitoring dashboard, hooked up to other monitoring data sources. The external status dashboard is just for customer communication. Only after the outage/degradation is confirmed internally is the external dashboard updated, to avoid flaky monitors and alerts. It also affects SLAs, so changing the status needs multiple levels of approval; that's why there are some delays.
> The external status dashboard is just for customer communication. Only after the outage/degradation is confirmed internally is the external dashboard updated, to avoid flaky monitors and alerts. It also affects SLAs, so changing the status needs multiple levels of approval; that's why there are some delays.
This defeats the purpose of a status dashboard and, from a consumer's point of view, makes it effectively useless in practice most of the time.
From a business perspective, I think given the choice to lie a little bit or be brutally honest with your customers, lying a bit is almost always the correct choice.
My ideal would be regulations requiring downtime metrics to be reported, with at most a 10 to 30 minute delay, as a "suspected reliability issue".
If your reliability metrics have lots of false positives, that's on you and you'll have to write down some reason why those false positives exist every time.
Then that company could decide for itself whether to update manually with "not a reliability issue because X".
This lets consumers avoid being gaslighted and businesses don't technically have to call it downtime.
This is intentional. It's mostly a matter of discussing how to communicate it publicly and when to flip the switch to start the SLA timer. Also coordinating incident response during a huge outage is always challenging.
The flipside of this is availability. Your T2 coprocessor is now permanently tied to your data. This means if the chip dies, there's no recovery unless you have a backup encrypted with a separate key (with its own confidentiality/availability tradeoff).
(And if anything else on your motherboard dies, Apple's official answer is "you're f*cked", since they refuse to do board-level repair.)
For the threat model of most users, where hardware-based targeted attacks aren't a big concern, this is a bad tradeoff.
It's sort of a weird space we're in in the modern world, where you have to assume that anything on a "device" is ephemeral and fragile, and those of us concerned with data persistence on local hardware need to work at having a path to verifiably-restorable backups.
Cloud is a great solution for most people. But not really an option for "where do I put my decades-stale collection of old home directories" or "mbox files from email in the late 90's".
> For the threat model of most users, where hardware-based targeted attacks aren't a big concern, this is a bad tradeoff.
> hardware-based targeted attacks
You mean physical-access attacks, correct?
Is it really just these kinds of attacks that a T2 chip protects against?
AFAIK, if malware has superuser privileges, it can access the RAM of other processes, and therefore it can access the encryption keys those processes store in RAM.
If those processes had used an encryption API that performs the encryption on the chip, and therefore didn't need to store encryption keys in RAM, they'd be protected against this kind of attack, a kind of attack that is not hardware-based.
Considering those keys are loaded into RAM for/while decrypting, I don't see how it matters, because the malware should have access to the (now) decrypted data regardless.
Apple could still use keys to validate that the module is genuine. Then you just need to trust Apple not to release compromised modules. They just need to stop pairing individual modules to the phone.
Interestingly, no kernel vulnerability or anything is mentioned.
As far as I know, any parsing code for iMessages should run within the BlastDoor sandbox – is there another vulnerability in the chain that is not reported here?
One CVE is in Wallet and Citizen Lab mention PassKit. My guess is that BlastDoor deserializes the PassKit payload successfully, then sends it to PassKit which subsequently decodes a malicious image outside of BlastDoor.
It may be the case that either the kernel vulnerability hasn't been analyzed or fixed yet, or that they were not able to capture it. Many of these exploits have multiple stages and grabbing the later ones is difficult.
Given that the second is an older unit [0] than the redefinition of the metre, and defined based on "nice" subdivisions of the day, it would seem that there's still a bit of a coincidence there.
Since the metre was previously defined by the seconds pendulum, it was entirely defined by the definition of a second and the value of g. From the equations, 1 m = (1 s)² × g / π².
While this makes g ≈ π² straightforward, it seems coincidental that the Earth's circumference was close enough to 40 000 km that the redefinition of the metre could be a nice power of 10 without too much change to the metre.
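A quick check of that relation as a sketch, using standard gravity (for a seconds pendulum the full period is 2 s, so from T = 2π√(L/g) the length is L = gT²/(4π²) = g/π²):

```python
import math

g = 9.80665                        # standard gravity, m/s^2
T = 2.0                            # full period of a seconds pendulum, s
L = g * T**2 / (4 * math.pi**2)    # length from T = 2*pi*sqrt(L/g)

print(f"L = {L:.4f} m")            # ~0.9936 m, i.e. g/pi^2
```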
Was the meter based on the pendulum's length similar to the length of the meter today? This doesn't necessarily say they were similar:
> In 1675, Tito Livio Burattini suggested the term metre for a unit of length based on a pendulum length, but then it was discovered that the length of a seconds pendulum varies from place to place.
The difference in gravity around the Earth is small enough that the pendulums would be within a couple percent of each other. (Wikipedia claims a measured difference of 0.3% at the time.)
Assuming the second was also quite accurate, the seconds-pendulum metre wouldn't be too far from the current metre, given that g ≈ π² to within ~1 % in modern units.
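A rough sketch of the spread implied by surface gravity varying over the Earth (the equator/pole values below are approximate):

```python
import math

def seconds_pendulum_length(g: float) -> float:
    """Length of a pendulum with a 2 s full period: L = g / pi^2."""
    return g / math.pi ** 2

g_equator, g_poles = 9.780, 9.832   # approximate surface gravity, m/s^2
l_eq = seconds_pendulum_length(g_equator)
l_po = seconds_pendulum_length(g_poles)
print(f"{l_eq:.4f} m to {l_po:.4f} m ({100 * (l_po - l_eq) / l_eq:.2f}% spread)")
```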
https://support.google.com/mail/answer/7436150?hl=en