Hacker News | purkka's comments

This is a Gmail-specific feature. I'd guess it's there for user convenience and some protection against typos (accidental or malicious).

https://support.google.com/mail/answer/7436150?hl=en


Came across this today. Especially the collection highlights on Wikipedia [0] really made my day.

[0]: https://en.wikipedia.org/wiki/Museum_of_Bad_Art#Collection_h...


No pictures though. Wish I could see some samples of the art.


This is said to be the most iconic work in the collection:

https://arthur.io/art/unknown/lucy-in-the-field-with-flowers


Just go to Collections on their main page (OP's link).


A morbid wish. Something like wanting to look at photos from a murder investigation.


> H0(H1(m)) has the security of just H0. Hashes are not made to protect the content of m, but instead made to test the integrity of m. As such, a flaw in H0 will break the security guarantee, no matter how secure H1 is.

But this isn't true for all flaws. For example, even with the collision attacks against SHA-1, I don't think they're even remotely close to enabling a collision for SHA-1(some_other_hash(M)).

Similarly, HMAC-SHA-1 [1] is still considered secure, as it's effectively SHA-1-X(SHA-1-Y(M)), where SHA-1-X and SHA-1-Y are just SHA-1 with different starting states.

So there's some value to be found in nesting hashes.

[1]: https://en.wikipedia.org/wiki/HMAC#Definition
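To make the "two SHA-1s with different starting states" point concrete, here's a quick standard-library sketch of the HMAC construction from the Wikipedia definition linked above, checked against Python's built-in `hmac` module (the key/message values are just placeholders):

```python
import hashlib
import hmac

def hmac_sha1_manual(key: bytes, msg: bytes) -> bytes:
    block = 64  # SHA-1 block size in bytes
    if len(key) > block:
        key = hashlib.sha1(key).digest()
    key = key.ljust(block, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)  # inner padding constant
    opad = bytes(b ^ 0x5C for b in key)  # outer padding constant
    # Inner hash absorbs (key XOR ipad) first, outer absorbs (key XOR opad):
    # effectively two SHA-1 instances with different starting states.
    inner = hashlib.sha1(ipad + msg).digest()
    return hashlib.sha1(opad + inner).digest()

key, msg = b"secret", b"hello"
assert hmac_sha1_manual(key, msg) == hmac.new(key, msg, hashlib.sha1).digest()
```

The nesting is exactly the H0(H1(m)) shape being debated, just with keyed starting states.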


We are saying the same thing. H0 is SHA-1 in your example.

The strength of an HMAC depends on the strength of the hash function; however, since it uses two derived keys, the outer hash protects the inner hash (using the same hash function), which in turn provides protection against length extension attacks.

The case I was making is that weakhash(stronghash(m)) has the security of weakhash, no matter how strong stronghash is.


> The case I was making is that weakhash(stronghash(m)) has the security of weakhash, no matter how strong stronghash is.

I'll have to disagree. There are no known collision attacks against SHA-1(SHA-3(M)), so in the applied case, a combination can be more secure for some properties, even if it isn't in the theoretical case.
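The construction in question is trivial to write down; the point is that the outer SHA-1 only ever sees fixed-length digests, not attacker-shaped message blocks (SHA3-256 here is just one choice of inner hash for illustration):

```python
import hashlib

def nested(m: bytes) -> str:
    # The outer SHA-1 runs over a 32-byte SHA3-256 digest. The known
    # chosen-prefix collision attacks on SHA-1 need control over the
    # input blocks, so to collide this construction an attacker would
    # have to find two messages whose SHA3-256 *digests* form a SHA-1
    # collision pair, which no known attack produces.
    return hashlib.sha1(hashlib.sha3_256(m).digest()).hexdigest()
```

That's the applied-vs-theoretical distinction: the composition still only offers SHA-1's theoretical security bounds, but the practical attacks don't transfer.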


There is only SHA-1 with a fixed starting state!

Once you change the IV, the hash becomes entirely insecure and can be broken in seconds. You just need to overwrite the first IV word with 0, and it's broken. It's a very weak and fragile hash function. They demonstrated it with the internal constants K1-K4, but the IV is external, and may be abused as a random seed.


I have to wonder how a company at the scale of GitHub can be so bad at keeping track of their status.

Now 4 out of 10 services are marked as "Incident", yet most of the others are also completely dead.


It's because of the way most companies build their status dashboards. There are usually at least two dashboards: one internal and one external. The internal dashboard is the actual monitoring dashboard, hooked up to the other monitoring data sources. The external status dashboard is just for customer communication. Only after the outage/degradation is confirmed internally will the external dashboard be updated, to avoid flaky monitors and alerts. It also affects SLAs, so changing the status needs multiple levels of approval; that's why there are some delays.


> The external status dashboard is just for customer communication. Only after the outage/degradation is confirmed internally, then the external dashboard will be updated to avoid flaky monitors and alerts. It will also affect SLAs so it needs multiple levels of approval to change the status, that's why there are some delays.

This defeats the purpose of a status dashboard and makes it effectively useless in practice, most of the time, from a consumer's point of view.


From a business perspective, I think given the choice to lie a little bit or be brutally honest with your customers, lying a bit is almost always the correct choice.


My ideal would be regulations requiring that downtime metrics be reported, with at most somewhere between a 10 and 30 minute delay, as a "suspected reliability issue".

If your reliability metrics have lots of false positives, that's on you and you'll have to write down some reason why those false positives exist every time.

Then that company could decide for itself whether to update manually with "not a reliability issue because X".

This lets consumers avoid being gaslit, and businesses don't technically have to call it downtime.


Liability is their primary concern


This is intentional. It's mostly a matter of discussing how to communicate it publicly and when to flip the switch to start the SLA timer. Also coordinating incident response during a huge outage is always challenging.


That may be, but there's no excuse.

Declare an incident first, investigate later.

Cheating SLAs by delaying the incident is a good way to erode trust within and without.


> Declare an incident first, investigate later.

If that were the best way to deal with it, why is literally no one doing it this way, and what does that tell you?


Because it involves admitting that you messed up, which companies are often disincentivized to do.


False positives?


The flipside of this is availability. Your T2 coprocessor is now permanently tied to your data. This means if the chip dies, there's no recovery unless you have a backup encrypted with a separate key (with its own confidentiality/availability tradeoff).

(And if anything else on your motherboard dies, Apple's official answer is "you're f*cked", since they refuse to do board-level repair.)

For the threat model of most users, where hardware-based targeted attacks aren't a big concern, this is a bad tradeoff.


It's sort of a weird space we're in in the modern world, where you have to assume that anything on a "device" is ephemeral and fragile, and those of us concerned with data persistence on local hardware need to work at having a path to verifiably-restorable backups.

Cloud is a great solution for most people. But not really an option for "where do I put my decades-stale collection of old home directories" or "mbox files from email in the late 90's".


> For the threat model of most users, where hardware-based targeted attacks aren't a big concern, this is a bad tradeoff.

> hardware-based targeted attacks

You mean physical-access attacks, correct? Is it really just these kinds of attacks that a T2 chip protects against?

AFAIK, if malware has superuser privileges, it can access the RAM of other processes, and therefore it can access the encryption keys those processes store in RAM.

If those processes could have used an encryption API that does the encryption on the chip, and therefore never needed to store encryption keys in RAM, they'd be protected against this kind of attack, which is not hardware-based.


Considering those keys are loaded into RAM for/while decrypting, I don't see how it matters, because the malware would have access to the (now) decrypted data regardless.


> if the chip dies,

I've heard of zero T2s dying. I've heard of Android data being recovered (TFA).


I'd guess what they mean is there are two clusters with some clear distinguishing properties, and perhaps some resulting implications for treatment.


This isn't to say they wouldn't use "regular width" roads during wartime if necessary. No point taking that risk during peacetime.


Apple could still use keys to validate that the module is genuine. Then you just need to trust Apple not to release compromised modules. They just need to stop pairing the individual modules to the phone.


Interestingly, no kernel vulnerability or anything is mentioned.

As far as I know, any parsing code for iMessages should run within the BlastDoor sandbox – is there another vulnerability in the chain that is not reported here?


One CVE is in Wallet and Citizen Lab mention PassKit. My guess is that BlastDoor deserializes the PassKit payload successfully, then sends it to PassKit which subsequently decodes a malicious image outside of BlastDoor.


Yup. You can just have your crafted WebP image (this is the patch for the ImageIO bug: https://chromium.googlesource.com/webm/libwebp/+/902bc919033...) with a .png extension (inside your PassKit payload: https://developer.apple.com/library/archive/documentation/Us...) and send it to your target.


I think you're right but I don't see any detailed information from The Citizen Lab. I expect a lot more information after some embargo timer runs out.

For context, here's another report from them outlining a similar vulnerability: https://citizenlab.ca/2021/08/bahrain-hacks-activists-with-n...


It may be the case that either the kernel vulnerability hasn't been analyzed or fixed yet, or that they were not able to capture it. Many of these exploits have multiple stages and grabbing the later ones is difficult.


Is that still true when you control the source code and compiler? Or just for external researchers?


It’s not that reverse engineering is the challenge but that the exploit likely gets downloaded from a server that isn’t online anymore.


Probably 3 - sandbox escape, being able to launch a process in privileged mode and something that adds to the kernel table of allowed hashes.

But it is totally possible for them to have been able only to identify one of them if they didn't intercept the whole attack.


Image I/O has been mentioned elsewhere. I suspect it's code from that library, running in-process, that's doing it.


Given that the second is an older unit [0] than the redefinition of the metre, and defined based on "nice" subdivisions of the day, it would seem that there's still a bit of a coincidence there.

Since the metre was previously defined by the seconds pendulum, it was entirely defined by the definition of a second and the value of g. From the equations, 1 m = 1 s² × g / π².

While this makes g ≈ π² straightforward, it seems coincidental that the Earth's circumference was close enough to 40 000 km that the redefinition of the metre was a nice power of 10 without too much change to the metre.

[0]: Late 16th century, based on https://en.wikipedia.org/wiki/Second#Fraction_of_solar_day
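The arithmetic behind 1 m = 1 s² × g / π² is easy to check: a seconds pendulum beats once per second, so its full period is 2 s, and solving the standard period formula for the length gives (using the modern standard-gravity value as a stand-in):

```python
import math

g = 9.80665  # standard gravity, m/s^2
# Seconds pendulum: T = 2 s. Solving T = 2*pi*sqrt(L/g) for L:
#   L = g * (T / (2*pi))**2 = g / pi**2  for T = 2 s
L = g / math.pi**2
print(f"{L:.4f} m")  # within about 1% of the modern metre
```

So the pendulum-derived metre and the meridian-derived metre agree to under a percent, which is the coincidence being discussed.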


Well, it's not much of a coincidence. There are all kinds of constants around us, and one of them is prone to be close to a round number.

If the Earth's circumference wasn't a nice number, people would have chosen another one.


Was the meter based on the length of the pendulum similar to the length of the meter today? This doesn't necessarily say they were similar:

> In 1675, Tito Livio Burattini suggested the term metre for a unit of length based on a pendulum length, but then it was discovered that the length of a seconds pendulum varies from place to place.

https://en.wikipedia.org/wiki/Metre#Pendulum_or_meridian


The period is also not independent of the amplitude. That is only the case if you approximate sin x ≈ x in the differential equation.


My understanding (which could be wrong) is that actual clocks use a fancier pendulum than just a weight on the end of a string.


The difference in gravity around the Earth is small enough that the pendulums would agree to within a couple of percent. (Wikipedia cites a measured variation of 0.3% at the time.)

Assuming the second was also quite accurate, the seconds pendulum wouldn't be too far from its current definition given that g ≈ π² to within ~1 % in modern units.

