Technical Details on the Recent Firefox Add-On Outage (hacks.mozilla.org)
467 points by headalgorithm on May 9, 2019 | 266 comments



Looks like a good read. I haven't finished reading it yet, but there's something I still don't get ...

Windows and macOS both have a signing infrastructure for apps. The rules of that infrastructure dictate only that an app must have been signed by a certificate that was valid at the time of signing. That way old app downloads don't need to be periodically re-signed just to account for expiring certificates. I can download a 5-year-old version of 7zip or whatever and it runs just fine because it was signed with something valid at the timestamp in the signature. The process of distributing desktop apps would be utterly insane if this were not the case.
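
To make the difference concrete, here is a rough sketch of the two validation rules (illustrative Python with made-up field names, not any real signing API):

  from datetime import datetime, timezone

  def desktop_style_check(cert, signing_time):
      # Accept if the certificate was valid when the signature was made,
      # even if it has expired since then.
      return cert["not_before"] <= signing_time <= cert["not_after"]

  def current_time_check(cert):
      # Accept only if the certificate is valid right now; legitimately
      # signed artifacts start failing the moment the cert expires.
      now = datetime.now(timezone.utc)
      return cert["not_before"] <= now <= cert["not_after"]

  old_cert = {
      "not_before": datetime(2014, 1, 1, tzinfo=timezone.utc),
      "not_after": datetime(2017, 1, 1, tzinfo=timezone.utc),
  }
  signed_at = datetime(2015, 6, 1, tzinfo=timezone.utc)
  print(desktop_style_check(old_cert, signed_at))  # True, forever
  print(current_time_check(old_cert))              # False once the cert has lapsed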

Not following this model for browser plugins seems unnecessarily cumbersome. Is it really worth requiring all browser plugins to be signed by a currently valid certificate? Is there a document or blog post where this is argued to be more appropriate?

I get that it arguably leads to more stringent security, but I'm not convinced by the delta improvement of that model over the desktop model, given the additional downsides. And the "let everything expire after a few years and re-sign it" process should not be used as a substitute for revocation. After all, if it were determined retroactively that a malware extension was signed, does it really help anyone that it can't be loaded in a year or two, given the damage it could cause right now?


It seems Mozilla is in the process[0] of moving the signature scheme to COSE, which allows timestamping[1]. A code comment[2] says that the current package format doesn't allow it.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1545836

[1] https://tools.ietf.org/html/rfc8152#section-4.5

[2] https://searchfox.org/mozilla-central/rev/b9da45f63cb5672449...


If that is true, they should mention this in the post-mortem.

Code signing is a well understood problem with a well known solution, but the blog post discusses everything except the well known solution.

Right now you have a problem caused directly by lack of time stamping, and the article doesn’t even acknowledge that.

That’s not inspiring confidence. I’m genuinely still not sure if they have understood what the actual problem is and how to solve it properly.


> If that is true, they should mention this in the post-mortem.

They might:

> We’ll be running a formal post-mortem next week and will publish the list of changes we intend to make

The lessons noted down here are just some thoughts by the author of this blog post:

> but in the meantime here are my initial thoughts about what we need to do.


Exactly, the article specifically calls for inventorying, not eliminating, "ticking time bombs." As for the inventory: creating certificates outside of ACME, without some kind of calendar/reminder mechanism for their expiry, seems pretty crazy to me.


Nice password by the way.

I thought the mention of "ticking time bombs" showed someone is thinking about this properly, because end users get the same experience if, e.g., a timer gets treated as negative in 2038, or the browser depends on the century field being 20, as they do if an X.509 certificate expires. If you are sure you handled all the certs but you blow up because your GPS epoch wrapped, then you still screwed up.


I share the confusion. I can understand wanting to re-check addon validity somehow to allow for recalls on malicious addons that somehow slipped through. But I'm not sure what threat is mitigated by allowing a recheck to fail just because the clock has advanced, if all the certs were valid at the time of the installation. (Unless placing time bombs is the actual intent?)

The post did say that they'll "be looking more generally at our add-on security architecture to make sure that it’s enforcing the right security properties at the least risk of breakage". I hope they'll be looking at that point here; if they want to support explicit revocation, there may be other ways to do it which aren't quite so prone to invocation by accident (e.g., publishing a revocation list signed by certs valid at the time of revocation).


The idea here may be to invalidate certain add-ons after they have been released in the wild by revoking the certificates used to sign them.


Only one certificate was used to sign all add-ons, and its expiration disabled everything that had been signed. There were no "certain add-ons"; it was all of them.


Per the article, there is a separate "end-entity" certificate for each add-on. They were all signed by the same "intermediate" CA, however.


> They were all signed by the same "intermediate" CA, however.

And the expiration of that "intermediate" obviously should not have made already-accepted add-ons stop working, in this specific use case. It's not about establishing a new trusted communication channel for new content, and it's not about disabling some specific add-on.

Thinking logically, that was not the role of that intermediate certificate.

So the behavior was clearly designed wrong, because a completely wrong analogy was used -- that of creating a connection, where an expired intermediate certificate should prevent a new connection, since an unverified connection could carry a new attack. Here the verification had already happened, and banning a specific set of add-ons was also clearly not the intent.

Additionally, it seems that handling the renewal of the expired intermediate was not part of the design at all.


For the time stamp method to work, you need a trusted mechanism to attest that the timestamp is correct, otherwise the mechanism is useless (an attacker with an outdated private key can just backdate the timestamp in the executable and then sign it). Windows code signing uses a server Microsoft runs to provide this, and Mozilla would need to do the same.

I’m not saying they shouldn’t, but it is a significant piece of complexity.


> Windows code signing uses a server Microsoft runs to provide this, and Mozilla would need to do the same.

Mozilla already requires uploading even your private extensions to a Mozilla server to be signed for internal or external deployment.

https://wiki.mozilla.org/Add-ons/Extension_Signing


> Windows code signing uses a server Microsoft runs to provide this, and Mozilla would need to do the same.

Nope. Actually, all CAs that offer code-signing certificates are required to provide a timestamping service compatible with RFC 3161. These timestamping servers are usually free to use; see for example https://knowledge.digicert.com/generalinformation/INFO4231.h...


Every major CA runs a trusted timestamping service. Mozilla doesn't need to maintain their own timestamping infrastructure, they could delegate to one of the CAs, probably based on some sort of formal agreement with them.

Though the way things stand, all CAs have no problem timestamping sigs made with certs that are from other CAs, so perhaps even no explicit agreement is required.


The checker just has to have roots tracing to timestamping CAs.

I really don't understand why Mozilla designed their system the way they did. Code signing is well understood and probably even done for the Windows installer of Firefox, so why didn't they just duplicate that model? Checking expiration against the current time makes absolutely no sense for code signing, especially at runtime (it could kind-of sort-of make a little bit of sense at install time, but I'm not really convinced, and maybe it's not even really possible to distinguish between the two with an effective boundary).


Can you explain why you need the current time anywhere in this? Say I download a ten-year-old addon, it's signed by a valid signature from that root, with a valid signing date. What's the problem?

Are we worried that someone will steal an old/expired cert and have control over a user's clock?


Imagine you have an old expired key... you take your new malicious extension, and sign it with the expired key and a time stamp that says it was signed at a time the key was still valid.

Without some other verification mechanism, you can't tell the difference between this and an actual signature made when the key WAS valid.


> sign it with the expired key and a time stamp that says it was signed at a time the key was still valid.

The whole point of a trusted timestamp is that such signature cannot be made for a fraudulent date, otherwise it would be utterly pointless.

This scenario and threat model does not exist if timestamping is correctly implemented.
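
The key property is that the date is attested by a party the verifier trusts, not merely written into the package by the signer. A toy sketch of that check (Python, with an HMAC standing in for the TSA's real RFC 3161 asymmetric signature; all names are illustrative):

  import hashlib, hmac, json

  # Toy stand-in for a TSA: it countersigns (hash, time) with a key only it holds.
  TSA_KEY = b"held-only-by-the-timestamping-authority"

  def tsa_countersign(artifact_hash, attested_time):
      payload = json.dumps({"hash": artifact_hash, "time": attested_time}).encode()
      return payload, hmac.new(TSA_KEY, payload, hashlib.sha256).hexdigest()

  def tsa_verify(payload, token):
      expected = hmac.new(TSA_KEY, payload, hashlib.sha256).hexdigest()
      return hmac.compare_digest(expected, token)

  # An attacker with an expired signing key can write any date they like
  # *inside* the package, but cannot produce a valid countersignature for it.
  artifact_hash = hashlib.sha256(b"extension.xpi").hexdigest()
  payload, token = tsa_countersign(artifact_hash, "2016-06-01T00:00:00Z")
  print(tsa_verify(payload, token))    # True: the attested time can be trusted
  forged = json.dumps({"hash": artifact_hash, "time": "2015-01-01T00:00:00Z"}).encode()
  print(tsa_verify(forged, "0" * 64))  # False: backdating without the TSA key fails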


Nope.

RFC 3161 timestamps - which is what we're discussing here - can be fraudulently constructed for any timestamp value by someone who has the private key for the TSA (Time stamping authority). So what your parent described is easily possible: A system that relies on RFC 3161 timestamps has to trust that

* any cryptographic hash algorithms used remain safe

* any public key signature methods used remain safe

* the TSAs retain control over their private keys for as long as you continue to accept timestamps from that TSA

This is a big ask, and in practice the code signing systems you're probably thinking of just don't care very much. A state actor (e.g. the NSA) could almost certainly fake materials for these systems; we know that this has been done (presumably by the NSA or Mossad) in order to interfere with the Iranian nuclear weapons programme in the past.

You _can_ build a system that has tamper-evident timestamping, but it's much more sophisticated and has much higher technical requirements. That's what drives the Certificate Transparency system. CT logs can prove they logged a specific certificate within a 24-hour period, monitors verify that their proofs remain consistent, and the to-be-built Gossip layers allow monitors to compare what they see in order to achieve confidence that logs don't tell different stories to different monitors.

But to achieve this, a CT log must be immediately distrusted if it falls off line for just 24 hours, or if an error causes it to not log even a single certificate it issued a timestamp for. Massive earthquake hit your secure data centre and destroyed the site? You have 24 hours to get everything back on line or be distrusted permanently. Bug in a Redis configuration lost one cert out of 4 million issued? You are distrusted permanently. Most attempts to build a CT log fail the first time; some outfits give up after a couple of tries and just accept they're not up to the task.


> can be fraudulently constructed for any timestamp value by someone who has the private key for the TSA

Sure. Which is why these are heavily secured and guarded. Just like the keys for any cert, and highly trusted root certs in particular.

Any private/public crypto system can be compromised if the private keys are leaked. Everyone knows that.

That however is in no way a good argument for not using timestamps.


RFC 3161 timestamps are used because they let people do something Mozilla doesn't care about at all and which was largely irrelevant here.

Alice the OS Vendor wants to let Bob the Developer make certificates saying these are his Programs, she is worried Bob will screw up so his cert needs to have a short lifetime, but her OS needs to be able to accept the certs after that lifetime expires so users can still run their Programs. So, Bob makes certificates and uses Trent's public TSA that Alice authorised to prove they were made when they say they were. Alice only has to trust Trent (who is good at his job) for a long period, and Bob who can be expected to screw up gets only short-lived certificates.

But Mozilla's setup doesn't have these extra parties. There is intentionally no Bob in Mozilla's version of the story, they sign add-ons themselves, so timestamping plays no role. If a 25 year TSA would be appropriate (hint: it would not) then a 25 year intermediate cert would be just as appropriate and simpler to implement for Mozilla.


Oh, the developers are signing their own extensions? I wasn't aware how it worked and I was missing that part, I thought Mozilla signed them on upload (and thus could trust itself to not be malicious).


Well, Mozilla does the signing, but we only know that to be true as long as they control the private key. The whole point of expiration is to mitigate the risk of an older key being stolen (or cracked).

So yes, Mozilla signs the extensions, but that doesn't change the importance of keeping the private key private... that is HOW we know it is Mozilla doing the signing


How did you get ahold of an expired key?


The whole point of key expiration is that somebody might get a hold of it (or crack it).


Not current time, but time of signing of the executable (not signing of the code sign certificate itself). If you (as the OP suggested) use time of signing instead of current time, the whole point is you’re not using the user’s clock anymore.


Mozilla is already packaging the executable inside an archive. That archive contains the executable, manifest, and any other needed files. The signer can add a timestamp, either as a new file or as a part of the manifest. Then the timestamp is signed along with everything else, and can be checked for validity.


If all extensions are signed by Mozilla’s own certificate, then adding a timestamp won’t do anything for the reason I outlined above and they should just turn off expiration validation altogether. The time of code signing check is only useful if you want to trust one certificate (the developers) to sign things only within the duration noted in the cert, but are willing to trust another certificate (e.g. Microsoft’s timestamp cert) forever. If there’s only one certificate in play owned by the authority for the whole system, then there’s no point in the timestamp at all (for validation purposes).


It depends on the level of security you want.

Let's say you compromise by letting old signatures stay valid, but only for a year. And you rotate the intermediate cert every 90 days.

This system is more secure than the old one, because you only have to worry about key leaks for 15 months, instead of years.

But at the same time, it's impossible to have this giant wave of everything failing at once. Instead the best case is nobody can sign extensions for a couple days, and the worst case is extensions that updated exactly a year ago start to fail in real time.
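
Back-of-the-envelope, the 15 months is the worst case of "signature made on the last day of a 90-day intermediate's life, still accepted for a year afterwards":

  from datetime import timedelta

  signing_window = timedelta(days=90)      # each intermediate signs new add-ons for 90 days
  acceptance_window = timedelta(days=365)  # old signatures stay accepted for a year

  exposure = signing_window + acceptance_window
  print(exposure.days)  # 455 days, roughly 15 months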


Ahh, right. Good point. I wonder then if the current system was a deliberate choice over the timestamp method, or just deemed easier to deal with on their end.


Maybe I'm jumping the gun here, but doesn't it seem that this kind of problem is exactly the kind of problem where a blockchain solution would be useful?

Or at least more useful than the 'when all you have is a hammer' style applications of blockchains.


Given that you have a clear authority here (Mozilla) I don't see why you'd query a blockchain instead of querying signatures.mozilla.org

As always Blockchains are a good solution only for a tiny group of problems.


> As always Blockchains are a good solution only for a tiny group of problems.

Flowchart here: http://philippe.ameline.net/images/ShouldYouUseBlckchn.jpg


An OS owns and can enforce policies around the filesystem. For example, a future release of macOS might say that files cannot be both executable and writable, and the only way a file can be marked executable is by the security system validating a signature, or an administrative user override.

Firefox has to deal with malware injecting extensions outside the normal browser process, so I'm not surprised they would default to having periodic re-checks.


One small note on your mention of macOS, and I don't know if this has been fixed. A couple of years ago, the certificates on some versions of the OS installer, like El Capitan, expired and could no longer be verified. While there's the simple workaround of changing the clock, it did give me pause when trying to get an old Mac upgraded to the latest supported operating system.


On a similar note, I remember when Apple had a similar outage in the App Store [1], and a lot of my App Store bought software stopped working, claiming to be damaged. The workaround was to re-download the software again from the App Store, but because Apple doesn't let you download old versions, some of the apps had since updated to a newer macOS version than my machine could handle & I couldn't download them with the new certificates. Some of the smaller indie developers were gracious enough to send me non-signed non-App Store builds that I could keep using, but I did lose some software I'd bought. I haven't bought from the App Store ever since.

Considering Mozilla's certificate failure, Apple's cert failure, and the number of websites I encounter that have forgotten to renew their certificates, it seems like a broken system. Or a really effective form of DRM, I'm not sure which.

[1] https://discussions.apple.com/thread/7336980


How can you tell an app has actually been signed in the past? It could just as well have been signed today with a fake date. This defeats the purpose of expiry.

You could just as well have certificates that never expire and just start signing with a new one if you feel like it.

Note the Mozilla ‘solution’ has the same problem: their root now authorized a new certificate with the old public key. If the original expectation was that the private key would be safe for a certain number of years (which is why you have expiry dates in the first place) the unsafe private key is now valid again. This defeats the purpose of expiry.


> It could just as well have been signed today with a fake date.

The traditional option (used by things like Java and ActiveX since the early 2000s) is Trusted Timestamping [1], where a trusted third party provides a signature (such as via the RFC 3161 Time Stamp Protocol). AFAIK every certificate authority that sells code-signing certificates also provides a free timestamping server. If you trust a CA to issue code-signing certificates, the theory is you'd trust them to timestamp too.

If you don't like having the trusted third party, you can also publish the hash in some write-once medium, such as on the blockchain, in certificate transparency logs, or in the small ads of a reputable newspaper.

[1] https://en.wikipedia.org/wiki/Trusted_timestamping


You don’t trust an authority, you trust a private key that matches a public key you know. The point is that as time marches on the key might leak or be cracked and then your trust is misplaced. So you say the key pair expires and you stop trusting it before that happens.

This trusted timestamping relies on the same signing things with public and private keys process so it just adds a step, it doesn’t solve the problem. The timestamping key needs to expire as well and then it can’t be trusted anymore.


Plenty of the CA root certificates in my browser's trust store have 30 year validity periods. The "keys should have short expiry periods" rule doesn't appear to apply to CAs.

Admittedly, CAs are held to higher standards than most certificate users in terms of keeping keys in hardware security modules and suchlike. So perhaps it's not 100% unjustified that they get longer validity periods.


The code signing needs to happen through a trusted / certified timestamp server; it's not using the local clock. This process basically guarantees that the application was signed with a certificate that was valid at the time of signing, and that it hasn't been tampered with since then.

With this, the time when the code-signed application is executed doesn't matter at all, fake date or not.


Technically, I think one must be clear here. The certificates (client and CA) all have dates, and on that you are right. But what you miss is that revocation is the key measure. It is not just about signing for a period: the CA and the root CA list have to maintain those certificates and revoke them when needed. You should not trust only the date and the signature; the revocation list is key.

In other words, you basically check every time you use it. That is how stolen Microsoft certificates were stopped.

All old apps are checked for validity. It is not as you implied.
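
Roughly, a use-time check under that model might look like this (illustrative Python, not any real API; the cryptographic chain and signature verification are reduced to a stored boolean for brevity):

  def may_run(app, revoked_serials):
      cert = app["cert"]
      # Old apps keep working: the cert only had to be valid when the
      # signature was made, not at launch time...
      if not (cert["not_before"] <= app["signing_time"] <= cert["not_after"]):
          return False
      # ...but a stolen or abused certificate can still be pulled
      # retroactively, because the revocation list is consulted every run.
      if cert["serial"] in revoked_serials:
          return False
      # Stand-in for the actual chain and signature verification.
      return app["signature_ok"]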


My point of view as a long-time Firefox user that cares about privacy but also knows we live in an imperfect world: It obviously sucks that this happened but I think they handled it very well.

The bug was fixed so quickly that I wouldn't even have realized it had happened if it hadn't been for the thread here on HN. My extensions hadn't even been disabled yet by the time the patch came out. And pushing out the hotfix through studies followed by a new version probably ensured that a large fraction of the "average joe" userbase didn't even realize there was a problem.

So obviously there are some improvements to make for the future but I think some of the criticism over the last few days has been a bit harsh. Firefox is still my preferred browser by far.


> The bug was fixed so quickly that I wouldn't even have realized it had happened if it hadn't been for the thread here on HN

Maybe this is a timezone thing, but I was in East Asia, and I had to deal with the internet for close to 36 hrs (Android) with no uBlock. It was almost enough to look for a new browser (but browsers with adblock on Android are few and far between, so instead I just didn't use the internet as much for a day or two). Part of that delay was the Play Store being slow to push the fix, as I recall seeing the binaries somewhere a while sooner.


There's a couple of options on Android; you could use the DDG browser[0] or the Privacy Browser[1] (which I know sounds dodgy but seems legit). They don't have 'adblock' exactly, but I think they implement a lot of the same lists, such as EasyList.

[0]https://f-droid.org/en/packages/se.johanhil.duckduckgo/

[1]https://f-droid.org/en/packages/com.stoutner.privacybrowser....


Thanks for those! I usually browse on desktop, so hadn't looked into other browsers, but I'll keep these alternative browsers in mind (and comment history) in case a situation comes up again. The only other instance where I considered switching was the whole forced Mr. Robot addon debacle a year or two back.


or Brave


One option would've been to use Nightly and set xpinstall.signatures.required = false in about:config. That's exactly what I did.
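
For anyone wanting the same effect persistently, the pref can also be set from a user.js file in the profile directory; note it is only honored on builds that allow it (Nightly, Developer Edition, unbranded builds, and some distro builds), while regular desktop release builds ignore it:

  // user.js in the Firefox profile directory
  user_pref("xpinstall.signatures.required", false);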


I view using Nightly as enough of a barrier to entry that I might as well switch to a totally different browser. If the issue had lasted longer, I would have found (someone on the internet who had found) a solution like that.


On Android, you already had the xpinstall.signatures.required option without having to install nightly. Linux too. Took me maybe 30 second to fix all my devices.


This depends on your Linux distro. The package maintainer has to set a build flag to allow disabling the signature requirement. This should be set on Debian, and probably distros downstream from Debian, but was not set on Arch Linux last I checked.


Huh, never knew that. I wonder if that's because it's harder for Android apps to edit the settings of other Android apps (unless they have root access, but that's much more rare on Android than, say, Windows or Linux or macOS)?

Mozilla's official Linux builds disallow xpinstall.signatures.required = false (last I checked), but the unbranded builds (as well as builds provided by at least some repos) do indeed allow signature bypassing.


This is the first I heard of that solution, and we're what, 5 or 6 days from the issue? I read a number of Reddit posts and the Mozilla blog post and I don't recall seeing this mentioned.

Edit: on second thought, maybe I saw this solution (with or without the mention of nightly) and skipped over it as the Mozilla blog post on May 4th said "There are a number of work-arounds being discussed in the community. These are not recommended as they may conflict with fixes we are deploying."


I use Firefox Nightly as my primary browser on Android. It works fine. You can get it on the Play Store right alongside where you'd get non-Nightly Firefox for Android. I'd hardly call that a "barrier to entry" at all (certainly no more than there would be for, you know, normal Firefox).


> I'd hardly call that a "barrier to entry" at all

Would I have to sign in to sync again to access my bookmarks, logins, etc? Are there ever issues with syncing between phone and computer (nightly to stable) or would I have to change my desktop browser as well? Does anything ever break at all?

Even if the answer to these questions is "no", the fact that I'm asking them is the barrier to entry. And if any of the answers are yes, there's no reason to change from stable branch - avoiding one issue to get different one(s) isn't a solution.

> certainly no more than there would be for, you know, normal Firefox

Yes, indeed, switching to Firefox from Chrome a couple years ago did have a significant barrier to entry


You do need to sign in to Firefox Sync on each version if you install both, since two distinct applications on Android don't share each other's data.

On the desktop front there is no problem connecting current and Nightly to the same sync account, and indeed to the same profile directly.

The sole annoying thing about Nightly is that it naturally updates frequently. It's quite stable, gets legitimately useful features faster, and lets you disable signing and run locally built add-ons. I use it as my primary browser on Android and Linux.


> Even if the answer to these questions is "no", the fact that I'm asking them is the barrier to entry.

Fair enough. The answers, for the record, are indeed "yes" (but that takes, what, 30 seconds?), "no", and (at least not severely) "no".

But apparently even non-Nightly Firefox for Android supports xpinstall.signatures.required = false, which is even less of a barrier to entry, so that's good news, I guess. While I understand Mozilla's reasoning for not wanting a bunch of people to set this and forget about it, it's a bit ridiculous that not once did they mention it aside from a "don't do this thing that we're not going to specify because it's a hack" (of course it's a hack, and it's one that got me up and running again long before there was even a fix via Studies).


> The answers, for the record, are indeed "yes" (but that takes, what, 30 seconds?), "no", and (at least not severely) "no".

Yeah, the last two would have been the broader deal breakers. The first one is just an issue for me personally - I don't know my sync password. I have it written down at home, but I'm not there right now.


Gotcha; that would indeed be a problem :)


You should give Brave a shot. Seriously, every time I open Chrome on my phone by accident, I'm horrified at what the internet has become.


Why? I use firefox+uBlock on my phone, and don't have chrome installed on any devices.


Because, at least on Android, it's an order of magnitude faster. No extensions needed. I do look forward to Fenix, it's -very- nice, but not stable enough for me to drive it just yet.


Fenix sounds neat, I hadn't heard of it. Thanks.

I have no issues whatsoever with the speed of Firefox on mobile. I also prefer to support a non-chromium browser.


I find it hard to marry up "Firefox user that cares about privacy" with being happy about the study mechanism.

Don't you, from a privacy perspective, find it more than a little disturbing that the study mechanism has so much access to internal APIs in Firefox that it can install certificates without your involvement?

That seems like a crazy security risk, let alone privacy risk. It's built in and enabled _by default_ in Firefox.


It’s funny how the mind works. We tend to accept the far greater risk of allowing automated software updates (either in-app or by blindly trusting your package manager, e.g. apt-get upgrade), but the Firefox Studies mechanism, which does a minuscule, strict subset of what any automatic update can do, is somehow a “crazy security risk”.


Normal update channels usually ask me (or let me set it so that they have to ask me) if I want to apply the update, and I make sure to at least read the changelog if not skim through the source. In addition to that, I trust the Debian team much more than I trust the Mozilla team. The Studies, in contrast, are used for anti-user things such as telemetry and the Mr. Robot thing, are inconvenient to disable, are not transparent (it would be difficult for me to find the source of a specific study), happen silently, and skip the Debian team's judgement.


I love it. You read the changelog and what, you reject the update for some trivial thing you don't like and stay on that previous version forever?

Seriously, look at how studies operates. If you're using firefox, go to about:studies and see for yourself.


No, I reject the update for non-trivial issues which I feel that violate my privacy or could pose some other serious threat and move to some other browser as soon as possible.

> look at how studies operates

It works in the background and without asking you anything. Is that correct?

> If you're using firefox, go to about:studies and see for yourself.

It says "You have not participated in any studies." - probably because I have them disabled.


I chose to run apt-get to install packages. I have never opted-in to, nor even known about, how Firefox was running "studies" on me before this event. It's like finding out the TV you bought last year has a hidden camera and it's been recording you the whole time.


It's not really fair to call it "studies" in quotations. It's better described as software micro-updates which can be progressively deployed in case the change has a regression. They do little more than enable/disable features already shipped in the binary you're running.

In my own browser, if I go to about:studies, I have two Studies running. One is the hotfix for the add-on signing issue. The other is:

  prefflip-push-performance-1491171 • Active
  This study sets dom.push.alwaysConnect to false.
If you equate that to a camera hidden in your TV, you might want to consider giving up whatever drugs you're on.


In case you're curious, that other one is part of https://blog.mozilla.org/services/2018/10/03/upcoming-push-s...

(Disclosure: I work for Mozilla)


In the case of apt-get, there's a release process with signed packages and open source code that the distributions adhere to, publicly visible oversight etc. End users specifically make the choice, either to run the upgrade process themselves, or set up automated upgrades on the understanding of what processes things have gone through, and the ability to verify.

The Studies mechanism occurs silently, running private code, without specific user action, and with "extensive access to Firefox internal APIs".


The Studies mechanism is clear, public and fully transparent. You can see what they've done. You can see what they plan to do. It's all out in the open.

At the end of the day the question is whether you can trust Mozilla. I trust them more than most entities, including many that push changes through apt-get.


Yes, Studies, that mechanism so open that end users didn't know about it until they found they were participating in some random Augmented Reality marketing collaboration with a TV show.

I would love to see a link to the code related to various Studies. So far I've not been able to find any. All you seem to get told is the name of any studies you're in, and only if you go over to about:studies and go looking.

This is not to say I don't trust Mozilla. I do. I trust them far more than I do Google / Chrome. It's hard not to see Studies as a privacy nightmare, though, and the level of power it has is disturbing.


You can see the bugs filed for past and upcoming studies at https://bugzilla.mozilla.org/buglist.cgi?list_id=14712186&re...

When the study involves an addon, I think the bug will link to its code.

(Disclosure: I work for Mozilla)


As far as I understood, the installed certificate still has to be signed by a root certificate? So wouldn't it also be valid if served by any other party? Why Studies need access to internal APIs is a good question; I had Studies enabled and the only ones I saw switched a setting...


I guess I am just a bit more optimistic about Mozilla as a company than you might be. Sure they could abuse the study mechanism to spy on people and hack my browser. I just assume that they have neither the motivation nor the lack of ethics to do so. And I have automatic updates enabled anyway, so they can already install whatever binaries they want on my machine.

The other thing is that it's not like there are many other options for me. The only mainstream browsers I could realistically use are Chrome, Firefox and Edge. Out of those I think Firefox probably cares the most about my privacy.


If you're using Firefox, and don't trust them not to collect data you don't want them to, then some "study mechanism" on/off switch isn't going to stop them.

If you don't trust them at least a little bit, you shouldn't be using their software full stop.


It was a couple of days before the Android fix was released. For many users this was a multiday affair. Some lost data (containers).

You were lucky your extensions were not disabled before the fix, but for many people this was a major problem.


I unfortunately lost my container data (luckily I only have a couple set up).

To get the fix though, I had to opt in to the Firefox studies. Apparently I had opted out at some point in the past.


Probably when they stealth installed an extension to promote Mr. Robot without telling you.


I caught that before it was posted in the news, and it really disturbed me to see it. "MY REALITY IS DIFFERENT THAN YOURS" or whatever it said. What's that about? Talk about unprofessional. That, the removal of Live Bookmarks, and this weird Pocket thing (seems like a feature, not a product, and one that should've been built from scratch to maximize uniqueness) have really degraded my trust, and I've been using Firefox and only Firefox since 2002. They're definitely losing me over time; it's the community that saves Firefox. Mozilla enables that but shouldn't really get credit for it.

Currently testing out "Edgium" though, and I can already say it's the one browser that could pull me away.


I'm a relatively recent Firefox user (within the past few months), so that was far before I switched over.


Opt back out if you haven't already.


I don't think containers are supported on Android.

But it did take a few days for the Android fix to be released on Google Play, so it was more annoying than on the desktop.


Handled it very well? That must be the worst hyperbole I have seen in a while. It broke extensions suddenly for millions of users. If you went on Firefox.com there was absolutely no news about it. Completely awful lack of communication. You had to find a minuscule banner on the "find a fix" page in order to hear about the problem.

Add to that that the recommended fix was to activate the backdoor of Firefox; it's just a horror story from beginning to end.

And no admission of guilt anywhere.


> And no admission of guilt anywhere.

> We strive to make Firefox a great experience. Last weekend we failed, and we’re sorry. (...) We let you down

https://blog.mozilla.org/blog/2019/05/09/what-we-do-when-thi...


> My point of view as a long-time Firefox user that cares about privacy but also knows we live in an imperfect world: It obviously sucks that this happened but I think they handled it very well.

I think they handled it terribly… or rather, they handled the same event three years ago terribly. There's a reason this is called armagadd-on 2.0… that cert had expired once already, the previous fix didn't work. I think last time they actually had a week of lead time, too.


They still haven't explained WHY this happened. I know it's easy to overlook a certificate's expiration date, but they should have known about this possible issue when they generated that certificate in the first place.


They were able to push out a fix so fast because they repurposed the "sure, do some studies on my usage" to enable arbitrary changes to the browser that are under the control of the marketing team. And a lot of people were "opted in", thinking they had opted out.

That is 100% not reassuring. Remember Looking Glass?

https://news.ycombinator.com/item?id=15956325

Well, the marketing team has the power to tweak how ads are handled, silently, with no update action from the user.

Out of the frying pan, into the fire.


To contrast, I noticed my extensions were disabled first thing in the morning, and if I had not seen the thread(s) on HN I would have spent maybe a few hours trying to fix it.


I had a totally different experience.

I'm still running FF 56.0.2 because I can't live without Tab Mix Plus.

The official line was to wait for the update, but about:studies never came up with anything even after 24 hours of waiting, and everyone was saying it was fixed but it wasn't for me, I presume because nobody cares about the refugees stuck on pre-add-on-breaking versions.

So it was completely broken until I finally found a reddit thread that described how to use the developer console to manually import the certificate extracted from the fix.

That worked, but it enabled all my add-ons, even ones that were previously disabled.

Bloody irritating. I didn't even know that it was possible to break things remotely like this.


You realize, I hope, that 56.0.2 is riddled with security holes at this point? I get the attachment to old addons, but 129 CVEs (including multiple severe memory corruption bugs) affect that version now. It's not really reasonable to expect Mozilla to keep maintaining it.


I don't expect them to maintain it, but I do expect them not to break it for no reason.

Also they could have mentioned in their post that their fix did not do anything for older versions, instead of specifically telling everyone to just keep waiting.

xpinstall.signatures.required = false didn't fix it.

I'll be very happy to update when there's a version that has a good tab manager. I'm on the latest version at home and it regularly loses whole windows full of tabs, even though it is set to restore my session on startup. And there's no way to manually save sessions. It's hopeless.


In this day and age you're living on borrowed time using an old version of a web browser. This state of affairs has its good and bad sides, but modern-day browser vendors don't typically maintain branches of old versions to backport security and bug fixes (especially when they change things for security reasons, as with this case of old-style add-ons being removed). If you're going to insist on using an old unsupported version, you have to accept the risk of things eventually breaking and the inconvenience of workarounds like the one you described.


Well I've just been educated on a new mechanism by which things can be remotely broken.


Just to be clear: supported version or not, this sucks, and I hope we'll have a fix for you.

I wanted to point out this wasn't remotely broken, however. Even if you had no internet connection, your addons would have stopped working when the certificate expired.

(Disclosure: I work for Mozilla)


Hey sciurus,

I just wanted to chime in way down deep in this comment chain because my thought only makes sense in the context of your comment right here.

I think there may be a special mode of operation of Firefox that may need to be considered here.

You said, "Even if you had no internet connection, your addons would have stopped working when the certificate expired."

This seems like an unfortunate design flaw to me. Consider a Firefox kitted out with a specific set of add-ons set up to the user's liking. Then the network that Firefox sits on becomes permanently cut off from the internet and can no longer make contact with the Mozilla mother ship. Maybe it's running in a VM, or maybe it's running in a country with an oppressive regime. I can think of many scenarios where a Firefox would be cut off.

I think it is a reasonable expectation that the marooned Firefox should continue to run indefinitely without failure. Perhaps the user could be occasionally (monthly, yearly) flagged with warnings that the mother ship could not be contacted, but other than that, nothing should fail.

Please consider this and share it with your teams when the post mortem is discussed.

Thanks! I'm a loyal user since before Firefox.


> I think it is a reasonable expectation that the marooned Firefox should continue to run indefinitely without failure.

I personally agree this is a worthwhile goal. The blog post talks about "tracking the status of everything in Firefox that is a potential time bomb and making sure that we don’t find ourselves in a situation where one goes off unexpectedly." I expect once that is done we'll be positioned to evaluate if and how we could support this.

(Disclosure: I work for Mozilla)


Great! I think you understand me fully. Appreciate the willingness to explore such a strange but interesting freedom-related use case.


Well that's reassuring! :-)

I've fixed it now anyway, in the way I specifically read that I wasn't meant to do.

Maybe you guys could inform us unsupported old version users what we should do instead of waiting for an update that can't come?

It's ok, thanks for your post. I know stuff is complicated and shit happens.

I'm much more disappointed that the latest Firefox doesn't have a working session manager than I am about this mixup. IMO that should be a core browser function.


In case it still helps you or someone else: A fix for Firefox 52 through 56 is available now at https://addons.mozilla.org/en-US/firefox/addon/disabled-add-...

(Disclosure: I work for Mozilla)


Great. Thank you.

Would you mind please telling whoever you need to that we need automatic session saving/restoring to work properly, and then lots of people like me will happily update.


What did you expect to happen? You're using an online service connected to signing authorities. Of course it can be remotely broken at any time, that's how the internet works, it's all remote.


He's using a product. A product. Having it attached to an online service is not a feature.

I think part of the reason this fiasco stirs up so much emotion is that normal people, including tech professionals, still expect the browser to behave like a product, not a service. And products aren't supposed to randomly break like that; they aren't supposed to ship with a time bomb attached.


> What did you expect to happen?

Not this? I didn't think things would break with updates disabled.

Like I said, now I've been educated on a new mechanism for things to break.


I too am a Tab Mix Plus fan. I was using an ESR version to try to maintain better security while still using my favourite add-on. Now that's seemingly not possible. I am quite prepared to put up with security vulnerabilities for my personal use of browsers; I use sandboxing / firewalls / VMs to protect myself. I'm not happy that an older version of Firefox was broken. I don't expect a company to maintain older versions forever, but I do expect them to not deliberately or accidentally break them.

Give me back my Tab Mix Plus!


Related to this: Mozilla has deleted Telemetry data for those users who enabled Telemetry to get the hot-fix [1]

[1] https://twitter.com/firefox/status/1126593558490693632


Edit: an error on my part. Mozilla is deleting ALL Telemetry data collected during that time period, not just from certain users. From the post linked from the tweet [1]:

>In order to respect our users’ potential intentions as much as possible, based on our current set up, we will be deleting all of our source Telemetry and Studies data for our entire user population collected between 2019-05-04T11:00:00Z and 2019-05-11T11:00:00Z.

[1] https://blog.mozilla.org/blog/2019/05/09/what-we-do-when-thi...


I know Mozilla is not your typical for-profit organization, but it is so nice to see "We failed and we are sorry." It felt good. This post-mortem read like a genuine story of what happened and how they solved the problem. No lawyer language. No bullshit.

I love Mozilla.


Thanks for sharing this. I didn't know they were doing that. This helps restore some of my faith.


Interestingly, it seems like it's not just that they didn't realize the certificate expire date was zooming past; they didn't even realize that particular certificate expiring was a thing. If they had noticed 6 months ago it was about to expire -- they still would have had to figure out what to do about it, there wasn't an actual documented procedure in place to swap in a new non-expired cert without disruption to FF users with existing signed add-ons installed. If they had known the expire date was upcoming, they just would have had more time to figure out what to do about it in a leisurely fashion.

I wonder if the engineers who implemented the original system had a clear idea of what would be done when the cert expire approached and it just never got documented (or FF accidentally changed to make it no longer applicable?), or if they just figured they'd figure that out years later when the date approached.


I would've liked to see some reflection or even just acknowledgment about the fact that they intentionally disabled the "xpinstall.signatures.required" setting on Windows and OSX. I hope it's at least in the formal postmortem.


That's covered at https://blog.mozilla.org/addons/2015/04/15/the-case-for-exte..., which is linked from ekr's post. I encourage you to read the rationale as a whole, but the specific question you're asking is addressed here:

> Many developers have asked why we can’t make this a runtime option or preference. There is nowhere we could store that choice on the user’s machine that these greyware apps couldn’t change and plausibly claim they were acting on behalf of the user’s “choice” not to opt-out of the light grey checkbox on page 43 of their EULA. This is not a concern about hypotheticals, we have many documented cases of add-ons disabling the mechanisms through which we inform users and give them control over their add-ons. By baking the signing requirement into the executable these programs will either have to submit to our review process or take the blatant malware step of replacing or altering Firefox. We are sure some will take that step, but it won’t be an attractive option for a Fortune 500 plugin vendor, popular download sites, or the laptop vendor involved in distributing Superfish. For the ones who do, we hope that modifying another program’s executable code is blatant enough that security software vendors will take action and stop letting these programs hide behind terms buried in their user-hostile EULAs.


Does Windows et al really not provide a mechanism for saving privileged settings in a tamper-resistant way? I frankly find that hard to believe. How does other software solve this problem?

There are of course always workarounds on an open platform like Windows/Mac/Linux, but the threshold isn’t “impossible”, it’s just “as difficult as injecting into the browser’s code.”

Edit: For example, what if the config file contained a checksum of the file's contents + the user's hardware? If the setting is changed by the user within Firefox, the checksum is updated and everything works—if the checksum is invalid, settings are reset to the default.

Video games will occasionally do this type of thing with their config files. Modders often figure out the formula—but again, the idea here isn't to make editing the file impossible, it's to make it as difficult as injecting into the Firefox executable.
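
A rough sketch of that idea (illustrative Python; as the replies note, anything the browser can compute locally, greyware running as the same user could compute too, so this only raises the bar rather than creating a real security boundary):

  import hashlib, hmac, json, uuid

  # Derive a per-machine key from a hardware identifier.
  # (uuid.getnode() returns the MAC address; a real scheme would mix in more.)
  MACHINE_KEY = hashlib.sha256(str(uuid.getnode()).encode()).digest()

  def save_settings(path, settings):
      body = json.dumps(settings, sort_keys=True)
      tag = hmac.new(MACHINE_KEY, body.encode(), hashlib.sha256).hexdigest()
      with open(path, "w") as f:
          json.dump({"settings": settings, "tag": tag}, f)

  def load_settings(path, defaults):
      try:
          with open(path) as f:
              blob = json.load(f)
          body = json.dumps(blob["settings"], sort_keys=True)
          expected = hmac.new(MACHINE_KEY, body.encode(), hashlib.sha256).hexdigest()
          if hmac.compare_digest(expected, blob["tag"]):
              return blob["settings"]
      except (OSError, KeyError, ValueError):
          pass
      # Tampered, missing, or copied from another machine: fall back to defaults.
      return defaults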


I frankly find that easy to believe. After all, operating systems give desktop apps a lot of trust because they trust these desktop apps to respect the users' trust. It is very recent that we now have desktop apps running potentially untrusted code that can subvert the trusted desktop app.

Of course, there are ways but just none are convenient. Generally the well established security boundaries are between the operating system and the user, and then between different users. Almost all the ways I can think of involves Firefox temporarily elevating privileges, which is undesirable.


Windows is starting to have something, but it is only available in the latest versions of Windows 10, probably not enabled by default, and the model might not be exactly what is needed anyway.

I don't even remember what it is supposed to protect against precisely -- and MS does not communicate a lot about it. They maybe don't officially consider it a security boundary (same as UAC). Mozilla seems to be concerned about attackers having admin privileges (but who still stop short of just plainly hacking the Firefox binary or something crazy like that): that's quite hard to defend against... and if you can, it means the user has actually lost some control over their own computer, which is a situation with its own issues.


I mean, there are the Windows UAC and the macOS password prompts that elevate user access, but people often click through those. I can see their argument here.


No amount of security measures can stop users from being tricked by a social engineering attack into pwning themselves. At some point we need to stop taking control away from users, because this road ultimately leads to turning computers into cable TV.


macOS's System Integrity Protection, aka rootless, means that, yes, macOS has a way to store settings that even root can't normally touch, though I suspect only Apple can make proper use of that right now.

But Windows? A sadder story, I think.


So that means going forward if Mozilla don't want you to have an add-on then you won't be able to enable it?


Unless you run an unbranded builds ( https://wiki.mozilla.org/Add-ons/Extension_Signing#Unbranded... ), or the Developer Edition ( https://www.mozilla.org/en-US/firefox/developer/ ), where setting xpinstall.signatures.required to false still works, then yes, Mozilla can prevent you from using an add-on.

More than "going forward", this has been the case for a while now. It's been long enough ago that they disabled that setting in stock Firefox that I don't remember exactly when it happened.


Sadly the unbranded builds do not have updates enabled and the developer edition is basically beta.


Wait, why the heck don't the unbranded builds auto-update? To what purpose?


We do not take an editorial stance on add-on content with regard to signing, but we do have the ability to block add-ons that are malicious or which violate user privacy and security. We hope those choices will be few and far between, and that our users will agree with them.

Our Add-on Policy is discussed further at https://blog.mozilla.org/addons/2019/05/02/add-on-policy-and...


FWIW, I was genuinely shocked that this could happen, and it has severely damaged my trust in Firefox.

I am extremely wary, having been bitten more than once, about software that automatically updates itself or is otherwise subject to remote interference. I don't run Windows 10. I don't use Chrome for anything important. I avoid subscription-based or activation-required software as much as possible. And in Firefox, I specifically chose to be prompted to install updates (which I usually do immediately, but it's my choice and I can do a quick search first in case there ever is a problem being widely reported).

I find the argument that it's impossible to make this configurable because malware could then circumvent it very weak. If we're talking about that level of interference, anyone with access to the Firefox executable could in theory replace it, and given the open source nature of the Firefox codebase this wouldn't be particularly difficult technically for anyone willing to go to such lengths in the first place.

In any case, even if the argument about hard-coding protection into the executable did stand up to scrutiny, there are alternatives possible instead of retrospectively disabling addons with no possible workaround. Perhaps most obviously, you could show a warning message and require explicit user approval at startup before activating the addon, for example, as is already done with various other useful features that are also potentially open to abuse.

As things stand, far from protecting our privacy and security from malicious addons, the current system in fact deactivated all of our privacy- and security-protecting addons, without warning, right in the middle of browsing sessions. One of these seems to be very much worse than the other in terms of the risks created, and I urge you to consider that when deciding how addons are treated in the future.


> I find the argument that it's impossible to make this configurable because malware could then circumvent it very weak. If we're talking about that level of interference, anyone with access to the Firefox executable could in theory replace it, and given the open source nature of the Firefox codebase this wouldn't be particularly difficult technically for anyone willing to go to such lengths in the first place.

As mentioned in GGP's quote, Firefox was specifically modified by popular software by Fortune 500 companies and laptop companies, which feel safe enough to modify user preferences, but not to replace the Firefox executable. This does specifically fix that real-world attack vector.

> In any case, even if the argument about hard-coding protection into the executable did stand up to scrutiny, there are alternatives possible instead of retrospectively disabling addons with no possible workaround. Perhaps most obviously, you could show a warning message and require explicit user approval at startup before activating the addon, for example, as is already done with various other useful features that are also potentially open to abuse.

I think that would be categorised as an option "that these greyware apps [could] change and plausibly claim they were acting on behalf of the user’s “choice” not to opt-out of the light grey checkbox on page 43 of their EULA".

> As things stand, far from protecting our privacy and security from malicious addons, the current system in fact deactivated all of our privacy- and security-protecting addons, without warning, right in the middle of browsing sessions.

That this happened was a risk, and they're taking active measures in the future. The other was a certainty, and this risk was the active measure they were taking against it.


> I think that would be categorised as an option "that these greyware apps [could] change and plausibly claim they were acting on behalf of the user’s “choice” not to opt-out of the light grey checkbox on page 43 of their EULA".

Sorry, but I don't really see how. We've been using click-to-play safeguards on embedded content for years, and they have proved highly effective at stopping abusive or outright malicious content in Flash, Java applets, etc. Why couldn't a similar safeguard be used to isolate untrusted addons but still give users the option to override and run them in the current session if they really do want to? I don't see why such a mechanism would be more vulnerable than any other hard-coded browser behaviour, including refusing to run those addons at all. If you made the behaviour configurable via a persistent setting then obviously that could be subject to external modification, but you don't have to do that here.

> That this happened was a risk, and they're taking active measures in the future. The other was a certainty, and this risk was the active measure they were taking against it.

I respectfully disagree with this stance. That malicious sites are actively compromising user privacy is also not a risk, it is a certainty. That addons to block unwanted content have stopped malware from exploiting browser vulnerabilities and infecting user systems is also not a risk, it is a certainty. It would take substantial evidence to convince me that the risk from greyware apps was really greater than the risk of privacy and security invasions across the entire Web.


> Sorry, but I don't really see how. We've been using click-to-play safeguards on embedded content for years, and they have proved highly effective at stopping abusive or outright malicious content in Flash, Java applets, etc.

But other software on the user's computer wasn't trying to work around those safeguards. That's the main attack vector, as I understand it.

> That malicious sites are actively compromising user privacy is also not a risk, it is a certainty.

Absolutely, and as far as I'm aware that's also something that Mozilla's actively taking measures against.

> That addons to block unwanted content have stopped malware from exploiting browser vulnerabilities and infecting user systems is also not a risk, it is a certainty.

Sure, in hindsight it is, but I wouldn't have predicted a week ago that it was about to happen, and as far as Mozilla is able to predict future occurrences they are taking measures against it as well.


I still don't see how this is as complicated as these arguments suggest.

There is a known risk of Firefox being compromised by malicious addons, including those preinstalled by certain organisations. This risk is what is moderated by requiring addons to be signed and hard-coding a block. However, moderation is all this gains, because anyone who is preinstalling Firefox on a computer could still install a modified executable instead.

There is also a known risk of the user's security or privacy being compromised by visiting malicious websites that exploit weaknesses or vulnerabilities in Firefox. This risk is what is moderated by addons that block or otherwise interfere with undesirable content. It doesn't take any sort of hindsight to anticipate this; it is one of the major reasons people advocate blocker extensions, and this has been true for many years.

It is understandable that Mozilla would want to disrupt the former threat, but as I and others have explained, there are tried and tested ways they could do so that are no more vulnerable than the current approach yet would not suddenly remove all protection offered by addons against the latter threat without warning in the middle of a browsing session. The current heavy-handed approach is like building a secure home by making a concrete bunker with no doors and windows: the efforts to secure the addon system ultimately rendered the entire system useless.

Worse than that, though, the current strategy violates the basic principles that attract some users to Firefox in the first place, specifically its extensibility through addons and its relative respect for users' privacy and control of their own systems. The fact that Mozilla have so far shown little understanding of why some users would have a problem with this is regrettable, but perhaps they will come around with further thought after the event. However, the fact that there are people here still trying to defend the policy despite the highly visible train wreck that just happened seems very odd to me.


> However, moderation is all this gains, because anyone who is preinstalling Firefox on a computer could still install a modified executable instead.

Well, apparently that is a line that vendors are not prepared to cross.

> There is also a known risk of the user's security or privacy being compromised by visiting malicious websites that exploit weaknesses or vulnerabilities in Firefox.

When it comes to actual weaknesses or vulnerabilities, it seems clear to me that Mozilla should not rely on add-ons for patching those. But yes, blocker extensions still provide value; luckily, they are also still allowed.

> as I and others have explained, there are tried and tested ways they could do so that are no more vulnerable than the current approach yet would not suddenly remove all protection offered by addons against the latter threat without warning in the middle of a browsing session. The current heavy-handed approach is like building a secure home by making a concrete bunker with no doors and windows: the efforts to secure the addon system ultimately rendered the entire system useless.

You've said this before, so to prevent getting into a loop, I won't repeat my response :)

> Worse than that, though, the current strategy violates the basic principles that attract some users to Firefox in the first place, specifically its extensibility through addons and its relative respect for users' privacy and control of their own systems.

This I understand, and I wish it wasn't necessary too. I don't think Mozilla has shown little understanding, though - they've repeatedly explained how they are caught between a rock and a hard place, and reached a different conclusion than you did after weighing the pros and cons. That does not mean a lack of understanding of the cons, merely that, in their view, those cons did not outweigh the cons of the alternatives.

This might simply be the result of different valuations of the pros and cons between you and Mozilla; given the amount of data and insight Mozilla has on the use of Firefox, I would also suggest being open to the idea that there might be a lack of understanding on our side about the scale of the problem of malicious extensions.


In reality, the issue of Fortune 500 companies attacking users and hacking their computers should be addressed legally too.

Are Mozilla currently helping any organisations to sue these companies?

Is there more detailed evidence provided on this somewhere? Like which companies and exactly what they did?


I don't know about that, sorry, but the wording does seem to imply to me that the legality of changing configuration options was covered by a EULA or something, and replacing the executable would not be.


Hear hear. Very well written. I too was shocked that this was possible.


>or take the blatant malware step of replacing or altering Firefox

Firefox is still considered open source, correct? From what I know, open source software is meant to be altered.


They mean altering the compiled executable without the user's knowledge.


It might not have helped here, but updating certs every several years always leads to problems in my experience - people move on, processes get lost or outdated, etc. I'll accept the yearly annoyance to avoid those issues every time.


Agreed.

My initial reaction when Let's Encrypt had you re-issue every 90 days was negative, but I was wrong. Very wrong. A 90 day re-issue forces you to have working re-issue infrastructure and procedures, and therefore you're less likely to get stung by an accidental expiration.

Long expirations are a trap, a very easy trap to fall into.


Why not update long-duration keys every 90 days? That way you're never close to expiration, ever, best of both worlds.


Because if you make it optional then nobody does it (except you) and then you're back to square one. By mandating a 90-day expiry, LetsEncrypt forced people to automate the process -- and everyone is on the same page.

It should be noted that most LetsEncrypt tools will renew a certificate when it is 30 days from expiry, so if you run the renew script every week (or day) you're also never close to expiry.


>By mandating a 90-day expiry, LetsEncrypt forced people to automate the process -- and everyone is on the same page.

Ironically, I had the opposite problem. I used to be on top of things like cert expirations, but now I just let certbot do everything. The problem (at least in my case) was that even though certbot renewed the cert on time, it doesn't restart/reload nginx so that it picks up the new cert. My site kept serving the old cert for the full ~30 days between the renewal and the old cert's expiration. So my site went down because of letsencrypt's cert renewal policy.

(I now have a script set up that reloads nginx's configuration whenever the certificate is updated.)
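Something along these lines, in case it helps anyone else. This is just a rough sketch; the commands are whatever your setup needs, and certbot can run a script like this via its --deploy-hook option:

    #!/usr/bin/env python3
    """Reload nginx after a cert renewal; intended to run from certbot's --deploy-hook (or cron)."""
    import subprocess
    import sys

    try:
        # Validate the config first so a broken config doesn't take the site
        # down in the middle of an unattended renewal.
        subprocess.run(["nginx", "-t"], check=True)
        subprocess.run(["systemctl", "reload", "nginx"], check=True)
    except subprocess.CalledProcessError as exc:
        print(f"nginx reload failed: {exc}", file=sys.stderr)
        sys.exit(1)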


In addition to what the other user said: if your certificate expires after a few years, it may take you a few years to notice it. And by then you probably won't remember how the (now broken) process was set up, or you may not even work at the company anymore.


Usually organisations of Mozilla's size have metrics and alarms for various things; cert expiry should have been a top priority.
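Even a trivial check wired into existing alerting would do. A rough sketch using Python's cryptography package (the path and threshold here are just examples):

    import datetime
    import sys

    from cryptography import x509

    THRESHOLD_DAYS = 30  # alert well before expiry
    CERT_PATH = "intermediate-signing-cert.pem"  # example path

    with open(CERT_PATH, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    remaining = cert.not_valid_after - datetime.datetime.utcnow()
    print(f"{cert.subject.rfc4514_string()} expires in {remaining.days} days")
    if remaining.days < THRESHOLD_DAYS:
        sys.exit(1)  # non-zero exit -> fire the alarm / page someone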


I’ve never worked in a company so large that certificate expiration was a completely solved problem. I’ve seen it bite every company I’ve worked for, and I’ve been part of ones much larger than Mozilla.


Even with annual certificate renewal, I'd find I'd make mistakes and/or that things had changed in the intervening time.

It's also infrequent enough to often remain a manual process when it really should be automated.

That said, I still haven't automated renewal of my personal ones using LetsEncrypt, but it's also so simple I don't really need to.

At any of my previous jobs where I've had to deal with certificates, a 90-day cycle would have guaranteed I'd have it down to at least just "run this script".


If anyone's wondering if this answers the Actual Question of why the cert was allowed to expire, don't waste your time; it doesn't. I guess implicitly that's a "social detail"?


What are you after?

This clearly says they screwed up and that internal procedures need to be changed so it cannot recur. If this was in fact an oversight/accident, what other kind of explanation can they give?

> This was due to an error on our end: we let one of the certificates used to sign add-ons expire

And:

> We clearly need to adjust our processes both to make this and similar incidents it less likely to happen and to make them easier to fix.


Well, they could tell us what the previous process was. Was the cert renewer on vacation, or was the wrong date entered on the calendar, or? How was it supposed to work? I think a lot of people are curious about that.


They said they will make a formal post-mortem investigation as well as make a post about what they will change. I think the information will be contained in one of the two posts?


Sure? But someone basically said there's nothing more to be learned or discussed.


It's understandable to be curious about that, I am myself. But I don't think they owe us that. It might be sensitive information, anyway.


The post-mortem hasn't happened yet. ekr's post is a preliminary description of what happened, everything he felt confident saying in advance of the post-mortem and so necessarily heavy on technical details but light on process details.


I mean, it's fair to say it was an oversight? I wonder if the post-mortem will reveal whether it was on someone's radar and slipped through the cracks, or whether it was never put into the scheduling system they use for certificate expiration to begin with.


Yes I saw that, I just wish he had mentioned this at the top of the post (which currently reads " I wanted to walk through the details of what happened, why, and how we repaired it.") instead of near the bottom.


I really don't get why everyone thought putting dead-man-switches ON EVERYTHING was a good idea. Boils my blood thinking about it. So wasteful. Such disrespect for the future. But hey, security. Can't argue with that!


Subscription culture and endless updates/rewrites. For Sisyphus, nothing is ever complete.


> For the other groups we are developing a patch to Firefox that will install the new certificate once people update. This was released as a “dot release” so people will get it — and probably have already — through the ordinary update channel. If you have a downstream build, you’ll need to wait for your build maintainer to update.

Why not link to the xpi that can be installed now?


> Why not link to the xpi that can be installed now?

This is the crux of my remaining frustration with how Mozilla handled this issue. That XPI should've been front-and-center in all the articles that detailed the fix. And yet, instead of something like...

"If you have Studies enabled, a fix should apply automatically. If it hasn't yet, or if you have Studies turned off (or are using a version which does not support Studies), you can install the hotfix add-on [here](URL to XPI)."

...pretty much all the official messaging ended up like so:

"If you have Studies enabled, a fix should apply automatically. It may take up to 6 hours; please be patient and wait for it. If you don't want to (or can't) turn on Studies, you're SOL until we push out a point release (and further SOL if you're at the mercy of a Linux package maintainer or you want to use a version of Firefox that still supports XUL-based addons)."

The notion that this was a deliberate ploy to get more people to turn on Studies is surely conspiracy-theorist mumbo-jumbo, but nonsense like this makes me wonder.


> The notion that this was a deliberate ploy to get more people to turn on Studies is surely conspiracy-theorist mumbo-jumbo, but nonsense like this makes me wonder.

This is not the case. Please see my response downthread: https://news.ycombinator.com/item?id=19872490


I know full well it's not actually the case, but that doesn't make it not feel like it could be the case. It feels scummy, and I'd expect Mozilla to be above that scumminess.

Like, just link to the XPI. Not that hard. The unexplained reluctance to do so is suspicious.


Gotcha. Manually installing the hotfix XPI makes cleanup a bit harder now that we have a proper fix. E.g., without coming from Studies, there's no study to ever end. Direct installation also makes it harder to quickly respond to any bugs we might discover in the initial revision of the hotfix.

Now that we have a stable fix, we will publish an XPI with the option of direct installation for users of older, unsupported versions of Firefox (all the way back to 52) who have opted out of automatic updates.


I see. Some follow up questions:

> Manually installing the hotfix XPI makes cleanup a bit harder now that we have a proper fix. E.g., without coming from Studies, there's no study to ever end.

The language around enabling Studies for the hotfix also claimed that once the hotfix was installed, one could feel free to turn off Studies. Could similar language not have been included for the XPI approach (e.g. "once the fix is applied, you can uninstall this add-on")? Or is this a case where the extension does have to be installed (at least until the user upgrades to a point release with a fixed certificate)?

Alternately, do extensions have the ability to uninstall themselves? If so, then perhaps the extension could install the new certificate and immediately uninstall itself (or, in the "extension has to be installed for the fix to exist" scenario above, uninstall itself if it detects itself running on an updated Firefox and/or flag itself as incompatible with Firefoxen newer than the latest affected version)?

Alternately, is there no way for Firefox itself (e.g. in a point release) to explicitly blacklist an extension?

Alternately, is it possible to revoke the certificate/signature for that extension such that Firefox deems it invalid and disables it (using, presumably, the same mechanism and rationale as what caused this particular bug)?

Seems like this is a problem with multiple potential solutions besides "just do it as a Study". Even if it really is/was unsolvable, I feel like power users would be perfectly happy with getting the quick fix in exchange for subsequent cleanup being on them; ain't ideal, but it's better than waiting for multiple hours for Studies to work its magic.

> Direct installation also makes it harder to quickly respond to any bugs we might discover in the initial revision of the hotfix.

I'm sure there are some people out there who would be happy to test the XPI while having Telemetry enabled so y'all can get all that juicy fresh debugging data :)


Complexity of test scenarios. There were already ~6 user states on ~20 supported operating systems with ~10 add-on types. There has already been a public announcement of a solution for older versions of Firefox without Studies, including a manual install option.

> For users who cannot update to the latest version of Firefox or Firefox ESR, we plan to distribute an update that automatically applies the fix to versions 52 through 60. This fix will also be available as a user-installable extension. - https://blog.mozilla.org/addons/2019/05/02/add-on-policy-and...


Your intentions are fine, but your methods scare the hell out of us. Thanks for deleting the data, but I'm mad you were collecting it at all. I've clicked "opt out" hundreds of times now. It's not fun anymore.


We used to call this "botnetting".


FYI that should read "we have developed" instead of "we are developing". Firefox 66.0.4 and later have the patch.

(Disclaimer: I work for Mozilla)


> users should be able to opt-in to updates (including hot-fixes) but opt out of everything else

Finally some good news. This is what I suggested in one of the previous threads: there should be a delivery channel for important updates, and a channel for experiments/telemetry/whatnot. Some other HNer said it was an unrealistic expectation "because manpower". Guess what, it isn't. This is how things should always be.


They did have that capability (see https://wiki.mozilla.org/Firefox/Go_Faster/System_Add-ons/Pr... ), at least the linked repository had commits in 2016. I can't tell from a 30 second search why that no longer works, just that it is a replacement of a similar previous capability.


This started before my time at Mozilla, but I think several different systems grew out of the "go faster" initiative. Of them I believe the fastest and most flexible means of shipping changes is Normandy, which is primarily used for our shield studies. Like the blog post says, we need to revisit this.

(Disclosure: I work for Mozilla)


The Firefox update required an administrative login on my Windows system at work, which I don't have. Normal updates haven't required that. So far I've just left it broken, and it keeps prompting me for an administrative login on launch.

The article doesn't explain why elevated privileges are required to apply the update.


Does the current user have rights to write to the Firefox install location? If not then elevation is required to overwrite the files.


The current user may not need to have those rights if the maintenance service is installed, which allows the updater to run with higher privileges. I don't know of any reason why the recent updates would be any different, though. There may be an unrelated issue that broke "silent" updates for drtillberg.

(I work on the Firefox updater and am a Mozilla employee)


If normal updates do not require admin perms then it must not have been installed in a system location, and they must be doing something system-level with this particular update.


My heartfelt condolences to everyone who has had to browse the internet without an ad blocker.

Nobody deserves that.


If nothing else, this outage reminded me how valuable my ad blocker really is. I got interrupted three times while watching a podcast; incredibly obnoxious.


This is why you should have multi-tier defenses: an ad blocker in the browser, blackholed domains in your hosts file, and DNS servers that also blackhole ad domains.


I used to do all 3 in the past, 4 with domain blocking done in router configuration, and am no longer a fan. It made troubleshooting any issue that may pop up too much of a hassle. Now I just use single-source blocking at the application layer. When Firefox disabled extensions, I simply used Edgium until they had it fixed.


I forgot I had the hosts file, and was surprised by how few ads I was seeing when the add-on was disabled, until I realised why.


> In theory, fixing a problem like this looks simple: make a new, valid certificate and republish every add-on with that certificate. Unfortunately, [..]

I'd expect add-on usage to follow a Pareto distribution, so re-signing the most important ones would have helped a lot of users. Why didn't they start down this route anyway? Not enough manpower to do it without diverting resources from the other, more important fixes?


I think this is covered in the blog post, but I'll take another stab at explaining it from my perspective:

Republishing was one of the options we were investigating early on. However, the problem is that it only fixes things once you check for and install the updated version signed with the new certificate. Firefox would still have disabled your installed version that had an expired certificate.

Firefox checks for addon updates every 24 hours, but it checks in with Normandy every 6 hours. Thus once we had stopgap fixes shipping to users via Normandy, and a proper fix of a new Firefox release in progress, republishing addons wasn't necessary.

(Disclaimer: I work for Mozilla)


Thanks for your response.

An add-on being deactivated doesn't trigger an update check for it? Well, the complexity of a graceful update attempt for an add-on that can't be loaded probably outweighs the benefit in such rare cases.

But the main reason I had a hard time with you guys discarding this path is probably addon stats like [0]. The spike in downloads made me think it could have helped at least some users. But on second thought, it might also have been caused by extensions getting re-enabled post fix. 2m downloads at 4m DAU makes me guess this was caused by something automatic.

[0] https://addons.mozilla.org/en-US/firefox/addon/ublock-origin...


This is a good point; I'll try to make sure it's raised when we conduct our postmortem.

(Disclosure: I work for Mozilla)


I expect the full postmortem will dive into this in more detail, but one reason mentioned in ekr's post is that many add-ons are distributed independently, outside of AMO. So while we could re-sign those, we're not in a place to re-distribute them. Instead, "users would have to manually update any add-ons that they had installed from other sources, which would be very inconvenient."


>An important feature here is that the new certificate has the same subject name and public key as the old certificate, so that its signature on the End-Entity certificate is valid.

Shouldn't it be impossible to generate a new cert (with a different expiry date) that ends up having the same public key as an existing cert?


> Shouldn't it be impossible to generate a new cert (with a different expiry date) that ends up having the same public key as an existing cert?

If you have the secret key of the original certificate, you can use the same key material and just use different metadata (like the expiry date).
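A rough sketch of what that could look like with Python's cryptography package (the file names are hypothetical, and a real re-issuance would also copy the extensions from the old certificate):

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization

    # Hypothetical inputs: the issuing (root) key and the old intermediate cert.
    with open("root-key.pem", "rb") as f:
        root_key = serialization.load_pem_private_key(f.read(), password=None)
    with open("old-intermediate.pem", "rb") as f:
        old = x509.load_pem_x509_certificate(f.read())

    now = datetime.datetime.utcnow()
    new_cert = (
        x509.CertificateBuilder()
        .subject_name(old.subject)        # same subject name
        .issuer_name(old.issuer)          # same issuer
        .public_key(old.public_key())     # same public key / key material
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365 * 2))  # new expiry
        # NB: a real intermediate would also need the old cert's extensions
        # (basicConstraints CA=true, keyUsage, etc.) copied across.
        .sign(root_key, hashes.SHA256())  # signed by the same root key
    )
    # Anything the old intermediate key signed still validates under new_cert,
    # because the key inside the certificate hasn't changed.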


No, one public/private key pair can be used to generate or sign as many certificates as you like, as often as you like.


No I'm not talking about the root, I mean they generated a new certificate (the intermediate) (with a new private key) that had a public key identical to an existing certificate -- you shouldn't be able to do this, public keys can't be "specified" afaik, they're derived from your private key and the signer's public key.


I think you're perhaps mixing up the public key on the certificate and the signature on the certificate.

The signature is over the contents of the certificate, so the certificate cannot change without the signature changing.

The public key, though, is just part of the arbitrary information that the certificate is intended to secure. Much like multiple certificates can be issued with the same subject name (but varying other details), multiple certificates can be issued with the same public key.

For example, in the TLS context, you can produce an unlimited number of CSRs (certificate signing requests) off of the same private/public key pair used for TLS. It's a common practice to generate a new private/public key pair every time you generate a new CSR in order to mitigate potential compromise of the private key, but even that practice is becoming less common because changing the public key prevents pinning it using e.g. HPKP - today, some clients establish trust by verifying the public key against both the certificate and some other separate method (often TOFU). This is a separate practice used alongside certificate verification, intended to mitigate some of the security concerns around the certificate infrastructure.

In this case, as in Mozilla's case here, it is a practical requirement to issue a new certificate with the same public key, because clients expect the public key to remain constant for various reasons.
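To make the CSR point above concrete, here's a minimal sketch with Python's cryptography package (the key path and subject are made up): you can keep generating fresh requests from one existing key, and the public key embedded in each of them never changes.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509.oid import NameOID

    # Load an existing private key (hypothetical path); the matching public key
    # appears unchanged in every CSR, and in every certificate issued from one.
    with open("tls-key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.org")]))
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())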


AFAIK the public and private keys of any keypair, like the intermediary, are always linked. You can not have a new public key with the same private key.

But you can always re-sign the same keypair from the root with a new, non-expired certificate. And since this keypair (the intermediary) signed all the individual add-ons to begin with, it will just magically work.

Remember that a certificate is nothing but a message signed by some "higher" keypair, saying that some "lower" keypair is trusted.


The literal ASCII blob of the signed certificate would be different, because (for example) the "Valid Until" date has changed, but they control the private keys for the signer and the intermediate, which means they can issue a certificate that is logically identical from an X.509 perspective apart from fields that either time or randomness contribute to. (The serial number would not necessarily be randomized in this scenario.)


Isn't that the point of having the root cert and the intermediate? Otherwise, why not just have everything keyed to the root?


Because a root typically has a longer expiry date and thus has to be stored extra securely and handled extra safely. You'd usually store it in some hardware security module that is operated from an airgapped system, or something like that which requires physical access at the very least. Replacing the root often requires shipping new versions of your software that disable the old root and bake in a new one.

If an intermediate certificate becomes compromised, you can revoke it and issue a new intermediate certificate with your still secure root without the need to push out new binaries.


For anyone who misses the old days when a browser only did what you told it to, here is a solution: GNU IceCat is a nice Firefox ESR fork with the Mozilla call-home bits all turned off by default. It's very pleasant to use.


There's also GNU "abrowser", which (though terribly named) has the advantage of being based on the latest Firefox release rather than esr. It has the same privacy defaults as Icecat (though I believe it doesn't ship with the addons that Icecat does).


It doesn't have the new Quantum rendering engine though, right?


I'm pretty sure it does. https://en.wikipedia.org/wiki/Quantum_(Mozilla) says quantum shipped in 57.


There is a related blog post about what Mozilla is doing with the data collected from users who enabled Studies in order to get the hot fix.

https://blog.mozilla.org/blog/2019/05/09/what-we-do-when-thi...

TL;DR is "In order to respect our users’ potential intentions as much as possible, based on our current set up, we will be deleting all of our source Telemetry and Studies data for our entire user population collected between 2019-05-04T11:00:00Z and 2019-05-11T11:00:00Z."

(Disclaimer: I work for Mozilla)


As someone who reluctantly re-enabled Studies to get the fix, I appreciate this.


> Second, we immediately pushed a hotfix which suppressed re-validating the signatures on add-ons.

What's the point of re-validating installed add-ons in the first place, then?


> Note that this is the same logic we use for validating TLS certificates, so it’s relatively well understood code that we were able to leverage.

Don't know whether to be happy or scared that the TLS validation code is "relatively well understood" :D ! I assume it's just a sub-optimal choice of phrasing.


I’m scared that they (still?) confuse regular HTTPS TLS-validation with code-signing.

These are two entirely different domains which follow entirely different rules.

Current Firefox behaviour is still broken, even after the “fix”.


Certificate expiration: if only we could figure out a way to see it coming. Oh wait, metrics!


This is basic code-signing, and "time-stamping" is not mentioned once in the post-mortem, despite the lack of it being exactly what caused this issue in the first place.

That’s not inspiring a lot of confidence, to be honest.


I don't understand one thing: how were they able to generate a new certificate with the same subject, a different expiration date, and the same signature?

I assume I missed something in reading this.


It's a new date but still the same key. So the root's signature on the intermediate certificate is different, but the signatures the intermediate key generates are the same.



> Even on Monday, we still had some users who hadn’t picked up either the hotfix or the dot release, which clearly isn’t ideal.

My Kubuntu boxes had the patch on Wednesday, not before.


THIS IS WRONG: Preserved to make below reply not seem out of context.

> Second, we immediately pushed a hotfix which suppressed re-validating the signatures on add-ons.

Wait, that's not right is it!?

The only hotfix I'm aware of (and I was following this pretty closely, as you might guess by my dozens of comments on the topic) was the addon installed via the studies system.

That addon didn't suppress the re-validation of signatures, it installed a new certificate and then triggered the re-validation of signatures immediately. It left the validation that happens on a 24 hour cycle alone.


There was a hotfix before the one that installed a new cert:

> hotfix-reset-xpi-verification-timestamp-1548973•Complete

> This study sets app.update.lastUpdateTime.xpi-signature-verification to 1556945257.


Oh, you're completely right. Oops.


What I don’t see mentioned anywhere is why the heck this was a last-minute scramble. Don’t they know when their certificates are going to expire!?


Probably not. They probably know the schedule for most of their certs but likely weren't monitoring this intermediate code-signing cert (which is different from a TLS/webserver leaf/intermediate cert).

It seems super simple to do, but in practice IMO is harder than it seems. Most major cloud providers have been hit by at least one cert expiry causing an outage in the past year... Hell, likely in the past month.

This doesn't surprise me at all, certs are hard.


I lost all my custom multi-account containers after getting a version that has the cert fixes (Firefox is still broken on Fedora; I had to download the testing version today to get it working again).

Either today's engineers are sub-standard or the foxes rule the henhouse.


If you disable the addon, restart Firefox, and enable the addon, do your custom containers get erased again?


It's my understanding that they'd be deleted.

Current bug for issue: https://bugzilla.mozilla.org/show_bug.cgi?id=1549204

This was discovered after the expiry caused a similar process to kill data.


I hope this was not a government attempt to snoop and that Mozilla did not miss any detail...


So my first thought was that a lot of ads must have been displayed during those 24 hours? Anyone notice a blip in revenues of sorts?

I mean, that's the most crucial kind of add-on to keep working: ad blockers.

When I visit a site with ad-blocker off I feel compelled to clear all caches and cookies and go have a shower.


None of the ad companies are going to publicly talk about that data. They might have seen a blip, but I doubt it was anything major. Firefox doesn't have the market share it once did either.


Catalog of classic Firefox add-ons created before the WebExtensions apocalypse: https://github.com/JustOff/ca-archive


No problem for me. The only add-on I use and recommend is Pocket. Pocket helps me keep my stuff organized when I'm on the go.


I switched to Falkon and hope to never go back to Firefox.


I love how they paint the picture that certificates "unfortunately expired" as if it were an act of god or something. Surely one could not have seen it coming! No mention at all of why nobody was checking certificate expiry.


The post seems to imply that it was a simple oversight (which is frequent when such things are not formally tracked). I agree that it is hard.

> We’ll be running a formal post-mortem next week and will publish the list of changes we intend to make, but in the meantime here are my initial thoughts about what we need to do. First, we should have a much better way of tracking the status of everything in Firefox that is a potential time bomb and making sure that we don’t find ourselves in a situation where one goes off unexpectedly. We’re still working out the details here, but at minimum we need to inventory everything of this nature.


> a simple oversight

I'm sorry, but when your business is to enable 30,000 extensions to work, your business is also to make sure the certificates that enable those extensions don't expire tomorrow. That's the core of the job, not even a side project or something.


I don't claim to know what actually happened, but one possible cause is that the HSM (required for issuing and renewing intermediate certificates, mentioned in the post) might require the right person and the right schedule to operate. The use of an HSM means that you don't normally use it to issue certificates and you should only touch it a few times a year. As a result there may have been only a few people with effective knowledge of the HSM; when they can't be scheduled for various reasons, well, it may be forgotten. While it of course sucks, something like this can happen anywhere, anytime.


> Second, we need a mechanism to be able to quickly push updates to our users even when — especially when — everything else is down.

They should simply streamline the "normal" updates for that purpose, not invent new "channels."

Specifically, not all binaries in the directory should be changed just to push an update where only a few lines of code are different.


> We clearly need to adjust our processes both to make this and similar incidents it less likely to happen and to make them easier to fix.

I knew it. They are going to use THEIR fuck-up to justify why they need even more remote control and access to Firefox installed on users' machines.


> because they want to run old-style add-ons, but many of these now work with newer versions of Firefox.

Except it wasn’t working for those few days. Anyway, older versions of Firefox didn't enforce add-on signing, so they are not affected by this fix.


Wrong. I lost my add-ons in Firefox 60 ESR, which I got from Debian. I'm done with Mozilla disrespecting my preferences, and have switched to Waterfox and IceCat. Remote control of the browser is simply the last straw.


Do they follow basic PKI best practices? Do they actually know (and not just after the fact) the certification path validation algorithm? That should be automatic.

Does Firefox use the normal PKI authentication mechanisms? Their reaction reads as though this was a surprise, with signing a new intermediate cert only as a first step, and with talk of bypassing or hacking around the whole PKI trust chain.

Based on some of the comments here, I think one has to understand that it is not just about timestamps and validity. PKI checking is per transaction and on a continuous basis. It is NOT just based on signing but also on the CRL (certificate revocation list), which is also key.

I read the blog post a few times. I feel frightened, not enlightened. It seems they are not on the ball. A minor mistake (forgetting to renew a cert, as reportedly happened to O2, though I'm not sure it was the same issue) shed a lot of light on deeper issues.

Do they even have a CPS ... :-) or :-(((


After the last PR disaster - the Mr Robot tie-in - one of the ways Mozilla tried to make it right was that they promised that the survey system would never again be used for something that was not an experiment.

https://blog.mozilla.org/firefox/retrospective-looking-glass...

> A SHIELD study must be designed to answer a specific question.

Why have they abused it again here to deploy a hot fix, breaking their promise and policy that they put in place last time they messed up?

Or am I ignorant of some part of the story or technical details?


I think using the system they used to provide a hotfix for a browser-breaking issue does not clash with the spirit of their prior pledge.

I feel a complaint like this verges unhelpfully into the pedantic.


I agree. And Mozilla somewhat addresses this in their second lesson learned:

> [...] we need a mechanism to be able to quickly push updates to our users even when — especially when — everything else is down. It was great that we are able to use the Studies system, but it was also an imperfect tool that we pressed into service, and that had some undesirable side effects. In particular, we know that many users have auto-updates enabled but would prefer not to participate in Studies and that’s a reasonable preference [...] but at the same time we need to be able to push updates to our users

It's a difficult situation to be in: some users do not want any changes applied automatically, but when something breaks, changes need to be applied. It sounds like the Firefox team is doing everything they can with respect to both ends of the spectrum.


Yeah I agree with you on this one. Especially because as another user posted earlier, they are going through and deleting all telemetry and usage statistics for their entire user base during the time period that this was needed to be enabled.

This seems like a very good compromise and is honestly more than they needed to do imo

https://blog.mozilla.org/blog/2019/05/09/what-we-do-when-thi...


As someone who opted out of Studies after the last abuse, I felt betrayed that my addons were, effectively, held hostage behind enabling both telemetry and Studies. I decided to wait.

It’s bad optics at the very least. Users who opted in for the update were in fact entered into studies they explicitly wouldn’t have wanted to be in without the lure of an earlier update.


I'm sorry that you felt like you were held hostage by Telemetry/Studies. With the exception of the hotfix, we disabled rolling out new Studies during the incident, and will not be re-enabling them until some time after Monday next week.

We are also completely deleting all Telemetry and Studies data received in the week following the incident to ensure we respect people who had concerns like yours, but enabled Studies in order to receive the hotfix.

Specific details and timestamps are in the post at https://blog.mozilla.org/blog/2019/05/09/what-we-do-when-thi...


That is excellent news!


"I'm sorry that you felt like you were..." is the worst form of apology, because it admits no guilt or responsibility. "I'm sorry that you were..." or "I'm sorry that we..." would be a legitimate apology.

That said, nuking this data is the first good thing Mozilla has done in this whole fiasco. It's a small but real act of contrition, so kudos for that.


> "I'm sorry that you felt like you were..." is the worst form of apology, because it admits no guilt or responsibility. "I'm sorry that you were..." or "I'm sorry that we..." would be a legitimate apology.

GP used the word "felt" and was expressing that he felt a certain way about enabling Studies. You're nit-picking a conversation and it has gone like this:

A: I felt that $x.

B: I'm sorry that you felt that $x.

C: "I'm sorry that you felt ..." is an insincere apology.

Yes, some people use this trick to get out of admitting guilt or responsibility but this is not an example of that.


It is an insincere apology though, because it apologizes for something "you" are doing and not something "I" am doing. An acceptable way to apologize to "I felt that..." is "I'm sorry I made you feel like...".

And that's the minimum. Anything less than that is shifting the blame.


From another Mozilla blog post about the incident response:

> "In order to respect our users’ potential intentions as much as possible, based on our current set up, we will be deleting all of our source Telemetry and Studies data for our entire user population collected between 2019-05-04T11:00:00Z and 2019-05-11T11:00:00Z."

https://blog.mozilla.org/blog/2019/05/09/what-we-do-when-thi...


Before reaching for words like "abuse" for what is ultimately a hotfix, perhaps you should read the article. In particular:

> Second, we need a mechanism to be able to quickly push updates to our users even when — especially when — everything else is down. It was great that we are able to use the Studies system, but it was also an imperfect tool that we pressed into service, and that had some undesirable side effects. In particular, we know that many users have auto-updates enabled but would prefer not to participate in Studies and that’s a reasonable preference (true story: I had it off as well!) but at the same time we need to be able to push updates to our users; whatever the internal technical mechanisms, users should be able to opt-in to updates (including hot-fixes) but opt out of everything else.


I have read the article - and accusing users of not doing so is against the site guidelines here by the way.

Mozilla previously told us

> we have created a set of principles that we will always follow when shipping a SHIELD study to our users, and two principles are most relevant to this situation.

> A SHIELD study must be designed to answer a specific question.

What question did this hot fix answer? None. So what’s the point in the policy and promise if they’re going to disregard it? It was supposed to be there to stop what went wrong last time. It’s like they disabled a safety measure put in place after the last bad accident.


I think the reason was probably that the vast majority of Firefox users, if asked, would prefer that they did this. Not too many would say, "I am willing to participate in your study, but please don't fix my browser's add-ons if they all break". Sure, approval might not be 100%, but it would be something well north of 90%, and they probably took the right action.

Given, of course, that the truly right action of not letting the certificate expire in the first place, was no longer possible.


> the reason was probably that the vast majority of Firefox users, if asked, would prefer that they did this

I’m sure they thought the same about the Mr Robot addon, but their judgement was wrong so this policy was put in place.


I'm not sure that, if you want a browser that emphasises privacy and security, you also want remote code injection as a way to ship an update that affects the way add-on security works.


The SHIELD policies they promulgated after that incident have to do with studies. This was not a study. This was an emergency band-aid to help as many users as possible get their browsers back to the way they (the users) chose to make them work by re-enabling their add-ons, using the fastest avenue available.

This was a user-friendly move, and while it's unfortunate that it was necessary in the first place, your criticism reads like a "gotcha."


> This was not a study.

I don’t get it. Distributing something that was not a study as a study was what upset people last time. So they promised not to do it again. And they have just done it again.


> Distributing something that was not a study as a study was what upset people last time.

People got upset because an add-on for a commercial TV series auto-magically installed itself in their browser without their manual intervention.

You can't just abstract that out into an ethics framework of, "If Firefox sends non-X over a channel reserved for X, then users get upset." I mean, you can, but you're going to be rightly confused and misunderstood.


The outrage over Looking Glass was because it was an unwanted advertisement/promotion being pushed as a 'study'.

In this case, they were pushing a hotfix for a critical issue affecting all Firefox users. It was only pushed through the Studies system because it was the best option to get the fix to as many users as possible while they worked on a Firefox update.

Implying that this is in any way equivalent to looking glass is completely disingenuous.


Perhaps people were upset last time because it was for commercial purposes, rather than as an emergency fix to help people to get their browser working.


Do they have a better channel for deploying hot-fixes? Maybe I'm a bit of a squish but I'm pretty agnostic of how they deploy fixes to me so long as my addons start working again.


They don't currently, but adding a better channel to deploy hotfixes is something that's specifically mentioned in this article...


It is indeed a huge violation of trust.

I’ve seen some folks trying to explain it away and compare the Normandy preference system to standard auto-updates, acting like this is no problem if you already trust them for auto-updates.

Flat wrong. This is a dark pattern by Mozilla pure & simple. It’s confusing, hard to disable fully, and clearly can be abused for non-experiment modifications to the user’s settings.


Mozilla has gone full "helicopter parent" mode, and will do whatever they can to keep you safe from yourself despite your wishes. Use GNU icecat instead. It has all that junk turned off.

