I don't agree with Cory Doctorow on everything, but he was absolutely right back in 2012 that platform owners would converge on the idea that the general public is not to have access to general purpose computers. These pushes for locked-down phones, key escrow, neutered browser extensions, mandatory code signing, remote attestation, and so on are all steps toward this convergence.
Yes, yes, there's an argument to be made that protecting devices against owner tampering keeps users safe. But throughout history, haven't most measures against the public been justified by safety concerns? And haven't we been better off, overall, discarding these measures in favor of personal freedom and responsibility?
The most surprising development on this front has been Firefox limiting users' ability to install add-ons that have not been blessed by Mozilla. I understand that if you run an app store, you get to review code and refuse to distribute apps/add-ons that don't meet your standards.
The surprising thing is that Firefox does not let you install a non-blessed addon, even if you are downloading it from the developer's website (i.e., not from Mozilla). They don't give you a warning about installing software from un-vetted sources — they straight up don't let you install. I understand this is the case even if you're running the developer version of Firefox, which is obviously not meant for mainstream users.
AFAIK, the only way to install a Firefox addon that has not been blessed is to get your hands on the XPI and install it in debug mode, which means that it automatically uninstalls itself every time you restart your browser.
I would have expected MS Edge, Google Chrome, and Apple Safari to have done something like this before Mozilla Firefox. What happened to the open web?
It is possible to load unvetted extensions, but it's inconvenient. If you go to this URL in Firefox:
about:debugging#/runtime/this-firefox
You can install a "Temporary Add-On", which is any add-on/extension you have stored on your local disk[1]. So you can download an extension from a third party, rename the .xpi to a .zip (Firefox extensions are just zip files with a different extension), unzip it, and use the "Load Temporary Add-on" button to point to the manifest.json file in the unzipped extension.
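For reference, the manifest.json you select is just the extension's standard WebExtension manifest. A minimal one looks roughly like this (the names and URL here are illustrative, not from any real extension):

    {
      "manifest_version": 2,
      "name": "example-extension",
      "version": "1.0",
      "content_scripts": [
        {
          "matches": ["https://example.com/*"],
          "js": ["content.js"]
        }
      ]
    }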
Add-ons installed this way will work as though you installed them via normal means, with the caveat that they will not automatically reload when you restart Firefox; they must be added manually again.
I have developed a few extensions, and this is the standard process for testing extensions during development. There is a similar process in Chrome, etc.
Is there a way to permanently side-load? In my experience you can only do it via the debug functionality, which uninstalls with every restart. I believe you can side-load approved addons permanently, but I've not seen this for non-approved addons.
When people dislike a source, they will often dismiss the entirety of the content. If you qualify it like this, you can get those people to be more open to the information, because they identify with you as a member of the tribe that dislikes the source, while the people who are already partial to the source don't care.
Imagine, for example, that Breitbart gets exclusive information regarding something unrelated to their politics. It's very easy for a large number of people to reflexively dismiss the article because of the publisher, but if you qualify it by saying that you're left-leaning, then people take a second look.
Learning how to communicate effectively is not anti-intellectual.
I could write a genius paper with bad grammar and poor spelling, and it wouldn't detract at all from my intellectual arguments. However, very few people look at basic editing, spellchecking, and word choices and say that they're anti-intellectual -- nobody claims that by caring about spelling I'm training my audience to be dismissive or shallow when evaluating papers.
Back when I was writing college essays, I would go even farther; I would tailor my grammar to specific people. I knew that certain forums disliked second-person voice or sentence fragments. Other forums leaned in the opposite direction, and I would take a more casual approach.
If you know that your audience has a certain expectation, bias, or ingrained belief, it is good to address that from the start. It's not encouraging laziness or reinforcing a bad habit, it's just making sure that the people you communicate with are on the same page as you. Worry less about what your responsibilities are as a communicator, and worry more about just getting your ideas across -- regardless of whether that entails couching your language or adding disclaimers.
There are very few disclaimers, language choices, or style alterations that I won't use if I'm trying to communicate an intellectual point with someone I know, because I care about them understanding me. Literally everything else is secondary.
Is it? My (perhaps incorrect) understanding of your argument is that your audience should already understand that agreeing in part is not agreeing in full, and that more to the point, if they don't already know that they might not be worth communicating with in the first place.
My argument is that it's a good idea to worry less about what people should know, and more about working around what they do know.
Sometimes good communication means accommodating people's biases and misconceptions about how the world works, and addressing those misconceptions up-front.
> What you're describing is basically code switching.
Sure. And what quotemstr is doing isn't code switching? What is the difference between dismissing an argument because of the source and dismissing it because it was spelled wrong? Why should I accommodate one behavior and not the other?
They are, but it's a habit gained from forums with a broader audience, like Reddit, where a single mistake or unacceptable association can have your voice removed from the discussion.
People think it boosts the signal of their post to make their agreement seem exceptional. Simple as. It's such an internet comment meme that it probably even works. Just tack it on to any rhetoric for free points.
Google recently removed several hundred extensions which were found to contain malware. Tons of extensions have been caught saving and selling users' browsing history, notably the popular and ironically named "Web of Trust".
It's sadly not uncommon for malware hosts to acquire extensions and then use them to inject ads to spread their malware.
The Epic Privacy Browser presciently blocked almost all extensions citing those vulnerabilities and its desire to provide a reliably high level of privacy. It's kept its users safe while Chrome, Brave and other browsers' users were vulnerable.
I'm continually surprised that we don't have a clear delineation between extensions that "do" something—maybe relying on well-known third-party SaaS APIs in the process—and extensions that exfiltrate your browsing data.
99% of extensions don't need to send the URL anywhere without the user clicking to activate them; nor do they need to send the whole content of the page to a server, ever.
You know how macOS now has a capability-model for apps, where it sometimes says "X wants Y, and it's been denied it; you can go into System Preferences to manually give it Y"? That tends to neatly fix the "99% of apps don't need Y" problem, by increasing the friction to getting Y to the point that users won't generally bother unless the whole point of the app is Y.
I'd love to see that implemented for browser extensions—or, in fact, for individual tuples of {extension, trigger, data, third-party origin}.
In the sane case, that'd look something like: {Instapaper, on click of the extension's button, current tab's URL, instapaper.com}
It'd be pretty clear, just from looking at one of those entries, whether it's something that contributes to the functionality of the extension or not. Extension authors would undoubtedly write "installation instructions" urging users to enable these; but if the instructions say to enable a stanza like {MalwareExt, on page load, sends everything it can, to badserver.info}... well, even the most guileless user would think twice, no?
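To make that concrete, here is a sketch of what one of those grant entries might look like if it were serialized. The format is entirely hypothetical; no browser implements anything like it today:

    {
      "extension": "Instapaper",
      "trigger": "click-on-extension-button",
      "data": "current-tab-url",
      "third_party_origin": "instapaper.com"
    }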
The issue is typically that the browser itself doesn't cleanly split those roles.
Take something like Vimium - it's an extension that lets you use keyboard shortcuts to navigate webpages. You could "scope" its interactions to adding keyboard shortcuts and modifying the page to indicate links: no external calls, etc.
But it could still add a script to the dom (and it would execute) or it could open a hidden iframe and do a bunch of wild stuff with that.
> But it could still add a script to the dom (and it would execute) or it could open a hidden iframe and do a bunch of wild stuff with that.
Maybe right now this stuff is a free-for-all, but shouldn't these scripts + iframes be executing, in some sense, under the CORS origin of the extension? (Not quite CORS in the traditional sense—you do want e.g. Greasemonkey to be able to mess with pages on all domains, even ones that don't want it to—but you could re-use the CORS origin-tracking, and just relax the rules slightly where extension:tab interactions are concerned, while still having rules.)
How would you implement navigating within the page without executing in the context of the page? For example, jump to the next form field requires finding the form fields.
By having Chrome build explicit support for this kind of stuff, and creating an API around it.
It seems plausible for Google to inspect its top 500 extensions and figure out a set of APIs that support all of those needs without giving unlimited read/write/network/execution access to the extension. Just look at how Safari created its own ad-blocking API that's faster and more memory-efficient without giving away URLs to the part of the extension that has network access. This model should be expanded upon.
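For concreteness, Safari's content-blocker rules are declarative JSON that the browser itself evaluates, so the filtering logic never sees the URLs it acts on. A minimal rule looks roughly like this (the domain is illustrative):

    [
      {
        "trigger": { "url-filter": ".*", "if-domain": ["*tracker.example"] },
        "action": { "type": "block" }
      }
    ]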
I'm being serious. It's been a long time since I've developed a Chrome extension so I don't follow them closely, but it seems to me that this is a good start, and there are many areas where Chrome (and other browsers) could go even more fine-grained.
I'm not sure why it's unpopular here (perhaps it's the implementation vs. the concept?), but it seems to me that with the realities of malware constantly being distributed through extensions, in addition to the obvious privacy issues, many reasonable people would wish to see this evolution.
Personally I use almost zero browser extensions because of these issues.
So if that’s true, then they’re not doing as I suggested: take a fine-grained approach to the top 500 extensions (including adblocking) to make it possible to create them without having full read/write dom + networking. I believe the content blocking APIs in Safari are a great start and could be taken so much further.
Not at all! Webpages can still do as they like. That’s general purpose!
We’re talking about extensions distributed on a store that often end up with malware. I’m not even necessarily advocating for them to remove the ability to do anything they wish (yet)... but let the browser catch up to already do what these extensions want in a more secure way. What’s wrong with that?
> Webpages can still do as they like. That’s general purpose!
That's general-purpose for the third party. Not for the user! This is the whole thing the "war on general-purpose computing" is about - whether the software serves the user, or whether it serves its creator and third parties it trusts.
> let the browser catch up to already do what these extensions want in a more secure way. What’s wrong with that?
There's nothing wrong with that per se. My problem is with the part that involves removing users' ability to arbitrarily alter the behavior of a website.
Or make that "reviewable back-end with an option to host it with a trusted party". I also don't mind paying for a review if that improves the quality of the review (when it is appropriate, of course).
Sadly they do because security holes or bugs can be found in extensions, and those can affect third-party websites. Project Zero has found many bugs of that sort and getting them patched on users' machines is very important.
If the extensions' permissions are extremely limited the risk profile becomes low, but that makes them far less useful.
Ordinary Joe is never going to wrap his head around concepts like permissions or site-level isolation or sandboxing, and may not even understand something like extensions or plugins. Any model of software that presents the average user with choices that depend on understanding such concepts is likely to lead to the user clicking "yes" or "allow" without even a rudimentary clue of what they're allowing. Software engineers need to basically bake in the functionality of a portion of the most-used extensions.
I disagree; the problem is how those permissions are labeled and presented. Why does my flashlight app need to use my camera? It turns out you need camera permissions to light the flash. Why does this game need to read my photos and files to work? It turns out it needs to write some data to external storage.
These permissions should never have been lumped together. At this point, the browser's "this extension needs access to all sites" is a joke, as every extension requests such a laughably broad permission.
But very often it's sloppiness. On Android you can write data just fine without permissions, just not everywhere.
When I encounter an Android app that needs file permissions for no apparent reason, I assume malicious intent.
Edited to add: I wrote an Android app which has data import and export. You don't need file permissions for either, at least not on newer Android versions.
It also shouldn't be too difficult to restrict mobile apps to specific folders - so those that need to actually access all photos have to ask so explicitly, while others just ask to store files...
Recent versions of Android do have a picker that lets you choose a folder for the app to have access to, but everyone just uses the scattergun "all of it" permission, I think.
This article could also be titled "The Case for Being Cautious About Software You Execute", and with a few tweaks to the text it could support the sentence below.
"If you go to a bad website, it might cause bad things" is ~=~ "If you use a bad extension, it might cause bad things"
I think that the advice to be wary of extensions adding permissions is quite astute and a good reminder to each of us to make sure we are using well-vetted (to our own satisfaction) software, in-browser and out.
But I'd posit on the whole that browser extensions do much more good than harm. uBlock Origin has stopped many a grandma from clicking a false download button-in-banner-ad.
Add to that the heavyweight tools of NoScript, uMatrix, Privacy Possum/Badger, absolute enable right click, SingleFile, Decentraleyes, and more that I've yet to learn about, and they greatly outweigh the drawbacks of the likes of extensions mentioned in the article and 'web of trust' etc. that have gone to pot.
I react so strongly to this not because I disagree with vetting software, but because I don't want browsers having yet another excuse to yank control and features away from me as a user. I'm already nervous that Mozilla will coyly refuse to support all extensions in their new browser, replace Fennec with a worse system and I will be stuck with it.
Chrome on Android supports no extensions, and is tightening the screws on extensions on desktop, so this is not a slippery slope I'd like to get on.
This is an oversimplification of the problem. Malicious websites run under a different security model than extensions.
Not to say I am for restricting extensions more, but just to vouch for the idea that there is, in fact, a valid and novel point to treating extensions cautiously versus other kinds of software. Even programs running locally and unsandboxed will have to work quite a bit to compromise a browser.
Currently an extension either requests access to a hardcoded list of websites, or it requests access to all websites. I really want the ability to install an extension for a particular website of _my_ choosing.
You can do this in Chrome. Just go to Manage Extensions and then change the selected radio button from "On all sites" to "On specific sites". Then enter the sites you want to be affected, using wildcards if you want.
I recently discovered this because a few of my extension's users wanted us to create a whitelist feature. We had previously built a blacklist feature and were dreading creating a whitelist also, and figuring out how to educate users about what these two features are, and how to use them. Then I discovered that Chrome (and Brave) offer this whitelist feature on all extensions. Problem solved!
Sadly (and surprisingly), Firefox appears not to offer this feature.
I would really want the ability to disable network access for a select extension. So an extension can look at the DOM all it wants or read the webpage but can never communicate to the outside world or access the local file system outside of its directory.
> Currently an extension either requests access to a hardcoded list of websites, or it requests access to all websites. I really want the ability to install an extension for a particular website of _my_ choosing.
Extensions can list this as an optional permission and request access to a specific website on demand.
Chrome has supported this for a long time, but I'm not sure if Firefox supports this permission or not.
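For what it's worth, here is a rough sketch of how that on-demand flow looks with Chrome's optional-permissions API; the API calls are real, but the site, file name, and surrounding wiring are illustrative:

    // manifest.json declares hosts as optional rather than required:
    //   "optional_permissions": ["https://*/*"]

    // Later, in response to a user gesture such as a button click:
    chrome.permissions.request(
      { origins: ["https://example.com/*"] },
      (granted) => {
        if (granted) {
          // Only now does the extension touch the approved site.
          chrome.tabs.executeScript({ file: "content.js" });
        }
      }
    );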
Seems like a culture problem created by improper management by Mozilla and Google.
After all, when I run apt-get upgrade, any of the updated packages could install malware that could do anything at all.
Instead of programmatically limiting what extensions can do, which seems very difficult to do while preserving useful functionality, they should study what makes the debian packaging system so trustworthy and implement that.
Debian package maintainers who allow malware to slip through are likely to be forced out of their post by the community backlash. There is no similar accountability for Mozilla/Google add-on reviewers. The add-on gets taken down, but what consequence is there for the employee who allowed it to pass review? A company apology is fine from a PR point of view, but it also means that employees will not take their job as seriously as they would if a real demotion or consequence were on the table.
How much of a $10 wrench attack / rubber-hose attack / bribe do you think it takes to target a Debian packager? This seems like a pretty weak defense.
Debian is simply a less juicy target than consumer-focused stores. Debian has a much more technically aware and vigilant user base. It's also probably being used in places where there's constant monitoring for security breaches, etc. Not to mention that Debian can move really slowly. Default repos have nginx at 1.10 (when 1.16 is the latest), node is stuck at v4, and postgres is at 9.6 (when v10, v11, and v12 exist).
Why would you attack a store where you can be burned in less than a week when you can attack stores with millions of less technical users who are easier to fool and exploit?
Data handling transparency? Great discussion in terms of how to "regulate" extensions. I feel that it shouldn't technically be that difficult to put in data handling transparency: basically, analyze the code quickly and figure out that this extension sends x, y, z data to a, b, c servers named/owned by 1, 2, 3. That would be a start. Google doesn't allow code obfuscation, controls the code/calls that can communicate with servers (and thus can keep them easy to find and analyze), and hosts all the code, so it can analyze each update/extension and provide an automated "data handling transparency" report for each extension (with comments/details, perhaps mandatory, filled in by the extension owner).
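As a purely hypothetical example, an automated report of that kind might look something like this (no store produces anything like it today; all names are invented):

    {
      "extension": "ExampleShoppingHelper",
      "observed_data_flows": [
        {
          "data": ["page URL", "search terms"],
          "sent_to": "api.example-analytics.net",
          "server_owner": "Example Analytics Inc.",
          "owner_comment": "Used for price-comparison lookups."
        }
      ]
    }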
It's just wild how blind we are to our privilege on this. To me, it's pretty clear that I should be SUPER circumspect about allowing software on my machine or in my browser. I need to trust the author, and figure if the additional complexity really pays for itself in additional utility.
I'm sure most HN readers are like this.
Our nontechnical friends just add extensions and other cruft willy-nilly with no thought to the implications or risks.
I periodically update my software. It’s worth mentioning that major version updates can frequently bring new unknown security bugs, too.
Also, I generally rely heavily on access controls, so that vulnerable software is only really accessible once strong authentication has been performed. TLS/SSH endpoints are pretty much the only things exempt from that.
Time to stop calling them user agents and start calling them vendor agents. There is a conflict of interest in these vendors' control of web standards; it is in their interest to keep the barriers of entry into the user agent market very high, so that there can be no timely effective response when they overstep their bounds.
Chrome and Firefox both vet browser plugins on their platforms. I've found Firefox to be much more thorough in this regard, to a fault. In some ways they are even more difficult to deal with than the Apple App Store, in my experience. For example, they sometimes decide to re-review your add-on, even if you haven't submitted a new version in months. At least with Apple, you can (mostly) control when your software is and is not reviewed.
In the early days of mozilla.org and even well into Firefox's heyday, Mozilla was more akin to a co-op. It was a neutral ground where folks associated with different vendors and institutions participated from all corners. Students. Independent researchers. Engineers from Google. Engineers at HP. Engineers at Red Hat. Even folks from Opera. Some people were being paid to work on Firefox full- or part-time, but most of their paychecks were signed by those other companies, not Mozilla. People who were on Mozilla payroll were generally employed by the Foundation with an official job title that involved keeping mozilla.org infrastructure running.
Some people might superficially look at the Mozilla Corporation and assume that this all ended there, but they're wrong. When Mozilla Corp. got spun up and Google signed a deal for search engine royalties, employee numbers stayed in the very low hundreds for years, and the search deal was largely seen as similar to another donation of sorts—Firefox had already defaulted to Google before the deal, because it was the only thing that made sense for the developers and other people actually using Firefox.
Somewhere along the way, though, between Google pulling engineers off Firefox to launch Chrome and the introduction and rapid adoption of the iPhone and mobile Safari, the powers that be at Mozilla decided to make a business play, which is more or less where the stillborn FirefoxOS came in. Whereas before Mozilla was a vendor-neutral org working on what amounted to a reference implementation of a modern user agent as its contribution to the digital public infrastructure[1], the company began pursuing a role as a Bay Area bit player, formed a business plan, and began competing for part of the pie, both on desktop and mobile. It went on a hiring spree, shifted most things out of the mozilla.org governance model to live underneath the Corporation's org structure, othered contributors not employed @mozilla.com, and hired execs from places like Adobe—folks with business experience to try and run the show and make it all work.
Of course, it didn't really work. It just destroyed the Mozilla that used to exist.
So now we have Mozilla today, which is still limping along in that space. It's got the same name, but it's really just running on fumes, trading on that name and the goodwill that the Mozilla community fostered in the earlier era. But that doesn't mean that when you see people talk about "Mozilla", they're referring to the floundering business that we see today.
All it takes is the ability to revoke permissions without the extension knowing. That way users can't be conned into unlocking broad swathes of sensitive data under the threat of the extension not working. This works well for apps on LineageOS.
No, an extension that has a good reason to inject a script into every webpage can, with the same permissions, exfiltrate user data under a new owner. Almost all extensions, and thus users, are vulnerable to this. There is no higher privilege the script needs to ask for.
A root cause for this is the incredibly unwise decision Google made to allow extensions to request access to All Websites (including https) without even a scary nag warning. The description for that permission kind of understates the danger and because it had no negative impact on installs many extensions requested it that didn't need it.
Worse still, the correct approach - specifically listing pages you need access to - has been broken for ages, such that if you add a new URL your extension will silently break until users go into a popout menu and turn it on manually. This discourages doing the right thing. If you want your extension to keep working you need to request wildcard access from the beginning, because Google's update flow for new permissions is broken by design.
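Concretely, the choice being described is between these two manifest stanzas (host names illustrative; in manifest v2, host access is declared under "permissions"):

    // The "correct" narrow grant, which silently breaks for existing
    // users whenever an update adds a new URL:
    "permissions": ["https://mail.example.com/*"]

    // The wildcard grant the parent describes, which never breaks:
    "permissions": ["<all_urls>"]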
Sadly Mozilla followed in their footsteps and the warning for that permission isn't much better. In my opinion any extension with that permission should be forced into a slower manual review cycle. Google does have the ability to do this - my extension was forced into a slow manual review cycle for a month after a fraudulent DMCA claim - but for some reason they opt not to do it. It would also be very beneficial for them to apply additional controls like requiring use of two-factor authentication and a code-signing key before uploading extensions with those permissions to reduce the risk of malicious code sneaking in. You can't completely safeguard it but manual review + aggressive authentication would help.
As-is, thousands of extensions used on a daily basis have the ability to steal people's gmails (including password reset emails), issue password reset requests, send emails, or manipulate your bitcoin wallet. A well-crafted extension could do this near-invisibly in a way the average user would not understand. Google has taken some steps to mitigate this, but it's taken them far too long and they haven't been aggressive enough.
Security-wise, web extensions are worse than desktop software has been in over a decade. UAC, sudo, the iOS app store - all dramatically more effective at protecting users. With web browsers being used daily by a billion+ people it's really unfortunate that this security blindspot has remained so long.
EDIT: It's kind of sobering to consider AddonJet's promised payout of $2500/day for 100k users. I maintained an extension with roughly 120k weekly actives across the globe for about 3 years and it mostly cost me money. Monetizing that would have covered medical expenses and rent for my family, plus potentially college for my nieces and nephews... oh well.
To me this seems a very simple first step:
just require extension devs to notify people on owner change, possibly in a standard way...
just a lil popup or icon asking "extension X has been acquired by Y; it has permissions A, B, C, D, are you OK with this?"
Yes, all of it. The title is actually "The Case for Limiting Your Browser Extensions" and it suggests the user should limit them, not that the browser vendor should restrict what is allowed. The current HN title made me jump to that conclusion as well.
But the entire article is a demonstration that most users don't have the ability to do meaningful due diligence on their extensions, and even then they are helpless when the ownership changes hands to nefarious actors.
Frankly I'd be in favor of a piece of regulatory legislation requiring that all software which features an automatic update mechanism be required by law to inform their users of a change in ownership and obtain affirmative consent to begin receiving new versions from the new owner.
I'd be in favor of a piece of regulation that requires the extension declare its scope of behavior (e.g. "no analytics", or "no data exfiltration"), and then any automatic update that breaks this declaration means the developer is subject to fines and/or imprisonment.
Most of the extension problems boil down to either a) malicious developers, or b) developers of popular extensions who take money to include malicious functionality, or who sell the extension to a third party.
Regulation of whom? The extension vendor registered in Estonia?
Of software vendors in general? Expect some riders in that bill that you're not gonna like.
From whoever is responsible for various international agreements around pursuing crimes on the Internet. I'd like to see malware and adware criminalized.
But yeah, this isn't going to happen in the real world.
This seems like a good idea to me. Maybe have some kind of a clause that this notification has to be easily understood by the user so that it won't be hidden in a wall of text.
I'm not sure I agree. Krebs doesn't end this article saying that extension stores should lock down what they're offering. He ends it with a user-centric call to action: that users must do meaningful due diligence, that they should be highly conservative about installing extensions where they haven't vetted the authors, and that they should pay special attention to how extensions change over time.
I'm pretty biased in this area, so I lean towards a certain worldview and interpretation. But having said that, another way to look at this article is as a repudiation of centralized, vendor-controlled moderation in general. Moderation has utterly failed to remove malware from browser extensions. The only way to keep yourself safe is with personal diligence. You can't trust Google or Mozilla to do it for you.
When Krebs of all people is saying that he's often too nervous to install extensions from these stores -- that means the store security is not working. That means that locking down devices/software has not made us safe.
He's making this case against one of the least gate-kept stores in the ecosystem, and is writing from the perspective of a power user. It is obvious to him and you (and me, for that matter) that users' due diligence is non-substitutable, but in a corporate vendor context, whether for a browser or an OS, this effectively makes a case for an iOS-style walled garden, with the vendor reaping the inherent benefits of revenue and ecosystem control.
I won't dispute that a vendor could walk away with that conclusion. I will dispute that they'd be correct to do so.
I suspect that everything I'm about to say is just violently agreeing with you, and every point I'm about to make is already obvious both to you and to most of the people on HN.
But it's still worth saying -- if you can't protect advanced users, you can't protect normal users. A technical user is (usually) going to be safer about what they install than a normal user, they're going to recognize weird software behavior, they're going to react more quickly to threats, and be less prone to doing dumb things because a random website told them to.
Making power users like Krebs feel safe is the minimum bar. Krebs should be the easiest person in the world to secure, because he's already being careful. So if Krebs isn't willing to install software, then the security model is utterly broken, not just for power users but for everyone.
I think there's some value in attacking app store monopolies from that angle -- pointing out that they're not just a threat to user freedom, they're also security theater. They don't work. Whether or not Krebs actually got that point across effectively... :shrug: