Adware vendors buy Chrome Extensions to send ad- and malware-filled updates (arstechnica.com)
160 points by uladzislau on Jan 18, 2014 | 100 comments



I was thinking about this just the other day. The movement towards auto-update-everything is a massive security breach. Browser extensions are a prime example. So much of what we do is centered around the browser, and yet we allow essentially random pieces of code complete access to all that activity. Even worse, this code is allowed to download and run new versions without any user interaction. Needless to say, the potential for harm here cannot be overstated.

Auto updating anything but the most critical and most trusted software is absolutely bone-headed and we're going to look back and wonder how we could be so stupid.


> Auto updating anything but the most critical and most trusted software is absolutely bone-headed

Yep. It's kind of mind-blowing that Chrome defaults to auto-updating extensions, then lets just anyone upload new versions of them. (Is it even possible to disable this idiocy?) I take some perverse joy in the fact that someone has "monetized" the auto-update treadmill. Maybe it will encourage people to think about the consequences of letting random code run on their computers and change whenever someone else decides it should.


If a piece of software can connect to the internet, it can implement its own autoupdate. For extensions it would be trivial. People even commonly did it with Greasemonkey scripts.

The reason Chrome includes this functionality is because the hand-rolled solutions people come up with frequently have security problems.
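
To make that concrete, a hand-rolled updater of the sort described here tends to look roughly like this (a hypothetical sketch, not code from any real extension; the URL is a placeholder): fetch new code over the network and run it, with no TLS and no signature check.

    // Hypothetical hand-rolled "autoupdate" of the kind described above.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://example.com/myextension-latest.js"); // plain HTTP, nothing verified
    xhr.onload = function () {
      // Whoever controls example.com (or the network path to it)
      // now controls everything this script is allowed to do.
      new Function(xhr.responseText)();
    };
    xhr.send();

The store-driven update path at least gets transport security and package signing right, which is the point being made here.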

Also, auto update gives good developers a way to fix their own security bugs when they are discovered.


We should be asking ourselves why we allow extensions with so much power and no meaningful security model. Auto update is a misfeature if I've ever seen one (again for anything but the most critical pieces of software).


Autoupdate is not a "feature", it's a consequence of Turing completeness combined with network access. Any platform that has these two properties has autoupdate, and there's no technical way to prevent it.

The autoupdate feature exposed to Chrome extensions is not there to make autoupdate possible, it's there to prevent developers from shooting themselves in the foot by implementing their own autoupdate insecurely. It's similar to a platform providing crypto libraries rather than letting developers implement crypto themselves.

The only way to try and prevent software from autoupdating is with manual review, which is what Apple does with iOS. However that has a whole other host of problems.

As for security model: I don't know how to measure "meaningful", but Chrome extensions have the most fine-grained security model of any extension platform. In fact, it is more fine-grained than most general purpose software platforms:

http://developer.chrome.com/extensions/permission_warnings.h...

The problem that started this thread is real, but it is not solved by disabling autoupdate (which is impossible) or adding more "security model" to the platform.

In fact the problem is very deep: How do you balance the desire to allow flexible software on your platform (adblocking, network stack interjection, manipulation of the user interface, etc), with the desire to limit the potential harm malicious actors can do? These two goals are in conflict. Existing best answers that humanity has come up with include some combination of "manual or automated review", "social signals" (stars, reviews from your network, etc), "blacklisting", "controlling access to the platform" (iOS apps can only be deployed in large numbers through the store), and "limit the power of the platform where possible with clever user interfaces" (the file upload button in HTML is a classic example).


I can't help but roll my eyes at these "Turing completeness" arguments. The obvious way to prevent auto updating is to simply disallow an extension from modifying its own storage and to prevent it from running any code it manages to download. We don't operate on abstract Turing machines; we can impose whatever limits we want on the code we allow to run. Now, I'm not saying this is easy to accomplish, but impossible it is not. If Google can manage to allow machine code downloaded over the internet to run securely like it claims, I'm sure it can handle a little javascript.


"The obvious way to prevent auto updating is to simply disallow an extension from modifying its own storage and to prevent it from running any code it manages to download."

You can remove eval() (In fact eval() is disallowed in Chrome extensions, for different reasons).

But how do you prevent an extension from including an interpreter for some other language and simply downloading and interpreting code in that language? This isn't crazy. It's common for games to include interpreters, for example.

Including an interpreter for a simple language is just one step in complexity beyond downloading configuration files. Is downloading configuration files that change the behavior of the code forbidden in your proposed system too? How is that accomplished?

"If Google can manage to allow machine code downloaded over the internet to run securely like it claims, I'm sure it can handle a little javascript."

NaCl has the same properties I'm describing. NaCl enforces a sandbox on what a hunk of code has access to; it doesn't enforce that that code won't change its behavior over time, possibly in response to data from the network. That is not possible to do.


What you say is certainly true in general, but it doesn't take into account the threat model of a would-be malicious extension. The threat model here is that someone decides to significantly alter the behavior of an extension after it gains traction and a certain level of trust. For this to work, the extension would have to include an interpreter or some other such mechanism from the start. They would essentially have to think, "if this gains traction I may want to start silently injecting ads/stealing information, so why don't I include this interpreter now just in case." While possible, it isn't very likely.

The other problem with the security model is that all extensions are seemingly created equal. I would think that a large majority of extensions do not require access to any external resources besides the ones downloaded by the webpage itself. Think of a youtube downloader, flashblock, etc. These types of extensions should have a different security model, one that restricts web requests to the domain of the page and perhaps any domains that page calls (to take into account CDNs). Installing these extensions shouldn't require weighing any security implications at all. There's no reason why a youtube downloader should have the potential to steal my passwords down the line. More involved extensions that require generic HTTP requests could keep the warning about data access.

I see in your profile that you know a thing or two about Chrome extensions, so I will defer to your expertise on what's possible. It just seems like with the proper constraints for different levels of access, one could have much greater security than we have now treating all extensions the same.


If we didn't provide an autoupdate mechanism, then developers would just implement their own. Frequently, these handrolled systems would have security flaws. We have an existence proof of this: it happened in the Greasemonkey ecosystem.

We might propose that we could prevent these hand-rolled systems by restricting eval(). Well, we already restrict eval for different reasons. What we see is a lot of people working around the restriction by injecting code into websites (!!).

My bet is that basically any significantly sized extension would implement a workaround for any autoupdate restriction we tried to employ. Developers really like the ability to update their product, and the workarounds are not that hard. And these workarounds would be worse than the original problem - that sometimes good people turn bad.

It would also destroy the value that autoupdate provides which you are forgetting about: most of the time it is used by good people to do good things. We frequently find extensions with security problems, tell the author, and then they fix and push them to users. Without autoupdate this wouldn't be possible.

===

As for your proposal for how to restrict the levels of access extensions have... As I said above, we do this kind of thing already. You can read all about it here: http://developer.chrome.com/extensions/permission_warnings.h.... We have a very granular security system.

For example, it has always been possible to write a youtube downloader Chrome extension that only has access to youtube.com.
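
For instance, a manifest along these lines (a minimal sketch with illustrative names, not any particular extension) confines everything - host permissions and content scripts - to youtube.com, so the install warning only mentions that one site:

    {
      "name": "Example YouTube Downloader",
      "version": "1.0",
      "manifest_version": 2,
      "permissions": ["*://*.youtube.com/*"],
      "content_scripts": [{
        "matches": ["*://*.youtube.com/*"],
        "js": ["downloader.js"]
      }]
    }

An extension packaged like that can't read data on other sites, and a later update that widens the permissions list trips the disable-until-the-user-reapproves flow discussed further down the thread.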

Flashblock would theoretically be possible with the upcoming declarativeWebRequest API (http://developer.chrome.com/extensions/declarativeWebRequest...).

However, designing APIs that carry narrow risk yet are flexible enough for developers to want to use remains a difficult problem, unsolved except in specific cases.


Thanks for the info. I had no idea Chrome had that; I don't think I recall installing an extension that DIDN'T give me a warning about access. It seems like developers just stick to the more unrestricted permissions out of convenience or narcissism ("must phone home my youtube downloader, for science!"). There's definitely something out of whack here if developers won't use the restricted access when their extension fits nicely within that model. It would be nice if it could be shown that extensions are penalized in terms of downloads by requiring unnecessary access. As it is, I've stopped using all extensions in Chrome except adblock.


Can we start a petition for Google to let us disable extensions on specific sites? After reading the last few stories about this, I am quite sure I don't want any extensions whatsoever running in the same tab as my Gmail account. I think there is some extension that does this for you (turns off other extensions per site), but then we get into a "who guards the guardians" situation.

Not to mention we need better and finer grained permissions for extensions in general, now that we use so many web apps with crucial data.


I achieve this by having multiple users in Chrome. My main user is signed into my Google account and has no extensions installed and all plugins disabled; I only access Gmail and Google services with it.

My second user account is logged into basic services like HN, reddit, amazon and has only adblock and disconnect installed.

A third user is not logged in anywhere and has adblock and a half-dozen other extensions installed including UA switcher. No plugins installed and cookies + cache cleared every day.

A fourth user has all the anonymity extensions installed and has Privoxy + Tor set as the proxy.

I use Facebook in a completely different browser again, YouTube and video watching in yet another and development in chromium.

That's 7 or 8 different cookie stores, plus a throwaway temp email account associated with the third and fourth users for signing up to services.

Start out by creating a separate user for browsing sites and eventually develop your own way to split up your web browsing profiles.

This can get a bit messy when you try to access things from a tablet or mobile, but I'd rather sacrifice browsing history and remembered passwords than have a single large profile and a huge exploit surface.


honestly this sounds like a pain in the ass


Start off with two profiles and go from there. I use virtual desktops (spaces on OS X) to manage it. Space 2 is gmail, space 3 is logged in sites, space 4 is development, etc.

Once you get used to it you instinctively switch without thinking about it

I also created an app for OS X that creates temporary throwaway browser sessions for any supported browser you have installed:

https://github.com/nikcub/tmpbrowser

Makes it easier than using command line options and creating users


What exactly are you trying to achieve with that, if I may ask?


What exactly are companies trying to achieve with very precise, self-healing tracking methods?


Serve you more relevant ads? I don't really know. Which companies do you mean anyway? And what are these self-healing tracking methods?


Security and privacy: separate rings of trust for different websites. To me, the idea of trusting every website on the internet with all of your cookies, extensions and plugins is crazy.


Look into running separate sessions of your browser(s). Both Firefox and Google Chrome (or Chromium) allow you to do this, although the interfaces for doing so differ.

A simple way to do this is to use different brands of browser, e.g. Gmail in Chrome and everything else in Firefox. But... if you really prefer one browser over another, for all use, then the separate profiles thing works.

Note that in Chrome, this is now confused by the ability to change Google account log-ins. That is not a separate, browser-level profile with separate configuration.

Instead, you are looking for the command line invocation argument --user-data-dir (in *NIX, at least; IIRC the flag name may differ slightly in the Windows version).

For Firefox, there is the -p flag. IIRC, you have to combine it with another flag in order to ensure both that the profiles are running in separate invocations and that you can be prompted to choose what profile to use when you invoke Firefox.

Of course, you can create menu items / icons for these invocations to make them "clicky" and avoid having to go to the command line and enter them each time, if you prefer.

P.S. Yes, this will help you less if you insist upon clicking directly on/through links that are e.g. mailed to you or, if you have Facebook in its own "box", posted on Facebook.

From that perspective, having per site browser extension variability might still be useful. But then, you're still looking at also controlling referer passing, cookies and other local data, etc., etc.


To run a new, separate Firefox instance with a (possibly) different profile:

    firefox -no-remote -ProfileManager
It's always handy to have a "vanilla" profile, to compare how much the extensions have slowed the browser down or to work out whether an error you're seeing is caused by an extension. Having a "privacy" profile with some ad-hoc extensions helps too.


Mind you, -ProfileManager actually opens to the full profile manager interface (where you select a profile to run, or create a new one, or whatever). You can load a specific profile (that already exists) directly by replacing "-ProfileManager" with "-P [profile name]". (Omitting the name will open the manager, too.)

https://developer.mozilla.org/en-US/docs/Mozilla/Command_Lin...


Ah, I guess that's why I remembered -p -- or -P, as the case may be.


Thanks. Sorry I mis-remembered the flag(s). And be sure you're using both, to make sure the separate profiles do not share the same process or something like that (again, from memory; Google can quickly turn up the details).


Your memory is right. Without -no-remote you would end up spawning another window from the currently running Firefox. Without -ProfileManager you can't choose a different profile.

It's also a good idea to use different themes per profile (and I see you suggested it too).


I tried running a Chromium session with no extensions for the stuff I want to be more secure (email/banking/etc.), while using Chrome for everything else, and this does work in a way. But I find that I tend to forget to switch to the other browser sometimes. A solution which said "don't run extensions on mail.google.com and online.my_bank.com" would be a lot more convenient.

The separate browser solution does protect from tracking, but with the security threats of today, like these malicious extensions with access to all your data, I've become desensitized to mere tracking.


I changed the color scheme for one profile (although, that involved installing and trusting the color scheme; I got mine directly from Google's site as opposed to a third party site).

An extra cue, when the border background, tabs, etc. look different in one versus the other. Still hardly foolproof...


Opera 12 has the ability to disable access of specific extensions to HTTPS sites and/or private tabs (by default, access to HTTPS sites is enabled and access to private tabs is disabled). Maybe there is hope that they will implement it in the Blink-based Opera, but for now it can only disable access of specific extensions to private windows, and it has no private tabs at all.


Opera 12 is ancient history. Opera 18 is now basically Chrome without the ability to set a custom search engine as default. Even with sqlite hacks, there's no way to set DDG as default on the stable release versions.

(I use FF.)


Opera 12 is the latest version on Linux.


That's a good idea but only a crutch. I don't want extensions invading my privacy on any site. Finer grained permissions would be ideal.


Once an extension can modify the DOM (and most extensions need to), you lose any hope of meaningful permissions. From injecting JavaScript to sending data out by modifying an img[src], there's no way to protect your privacy. I don't think permissions are a viable model here; it's more a problem of trust and auditing.


Extensions have the option of working only on a set of domains. So you could install Gmail extensions that work only on gmail.com rather than on *, as 99.999% of extensions do. Some extensions genuinely need global access, like referrer blockers and user agent spoofers, but we only need those because Google actively removes such functionality from Chromium, on a regular basis, after someone in the community adds it. Over and over again.
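
A hypothetical Gmail-only extension, for example, would declare its content script with a match pattern confined to mail.google.com (a manifest fragment; the file name is illustrative):

    "content_scripts": [{
      "matches": ["https://mail.google.com/*"],
      "js": ["gmail-tweaks.js"]
    }]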


Chrome doesn't run extensions by default in incognito mode.

> Because Google Chrome does not control how extensions handle your personal data, all extensions have been disabled for incognito windows. You can reenable them individually in the extensions manager.

Keeping your gmail tab in an incognito window might be a good approach.


Or just use Thunderbird. I can't be the only one who doesn't like Gmail's wonky interface.


If I didn't use gmail through the web, I probably would use a different provider.


While it's a decent workaround, I wouldn't call it ideal, namely because it requires a second window open when you might be starved for screen real estate and also because accidentally opening your mail once in a normal window could cause damage.


The Ghost Incognito extension allows you to force certain sites to always use Incognito mode. I realize using an extension to accomplish this objective is somewhat ironic, but seems like it'd work in this case if you can trust that one extension.

https://chrome.google.com/webstore/detail/ghost-incognito/ge...


> petition for Google

Don't you mean a PR on the Chromium project?


From the comments:

  It sounds like Google needs to flag ownership changes and NOTIFY USERS about them 
  before the next auto-update of that extension.
  
  ------------
  NoteBuddy has been transferred from Joe Garage to Russian Mafia LLC. 
  Do you want to keep this extension enabled? 
  [ Da ] [ Nyet ]
  ----------- 
  
  - DaveSimmons

While this may seem like the most convenient step, it's actually not that simple either. The majority of users don't want to be notified of too many things, and more notices may just be ignored. This is especially true of users with many extensions. As an alternative, it's possible to put extensions into a probationary period (I don't know if they already do this) when an extension is first created or its ownership has changed.

Extensions are reaching the wild-west state of the Play Store, and no one's happy about that except malware/adware authors. If Google continues to serve more free users than it has the quality capacity for (already a problem with every other service it offers), it won't be long before people move on.

And they will move on. Contrary to what most people believe, no one rules a domain forever. Whether it's the big iron of IBM, the telecom of Bell, the OS of Microsoft or the services of Google, someone else will eventually muscle in on your domain.


This is a disturbing situation, but it's hard to say what the best way of dealing with it is. The first thing most people reach for is that extensions shouldn't auto-update. Personally, I disagree - I love silent auto-updates in general. It's a huge drag on the computing experience to have dozens of different widgets all requiring manual updates, all with different mechanisms and all on their own schedules. If I had to pick the one most annoying thing about running Windows, that would be it. The biggest appeal to me of Chromebooks is silent auto-update of everything. Don't bug me about updates, just make everything always run the latest, most feature-filled and bug-fixed versions.

I'd lean towards using a finer-grained permission model for extensions, and javascript in general. As far as I know, most extensions now can only request permissions to run arbitrary JS on every page you visit, and send arbitrary AJAX to any URL. Perhaps we could do a better job of locking that down. Have no such thing as a permission to AJAX any URL, but instead make AJAXing any particular TLD a new permission. Maybe we can also only allow access to DOM elements originating from particular domains. Then you could keep auto-update, but require a new user authorization for any new URL/domain access that an extension needs. It'd probably get pretty complex, but I think we could put together something reasonable. But then the other problem is the pile of plugins out there already with those arbitrary permissions. Maybe you could let them stay, but not allow any updates without switching to the new permission model?


Rather than only doing it via technical means like permissions, I'd be more comfortable with auto-updating but with some kind of human quality assurance. Two systems that manage to pull that off from very different cultural/economic starting points are Apple's app store, and Debian's software repository.

The Google model of an auto-updating but un-QA'd app store doesn't work for me, because it combines two things I really don't see as compatible: 1) low-friction updates; and 2) installation of arbitrary un-reviewed code from the internet. If you're going to do #2, then I want the friction of downloading a new executable. I want to go to a website, see if the company still exists, read the release notes, generally be cautious about installation of random executables off the internet. But if you're going to do #1, then since the updates are supposed to apply without significant review by me, someone else has to be vetting what goes into the repository for at least minimal non-evilness standards.


This is what happens with AMO-hosted Firefox extensions. Whenever the extension author pushes out an update, it first has to go through review as described at https://addons.mozilla.org/en-US/developers/docs/policies/re...


> Rather than only doing it via technical means like permissions, I'd be more comfortable with auto-updating but with some kind of human quality assurance.

I agree on this.

The best solution for now would be a meta-extension that checks whether you have compromised extensions installed and disables them.

The blacklist could be compiled based on Store feedback (ratings dropping sharply? disabled), a reporting system within the app, and automated testing - for example, running the extension on a sandboxed machine and checking for requests to known shady domains.
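
A bare-bones sketch of that meta-extension idea, assuming the "management" permission is declared in its manifest (the blacklist ID is a placeholder, and how the blacklist gets compiled is the hard part described above):

    // background.js of the hypothetical meta-extension
    var BLACKLIST = ["aaaabbbbccccddddeeeeffffgggghhhh"]; // placeholder IDs of known-compromised extensions
    chrome.management.getAll(function (extensions) {
      extensions.forEach(function (ext) {
        if (ext.enabled && BLACKLIST.indexOf(ext.id) !== -1) {
          chrome.management.setEnabled(ext.id, false); // disable it
          console.warn("Disabled blacklisted extension: " + ext.name);
        }
      });
    });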


I don't understand the appeal of updating software at all. Once a piece of code becomes feature complete for my purposes I don't ever want it to change. Change just means more things break, features disappear, or the app becomes bloated, and now apparently, may start injecting ads.

I uninstalled NoScript years ago when it started updating twice a week with apparently minuscule changes just to pop up its ad-filled landing page. It's a fucking JavaScript blacklist/whitelist app; why on earth does it need to update? If I wanted a "browser security suite" (what NoScript currently bills itself as) I'd download that specifically.


Every piece of software I use is broken in some way. Auto updates allow me to hope that they are a little less broken each day.


My philosophy is it's not broken if I don't notice it.


My point was that I notice broken things roughly every day.


The problem is with users. They ignore the scary "access to all websites" dialogs and install extensions.

Sites like Pinterest have extensions that request these permissions when they don't even need them. There's a way to have the extension only gain access to the site you're on when you CLICK its button in your toolbar.
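
That click-to-grant behavior is presumably the activeTab permission; a minimal sketch (manifest v2, names illustrative) looks roughly like this:

    // manifest.json (fragment):
    //   "permissions": ["activeTab"],
    //   "browser_action": { "default_title": "Run on this page" }

    // background.js: the extension only touches a page after the user clicks its button.
    chrome.browserAction.onClicked.addListener(function (tab) {
      chrome.tabs.executeScript(tab.id, { file: "content.js" });
    });

No "access to all websites" warning at install time, and no access to any tab the user never invokes it on.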

There are fine grained permissions and optional permissions to specific hosts and ports and URL patterns. It's all very well thought out.

The problem is with users.

For example, I have an extension that enables autocomplete on all web pages: https://chrome.google.com/webstore/detail/autocomplete-on/gd..., and it has access to all data on all websites. People install it every day; I don't know why. I would never install or trust anybody else with such an extension. I have to manually clone any extension I like and upload it to the Chrome Web Store; that way I know it's not going to auto-update and do nefarious things.


Depending on how the extension developer feels future development may pan out, requesting access to all websites is the only reasonable way to do it.

With how the update system works[1], when requesting new permissions the extension is disabled until manually reenabled. If there's even a slight possibility you may want to request access to additional sites in the future, you basically have to request "all websites" to prevent this from happening.

Since Chrome 16 there have been optional permissions around (so you only request permissions when they're needed, preventing the extension from auto-disabling), although that introduces additional overhead beyond simply requesting everything in the manifest file.
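
For reference, the optional-permissions path looks roughly like this (a sketch; the host pattern and button are illustrative), and the request has to be made from a user gesture:

    // manifest.json (fragment): broad access declared as optional, not required
    //   "optional_permissions": ["*://*.example.com/*"]

    // options page: ask for the extra host only when the user turns the feature on
    document.getElementById("enable-feature").addEventListener("click", function () {
      chrome.permissions.request({ origins: ["*://*.example.com/*"] }, function (granted) {
        if (granted) {
          // new host granted at runtime, so existing users never hit the disable-on-update flow
        }
      });
    });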

The extension update system should probably work more along the lines of that in Android - it auto-updates if the new version needs no additional permissions, but requires user input when new permissions are required. The current[1] state of auto-disable-on-update isn't ideal from either a developer or user position.

[1] as of about a year ago when I last tested this


You are right that adding new permissions will disable the extension until it is re-enabled. But Android has very similar behavior.

Still, I feel that using optional permissions and pointing out to the user why they would want to enable the new permissions is the best option. Yes, it's more work for the developer.

I just released an extension to help identify high risk installed extensions: https://chrome.google.com/webstore/detail/privacy-guard/edca...


Looks like what we really need to do is to remove those universal permissions entirely, then. Say after some particular date, no new extensions or updates to existing ones are allowed unless they remove the universal permissions and switch to the site-specific ones.

Of course, then you need a solid extension update system where there's some way to alert the user when an extension needs new permissions, and have them see and authorize those before the update goes active. Hopefully something like the way Chrome updates work now - instead of throwing a modal dialog at the user at some random time, have a tick or light in the UI somewhere indicating that they need to do something.


Minor nitpick: you're romanticizing what the autoupdate policy achieves. It does not give you the latest, most feature-filled and bug-fixed versions of things, just the latest.


I had a similar problem a few days ago ...

I'm using "Super Awesome New Tab Page" for Chrome, and since few days ago random ads started popping on youtube, ebay, dx.com, amazon etc ... Took me ~30 mins to figure out which extension it was, and removed it... They also injected <script> tags to ALL websites I visited (that loaded external JS), tracking my history and they could easily put any form/input logging and silently insert keylogger into my Chrome if they wanted. I have reported the extension then, but nothing happened yet.

Link: https://chrome.google.com/webstore/detail/awesome-new-tab-pa...


That shouldn't come as a surprise to anyone who has an app on any app store. I get at least two emails per week from some shady advertiser that wants to place ads, add notifications, gather user data or something else equally awful.

I had at least one app where I didn't pay close enough attention to the permissions it required upfront and at some point it started injecting ads into random pages (it was a stopwatch, by the way). I guess the only way to effectively counter this is a better permission system.


While the app developers may not be surprised, I strongly suspect the overwhelming majority of app users are.

And for Google to allow this to happen is unconscionable.


I was contacted and offered some payments to start including some sort of ads in my Chrome extension. As I understood it, it would inject ads and/or replace links on web pages with affiliate links.

I declined their offer.


That's just creepy. Replacing page content, unless it's ad/spam blocking or specifically user-selected filtering, is completely unethical and totally falls under malware in my book.

Good on you for turning that down. You've saved your users a lot of aggravation.


If you think about it: as formulated, virtually all present Web advertising works via content injection. You're just (nominally) limiting who gets to insert content.


First, Chrome needs to alert the user before updating an addon that has changed owners, and the default should be "do not update". This change should be pretty uncontroversial.

Second, Google needs to modify their policy towards Chrome addons to disallow ad injections and all user tracking.

There should be a clear demarcation between addons and "apps" in the Chrome store. Apps can be legitimately supported by ads. Addons shouldn't be.


Going through the few extensions that got caught doing this, I've noticed a commonality: Ecosia. This 'charity' is paying app developers to install adware.


I think Google are already taking steps to address this.

* http://blog.chromium.org/2013/12/keeping-chrome-extensions-s...


It's not just a DOM-change problem. What if an extension changes owners and starts sending (domain, user credential) pairs to someone else? What if someone buys an extension to get access to banking sites?


What's surprising is that extensions are by default not "Allowed in incognito"; however, they are allowed on HTTPS sites, and there is no option to disable them there.


If adblock didn't work on HTTPS, that'd be one way to see HTTPS massively deployed ;)


HTTPS shouldn't be thought of as just for secure sites, but as the default. 4chan and this site run on HTTPS, yet I wouldn't want extensions disabled on them because of that.


Opera seems to allow this (and it seems to give a bit more fine grained control over when/where extensions are allowed).


I would like it if the browser's DOM view would highlight changes made by an extension (at least directly; if the extension injects JavaScript, it might be easier to just show that JavaScript rather than the changes that code made to the DOM). Also, it should log these changes (although this may create security concerns) so that one could occasionally review what their extensions are doing.

Similarly, there should be a log for AJAX requests made by the extension directly.


I wouldn't be surprised if some of the instances of this happening were the result of the original extension writer trying to boost his bottom line (and not the work of shady malware marketers).

It's easy to convince yourself you have been cheated when you know your extension has 1M+ downloads and you've only had ~$100 sent to your donation button.


I forked an extension released under the MIT license because of this -- the original author was pushing shady updates.

After I re-released it on the Chrome Store, now I get emails trying to buy "my" extension from me.


That sounds like a fine grey-hat tactic. Fork existing extensions, generate several thousand fake installs. Then sell to the first bidder. Rinse & Repeat and you'll make a profit and turn the market for second hand extensions into a market for lemons.

If it happened enough the buyers would start being more cautious. The bar for minimal installs will go up and more proof will be expected. Soon enough the buyer's intentions will be clear to any extension owner.


I wouldn't be so optimistic.

> The bar for minimal installs will go up and more proof will be expected.

Or the Store would be flooded with infected cloned extensions that some idiots will install anyway.

> Soon enough the buyer's intentions will be clear to any extension owner.

That's not the problem: developers could trivially be informed of this risk when they submit their extension on the Store.


I have the feeling that every "good" message about Chrome is followed by a disastrous one. This time it was the funky "audio feature" on tabs, followed by this. Chrome has developed a scary creativity with good and evil stuff. I used Chrome once because everybody was telling me how fast it is. It wasn't; people just couldn't add up all the processes properly. Since then it has been a roller coaster I won't get on anymore. I use Opera as a reference alternative to FF.

btw: my parents end up with Chrome every time I come by to look at their Windows machine, installed through some update (Java, maybe?). Is there any way to block this without taking all installation rights away from them?


> my parents end up with Chrome every time I come by on the Windows. Installed through some update

My guess would be the tooltip-like thing that Google puts on its search page telling them to "make the web better with Chrome." I've never clicked on the thing, but it probably changes your default browser and installs a bunch of shortcuts.


I've been informed that Chrome extension advertising malware is a highly efficient and key source of money laundering. The market size is limited only by the number of Google Chrome users. This is just the tip of that iceberg.


I created an extension for Facebook that had 30k active users. I sold it after I was contacted like this. I was suspicious from the outset about who the buyer was, but I figured I'd sell anyway since it took me about an hour to write. Strangely, the app was deleted from the store within an hour of the sale and has yet to re-appear: https://chrome.google.com/webstore/detail/facebook-chat-fix/...


How about a little 'Stranger Danger' prevention as well?

Don't run extensions from sources you don't trust in a browser profile you need to trust.

I have one profile for development and one for browsing. The development browser can get any old extension, but the normal browser gets extensions only from known, trusted companies like Google and LastPass.


OT: The developers of Honey (a popular Chrome extension with ~700k users) were approached by malware companies and are doing an AMA on Reddit:

http://www.reddit.com/r/IAmA/comments/1vjj51/i_am_one_of_the...


The browser (or an extension if possible) should include a list of extensions sorted by when they were last added/updated.

Beyond helping determine why you're now seeing extra ads, this could help debug why certain sites are no longer working.


It's surprising there are still people foolish enough to believe Google cares about privacy when its business model depends on monetizing the intimate details of people's lives, traded away for services.


Obligatory note: Another example of why we should only run Free Software and only from trusted sources. Users of Trisquel or gNewSense never have problems like this.


Obligatory response that not everyone has time or training to examine code for problems. Google is a trusted source.


>Google is a trusted source

Hardly, if this kind of thing is allowed in their marketplace.


You're missing the point. Google and Apple are trusted sources in the real world. In the real world, people say things like "I have an Android" and advertisements say "Get a free Android with contract." If the distinction there is lost, I don't think you can expect the general consumer to know that Google isn't a "trusted source" by the definition of FSF advocates, right? Nor, I would contend, that they need to.

>Hardly, if this kind of thing is allowed in their marketplace.

They'll move on this just like they started actively patrolling the Play Store for malware. Because they are a trusted source, even if they aren't an FSF definition of trusted source.


Obligatory response that you're not relying solely on your own ability to examine code for problems, but the entire universe of those able to do so.

With proprietary code, you're restricted to audits done by those whom the code author has allowed to do so (or who have surreptitiously obtained the source ... and having done so, put themselves at legal risk by disclosing their findings).


All Chrome extensions are completely open source (lowercase - in the sense that you can unzip them and inspect the source).

The evidence seems to be that this does actually work - people do look, and do find suspicious-looking issues. However, that is insufficient to protect users of extensions that were previously trustworthy and then become malicious.


I think real, capital-letter Open Source is what matters here. In that case, anyone can fork and redistribute the good parts of any plugin. In the case here, someone owns the code and reserves an exclusive monopoly, and that's what gets bought out - which is much harder to revert or fork.


That is true, but the OP was referring to the ability to inspect code for security and/or privacy problems, so you can decide if you want to run it. Just getting the source code is adequate for that.


Indeed, but my original point, which he was replying to, was that true Free Software matters, because then the whole buy-out of plugins wouldn't even happen like this.


While on the subject of Chrome extensions, could someone explain why the React Dev Tools extension has Incognito mode?


Being the solution to the problems you create.


Auto-updating extensions is not a good idea.


Why use any extensions ever?


Because extensions can provide useful functionality that's not provided by the browser. For instance, ad blocking or developer tools.


Chromium has all the dev tools I need OOTB. Adblock? Perhaps one risk I'll take on extensions. What else? Nothing. Unnecessary shit for people who want to configure every detail but can't be bothered to hack it themselves with uzbl or one of its equivalents.


Or a browser for that matter.


Hey Google, knock it the fuck off.


There is an easy solution:

Follow http://superuser.com/questions/290280/how-to-download-chrome... in order to download the crx file manually.

Then unzip it and vet it manually to confirm it's clean.

Copy it into a folder, enable extension developer mode in Chrome, and install the local copy of the extension.

No autoupdate, everything's fine.

Unfortunately Google plans to disallow local extensions, which is a major disaster and very evil: http://thenextweb.com/google/2013/11/07/google-block-local-c...

What happened to that?


So the easy solution to any Chrome extension possibly getting sold to spam ads is to disable automatic updates and manually inspect and install every update for all of my extensions?


I mistakenly installed a Minecraft modloader for my son without checking it out first. It silently installed a couple of local Chrome extensions that injected ads in every page. It would reinstall them (again, silently) every time you deleted them. It wasn't detected by Microsoft Defender or Avast until I ran Malwarebytes which took care of the problem.

So, pardon my French, but no freaking way do I want any local Chrome extensions allowed by default anymore.

For extensions from the Chrome store, perhaps Chrome should make updates more like on Android, where you are notified and can click for more info.


If you are running arbitrary code on your box, then local installed Chrome extensions aren't the real problem now, are they?

That code could, I dunno, run a local HTTP/S proxy (install a trusted cert) and MiTM your HTTP requests and inject ads that way. Or about a million other things.

And it's funny Chrome is trying to prevent apps from doing that, when they themselves do the same thing: In Windows, pinning to the taskbar is supposed to be user-only. But Chrome circumvents that and pins anyways, actively avoiding user preference. (And they drop an icon on the desktop, without asking.)



