Fake ad blockers in Chrome Web Store (palant.de)
156 points by mark_edward on April 19, 2018 | 66 comments



This was bound to happen. As a Chrome extension developer, I would love to have a way to prove that an extension was created from a specific git commit hash with no modifications. As it is, even if you read over the extension's code on GitHub, there is no way to rule out that I added something malicious when I submitted it to the web store.
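
The closest approximation today is to rebuild from the claimed commit and compare hashes yourself. A minimal sketch of that check (the tag, build script and file names are hypothetical, and it only works if the build is byte-for-byte reproducible):

  # Sketch: compare a local rebuild against the package the store serves.
  # Assumes a deterministic build; tag, build script and paths are hypothetical.
  import hashlib, subprocess, sys

  def sha256(path):
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):
              h.update(chunk)
      return h.hexdigest()

  # Rebuild from the commit the developer claims to have published.
  subprocess.run(["git", "checkout", "v1.2.3"], check=True)
  subprocess.run(["./build.sh"], check=True)

  local = sha256("dist/extension.zip")            # what the repo produces
  published = sha256("downloaded/extension.zip")  # what the store actually served
  sys.exit(0 if local == published else "hashes differ: store package does not match the repo")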


This is the same problem that some package repositories like NPM have.

Everyone takes it for granted that the GitHub link on an npmjs.com project page matches the code you get when you `npm install`.

Both npmjs.com and the Chrome app store should have an in-line file browser to make it as easy to vet the code as possible.

I have a browser plugin[0] that lets you view a Chrome plugin's source code, but I've also noticed how many of them obfuscate their code. For example, the Super Netflix plugin is heavily obfuscated: https://chrome.google.com/webstore/detail/super-netflix/aioe...

I can certainly imagine why. It would take me a non-trivial amount of time to figure out how to interact with Netflix's client/server. I gave up when I couldn't even figure out how to programmatically pause/unpause the client. And that's now the plugin's secret sauce, especially in a hostile app store where everyone is chomping at the bit to clone your plugin.

I don't know of any easy solutions here.

Even large scale manual review like Apple's approach has issues with clones.

[0]: https://chrome.google.com/webstore/detail/chrome-extension-s...


I'm pretty sure I ran across malicious Chrome plugins being mentioned before.

Eclipse and PyPy have a problem with it too[0][1].

Apple with their walled garden also isn't a panacea. They let a very glitchy and broken version of Cuphead onto the App Store (so much for manual review and a flawless experience), and there is apparently a known repeat offender whose MO is to hastily port popular indie desktop games to iOS and sell them (maybe because Unity3D games are easy to port?)[2]. They also only banned the fake anti-viruses last year (the "speed your phone up by clicking this red button in this free app and paying us $10 to remove 1337 north korean backdoors, viruses and unneeded programs!1" clickbait kind, the one that targets low-tech people)[3].

I also wonder how far you could get with some really great game in a native executable for Windows on itch.io or Game Jolt (or maybe even on Steam - they are really bad at catching broken, bad, etc. games and have a very hands-off approach to everything on there) that was also packed with covert viruses/trojans. People download and run those very willy-nilly on their computers and don't keep themselves up to date (thanks to Microsoft making updates obnoxious and doing crap like installing Candy Crush for the 325254th time - I know I removed it from my laptop twice or thrice now). I also wonder if there is any security scanning on those sites to try to detect viruses in uploaded files (I wanted to try with EICAR, but it kept getting nuked by Windows Defender and I don't have the patience to try to make it leave my EICAR file alone).

[0] - https://eclipse.org/org/press-release/20170814_security_bull...

[1] - https://news.ycombinator.com/item?id=15256121

[2] - https://www.polygon.com/2017/12/18/16790052/cuphead-fake-ios...

[3] - https://www.theverge.com/2017/9/15/16314034/apple-developer-...


Parent post meant PyPI, not PyPy.


Yes, of course! I mistyped because the names are so close, and I can't edit it anymore either. There's of course nothing specific to PyPy (or CPython or any other implementation) about being vulnerable to crap code on PyPI being pulled in by pip or something.

I also meant Chrome extensions of course, not plugins, but plugins are so rare (except the default Flash) and sidelined (even chrome://plugins no longer works) that I very often call extensions plugins.


> Both npmjs.com and the Chrome app store should have an in-line file browser to make it as easy to vet the code as possible.

A diff against the linked repo, with an X or a check mark showing whether it differs in any way overall, would be awesome too.


Or just a way to, say, right-click on the extension after it's installed and see the hash. Then you can compare.
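
"The hash" would presumably be something like a deterministic digest over the unpacked extension directory - the fiddly part is agreeing on file order and which files count. A rough sketch (the extension path is hypothetical and platform-specific):

  # Sketch: a stable digest over an unpacked extension's files.
  # Order matters, so sort the paths; the path below is a hypothetical Linux default.
  import hashlib
  from pathlib import Path

  def dir_hash(root):
      root = Path(root).expanduser()
      h = hashlib.sha256()
      for p in sorted(root.rglob("*")):
          if p.is_file():
              h.update(str(p.relative_to(root)).encode())  # mix in the file name
              h.update(p.read_bytes())                      # and its contents
      return h.hexdigest()

  print(dir_hash("~/.config/google-chrome/Default/Extensions/EXTENSION_ID/VERSION"))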


This is exactly why ether contracts are so interesting. You can guarantee the code being executed is what you are looking at.


That's got nothing to do with ether. A simple hash of the code is just as effective (or ineffective) at proving the program is the same.

Unfortunately, in reality you quickly run into problems with different compiler versions, optimisation settings and other issues that make builds irreproducible. At least with Chrome extensions you can look at the JS source. With Ethereum, all you have to go on is the compiled bytecode.


Is there anything from a developer perspective that we could do now? I guess anything that's voluntary could be bypassed by a malicious party either way. Reproducible builds published on a blockchain? Then a Chrome plugin that only loads the blockchain builds?


Why would you need a blockchain for this? Any sort of signed reproducible build would work just fine.


Because these days everything needs a blockchain.


Legitimately, if you used e.g. Ethereum to compile your code, you could actually prove that a given executable was compiled from a given source. Otherwise the only way to verify a build is to build it yourself and check the hashes. Signing just verifies that the build came from a given person, not that they used the source code they say they did.

This was one of the interesting use cases I thought about when I dug into crypto.


The signature doesn't have to be from the developer; it could be a signature from Google attesting that the extension was compiled from / matches the specified source. If you're running Chrome and installing extensions from the Chrome Web Store, you're trusting Google already.

As evidenced by the article, the web store isn't perfectly trustworthy, but this kind of validation could be done automatically, and I do trust their ability to automate.


If Google compiled the extensions that would work. Seems like it would take a lot of standardization to make that possible. But in principle that would certainly be better than the current situation.


> e.g. Ethereum to compile your code...

but that's just a roundabout way of doing reproducible builds.


Reproducible builds that you can cryptographically verify without doing the building yourself.


How would this work on a blockchain? The best I could come up with is "check that a bunch of other nodes built it and came up with the same result", or "use a trusted execution environment (SGX or TZ) to build it".


One expensive way that would work would be to write a compiler in Solidity. There might be better ways involving breaking the code up and splitting it across compute resources, like on Golem. By the standards of common blockchain cryptographic security, I'm pretty skeptical of SGX and the like.


So even if you could build a perfectly efficient compiler that created no additional overhead: if the normal compilation took about a second or two (or any longer) to run locally, then you would run waaay over the gas limit. That also means such a thing would cost way more than 10 dollars to run.


Yeah, super expensive for sure. But in principle possible - I'm sure there are smarter ways to do it. I could also see some users willing to pay ~$1K or more. A big company committed to high-security, open source firmware for their routers, for example, could benefit a lot by being able to demonstrate to their customers that they use a given (ideally highly readable for auditing purposes) source code and that the updates they receive really use that source. I don't think there's a means to enable that sort of cryptographic verification currently without every interested end user rebuilding the source.


One place to look up all hashes, with identities attached and transactions in order; the API is the same for all hashes no matter the store or the tech; it can work offline; if the site is down or under attack you can still check; it avoids putting load on the original site; and it's more resilient to attacks.

Having one homogeneous decentralized way to read this data is neat. You could make an entire package-management solution out of this. Even attach torrent magnet links for each lib in the chain, and make the whole store distributed.


A git repository or a torrent or any other distributed datastore would also accomplish this. A blockchain is a very bad, expensive distributed datastore that you would only use if you need the specific trust properties it provides.

It would be a fair point to say that if you don't trust any specific person or entity to verify that a given source compiles to the binary signed by the author, then maybe some sort of blockchain would be useful. I think you'd get a lot more bang for your buck by using trusted authorities in something closer to a CA model.


The advantage of the blockchain is that we already have well-tested clients and APIs, dealing with identity, logs and sync, with a huge number of nodes.

And you do need trust for distributing software.

You can't use git; it's hard to sync automatically. Torrent is a nice transport, not a DB, and can be used to sync blockchains anyway, so why make them exclusive? And using any other distributed DB would exclude the field-tested solution, proven to work on millions of wallets, and the API that is already well supported.


> Why would you need a blockchain for this?

Evil insiders.

There are two threat models:

1. Evil Foreign Spy infiltrates the Tor project and makes a backdoored, signed release.

Evil Foreign Government uses MITM or something similar to serve this backdoored version to one or two journalists they know aren't technical enough to inspect the source code themselves.

The journalists think "This will be safe, as it will have been code reviewed by people who know how to do that" but no such people ever saw the backdoored version.

Hence, you don't just need a signed build - you also need a globally-agreed list of builds, so the user (and the auto-update mechanism) can be sure everyone else in the world knows this release exists. If a build is ever repudiated, sound the alarm!

2. Evil Foreign Spy infiltrates the Tor project, and intentionally triggers the alarm in a way that doesn't lead back to them.

The alarm going off, and the fact that the infiltrator hasn't been found, means people stop trusting Tor - or they start ignoring the alarm, rendering it useless.

Hence, you don't just need a globally-agreed list of builds - you need an indestructible, globally-agreed copy of each build's source code and who changed each line.

Now granted, a blockchain isn't the only way to achieve these things - indeed, it'd cost a fortune to put the entire Tor source code into the Bitcoin blockchain - but you need some similar mechanism.
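
To make the "globally-agreed list of builds" idea concrete: before trusting an update, check that its hash appears in a log everyone else can see too. A toy sketch (the log URL and format are made up; a real transparency log would hand you an inclusion proof rather than a flat list):

  # Toy sketch of the "globally-agreed list of builds" check.
  # The log URL and format are hypothetical; a real transparency log
  # would also prove it is append-only (Merkle tree inclusion proofs).
  import hashlib, urllib.request

  def sha256_file(path):
      with open(path, "rb") as f:
          return hashlib.sha256(f.read()).hexdigest()

  def is_known_build(package_path, log_url):
      digest = sha256_file(package_path)
      with urllib.request.urlopen(log_url) as resp:
          known = {line.strip() for line in resp.read().decode().splitlines()}
      return digest in known

  if not is_known_build("tor-browser.tar.xz", "https://example.org/build-log.txt"):
      raise SystemExit("This build is not in the public log - sound the alarm!")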


If you have a malicious government with private keys MITM-ing your connection, how can you trust the list of builds? Couldn't they have faked consensus for that as well?


Because Moar Blockchain. Blockchain all the things!

... sorry had to post this. Quite tired of Blockchain.


Unfortunately I think the best option currently is to download and build open-source extensions yourself and add them in developer mode, but Chrome will bother you about running developer extensions every time you start the browser so that's not ideal.

My own strategy is just to limit myself to a small number of extensions from established/trusted entities, but it pains me because I publish a couple of extensions and I am not an established entity.


You just need integrity hashes and public key signatures, not a blockchain.
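
In practice that boils down to "check the digest, check the signature". A minimal sketch (file names are hypothetical; it assumes the publisher's GPG key is already in your keyring and trusted):

  # Sketch: integrity hash plus publisher signature, no blockchain required.
  # Usage: python verify.py extension.zip <sha256 the developer published>
  # Assumes the signer's GPG key is already trusted locally.
  import hashlib, subprocess, sys

  package, expected = sys.argv[1], sys.argv[2]

  with open(package, "rb") as f:
      actual = hashlib.sha256(f.read()).hexdigest()
  if actual != expected:
      raise SystemExit("integrity hash mismatch")

  # gpg exits non-zero if the detached signature does not verify
  subprocess.run(["gpg", "--verify", package + ".sig", package], check=True)
  print("hash and signature both check out")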


Blockchain + IPFS: sign an IPFS address with some kind of blockchain address. Though this still ends up with all the issues of verifying identity that you have with normal certificate authorities.


So it turns out that granting poorly-verified extensions permission to literally take over your browser, on the strength of easily gamed reviews, is a bad idea?

The worst of the spyware / download guys are now the best of the "legit" extension affiliates.

I don't get why Google's verification team is asleep at the wheel on this. I feel the exact same way about Amazon and their counterfeiters abusing the shit out of fake reviews and buy counts.


Umm... maybe they don’t want people using ad blockers. So having a malicious minefield may inhibit users from installing them. Maybe...


Not impossible. I'll take this opportunity to remind people of the ad blocker that Google censored from their web store: https://adnauseam.io/

This thing clicks ads for you so that parasites can't use data for anything meaningful. Check it out!


Surely all that does is make Google more money?


But it dilutes the value of ads in the long run by giving 0 conversion (or whatever the appropriate stupid term is). They took their shitty stand by blocking this ad blocker specifically; surely there's something about it that others don't have.

It could really be 'malware' of course, but my money's on the other possibility.


If it did profit them in the long run, they would not block it.


The only upside to a walled garden is that this type of stuff isn't supposed to get through. If it does, the store only serves to give users a false sense of validity.


As a developer of an extension I can attest to the fact that 'review' takes less than 1 minute when you upload a new version. I think it's safe to say that there is no upside to the bs walled garden.


It's really hard to overstate just how easy it is to exploit the garden. The extension API itself limits the system-wide mischief you can get up to, but if you have an extension with 100k+ users (either because you created it or you bought it from its original developer), it's extremely easy to slip something malicious in there, and you have a lot of data at your fingertips to sell.

I wish there at least seemed to be some degree of review or reasonable sandboxing here. The closest they come is disabling eval-style behavior in 'background' scripts, but there's nothing stopping you from running command & control scripts from a remote origin in a non-privileged context and then getting up to your evil mischief anyway. Or injecting malicious code directly into gmail tabs.


There is an automated validation process and occasionally (very rarely) your update will get flagged for manual review. This will prevent you from uploading known malware samples and supposedly some other suspicious stuff as well. This might be good enough to prevent extensions from exploiting Chrome vulnerabilities. But spying on users or injecting ads into webpages? Even if the validation attempts to detect this kind of behavior, gaming it would be trivial.


You have to wonder how exactly these services promise to 'review' the apps posted there and seemingly have a ton of guidelines for what's allowed and what isn't...

Then just completely ignore said guidelines while not doing anything close to a review. Seriously, can none of these rules be checked at all?

https://developer.chrome.com/webstore/program_policies?csw=1...

It's no better on the iOS App Store, Play Store, Steam, Windows Marketplace or anything else of a similar kind. Poor-quality, scammy apps and programs seem to waltz right through 'quality control' like it's non-existent.

Honestly, at this point the best 'walled garden' marketplaces would probably be the fan game and mod equivalents. At least on the likes of MFGG and SMW Central you have human moderators physically test every single submission, then give detailed feedback on every single aspect of said submission (down to actual game design and mechanical implementations). Plus a perma-ban system for anyone continually trying to submit crap.

Makes me wonder what the Chrome Store or the Play Store or Steam would be like if they did that. Probably better, and with less questionable extensions like this in it.


> Makes me wonder what the Chrome Store or the Play Store or Steam would be like if they did that.

You clearly don't remember when Valve required contractual agreements outside of the Steam store to get sold on Steam. Greenlight and its later incarnations are all public options for selling on the digital distribution store.

Most games sold then were by bona fide studios who had to reach out to Valve's sales contacts, not children operating out of their parents' bedrooms making mods or "games" in Unity.

The answer is that doing this doesn't make _money_.


There's a reckoning coming here unless Google gets around to actually getting extension permissions under control. As currently constructed, there's not much stopping a malicious extension from getting at all sorts of important data stored in your Gmail or Dropbox tabs - just effort. Whenever I scan the featured extensions on the CWS home page, upwards of 25% of them request the permission to access websites from any origin, which means they can inject script in there and basically do anything they want. Part of this is convenience (it's an enormous pain to add permission requests to your extension after first install) and part of it is bad API design that makes it necessary to bake broad permission grants into an extension for it to do simple things.
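
You can see this for yourself: the grants are sitting right there in each installed extension's manifest.json. A quick scan along these lines (the profile path is hypothetical - it's the Linux default and differs per platform) shows how many of your extensions can read every site you visit:

  # Sketch: flag installed extensions that can run on every origin.
  # The profile path is a hypothetical Linux default; adjust per platform.
  import json
  from pathlib import Path

  BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}
  ext_root = Path("~/.config/google-chrome/Default/Extensions").expanduser()

  for manifest in ext_root.glob("*/*/manifest.json"):
      data = json.loads(manifest.read_text(encoding="utf-8-sig", errors="ignore"))
      grants = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
      if grants & BROAD:
          print(manifest.parent.parent.name, "can access all sites:", grants & BROAD)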

At some point the people behind sneaking ads or tracking into extensions are going to pivot into higher-value scams, like harvesting account credentials or important data.

As Chrome is constructed now, there's almost nothing stopping any extension with the 'Read and change all your data on the websites you visit' permission from stealing literally any piece of data that moves through your browser if the person in control of it is determined enough. With the push towards running all your apps in the browser for "sandboxing", the risk this poses keeps going up. Companies use Slack, Gmail, etc to collaborate and all of those things are built to run in a browser tab (even if they have native apps) - and basically any extension a user installs has the potential to silently exfiltrate sensitive information, disguised as regular user traffic. Worse still, if the user signs into their Google account, the malicious extension can be synced to other machines. "Don't install stuff on your work PC" is pretty easy to understand, but "don't sign in to Google" is a bit harder of a policy to enforce, especially with the fuzzy boundary between Google-the-platform, Google-the-website, and Google-the-browser, all of which use the same login flow.

Native and mobile app development spaces have solutions for most of these issues already via sandboxes and permissions (though there remains work to be done), and these threats are non-existent when dealing with regular websites and web apps. Extensions need a lot more scrutiny due to just how much of a threat they pose.


> With the push towards running all your apps in the browser for "sandboxing", the risk this poses keeps going up.

Sandboxing cannot be relied upon as a primary security feature; at best it's an additional roadblock that provides defense in depth. Isolating potentially malicious code in a sandbox is useless if you also run the rest of your software in that same sandbox.

The browser sandbox was useful for isolating transient JavaScript to the current page/window. Your primary apps and always-running utilities were protected because they were outside the sandbox.


You can configure Chrome to run each site in a different sandbox: https://www.chromium.org/Home/chromium-security/site-isolati...

The problem here isn't the shared sandbox, though, but that an adblocker needs access to every site to block their ads.


Yeah, precisely. I guess the core of my point is that we've moved many apps from running natively with user privs (with access to all the user's files, for good and for ill) to sandboxed websites - great! - meanwhile every chrome extension basically has admin privileges over those apps, and we're not really treating that with the caution we should.


One thing they need to watch out for is making it impossible for me (or people I trust) to do whatever I want with my browser. Maybe permissions could apply only to stuff from the Chrome Web Store, but not be applied to my own extensions or extensions I download myself?


It doesn't sound like the ad blockers are fake, only that they exploit their users. Facebook does this too - it's an ad company masquerading as a social network /jokes. Remember when Grindr was giving its advertisers the users' HIV status? That was certainly very wrong, but is it any different from an ad blocker that tracks its users?


This is what happens when you don't have a strict policy w.r.t. what apps you allow in an app store. I like to think Apple would disallow these copied apps for not being unique / being based off a template (there are App Store rules against template-based apps, iirc).


We've come full circle, Chrome has become IE6 it seems.


I've already seen the bomb go off with an extension.

A top-rated Chrome extension was inserting porn ads into YouTube's companion renderer element. Google removed it after I tweeted about it, though.


I am very upset that I was unable to publish a very simple Sheets add-on (the review process resembles academic peer review), and yet all of a sudden the store is full of published malware.


There seem to be certain things that can push you into a very aggressive Chrome Web Store review process. Under most circumstances there is no review whatsoever; you're just waiting for your extension .crx file to get pushed to their CDN (and maybe scanned automatically for trivial classes of exploits?)

If you get hit with a DMCA claim (fraudulent or otherwise) that'll put a manual review flag on your account for a while. You can tell because pushing an update or new extension takes over a day vs the standard ~20-60 minutes. The manual review can and will just reject you for no reason without much explanation, but it seems to be easy to just bypass the review.


The 60-minute delay is completely artificial; it is only there in case you change your mind. If you use the API, extensions are published immediately. Whatever automated scanning takes place there, it's very quick.


I refuse to use any extensions due to this exact reason.

Good thing I like ads.


If I were a bad guy, the next thing I'd try would be to diversify my repertoire of popular apps as much as possible. You can't be so popular that you get their attention, but you still want to be popular enough that the net gain is positive and you can continue. I don't know how it would be done, but that's the direction people would be going in.


Why not have Google provide an option to notify users of pulled store content and optionally remove it for users?


Chrome's current extension update model isn't well-suited to addressing problems like this. First, updates don't happen very often - in my experience, once or twice a day at most. So even if a malicious extension is caught quickly, users will likely remain exposed for as long as 48 hours. If an extension is pulled from the store (DMCA or otherwise), presently Chrome does not warn you or remove it, it just quietly stops getting updates. If Google starts doing remote disable/remove for extensions, there would be some angst over this and the question of when it will get used (court orders from a foreign country? etc.)


The only Chrome installs allowed on my network have all extensions disabled by policy. I strongly recommend anyone who uses Chrome look at outright disabling the functionality, it's used for more harm than good these days.


Google removing the extensions is good, but couldn't legal force be used against this as well? Find the publisher and jail them. They are clearly committing illegal computer access against the users of the plugin.


Good luck trying to get the publisher's name from Google. Google knows which credit card was used to pay for the publisher's account, but they won't tell anybody - and I doubt that they are interested in going after the publisher themselves.



Does anyone have a link on the tech side of how it works? I've read somewhere (don't remember where) that they used an image to inject code that is executed in the browser. Not that I want to develop a botnet, I'm just curious to know how it works.


Yes, the original blog post has a more technical description: https://blog.adguard.com/en/over-20-000-000-of-chrome-users-...


But hey! The extension files are X509-signed and served over HTTPS with a green location bar from a known-good domain, so we all know it's secure, right?

Oh wait. I guess forcing everyone to use HTTPS and signing everywhere means HTTPS and signing can no longer be used to distinguish serious actors from even plain malware vendors.

Thanks Google.


It could never be used for that purpose. It only allowed you to easily distinguish between amateur malware and slightly-serious malware.



