Hacker News
Browser extensions are underrated: the promise of hackable software (geoffreylitt.com)
320 points by gklitt on July 29, 2019 | 186 comments



> The modern browser extension API has done a good job balancing extensibility with security

No, it hasn't. Almost every single extension I install tells me some variant of "This extension can intercept and modify all of your browsing traffic". That's not "well balanced", it's completely broken. This is clearly happening even with extensions by well-intentioned people who do not need those permissions. I can't help but cynically interpret the current situation as intentional on Google's part because having the security model be "trust Google to vet the extensions" happens to centralise all the power with them. If you can't trust an extension from the wild then they might as well not exist, right?

People laughed Java out of the browser because it took 500ms to start, but at least it had an actual security model.


+1000

Extensions should be able to have their permissions limited by domain (e.g. to customize YouTube or Reddit) at a minimum.

And I'd also really like a way to track both injected scripts and elements so that they wouldn't be able to make any HTTP requests without additional permissions, not even an <img src="..."> tag if the src isn't just a data URL or local extension resource.

E.g. I want to be able to install an extension that stops YouTube videos from playing as soon as I navigate to the page, without worrying my entire browsing history or worse is being sent to a third-party.
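That kind of extension is also a nice illustration of how little access it actually needs. A minimal sketch of such a content script (hypothetical, assuming it only ever gets injected on youtube.com) could be as small as:

    // content.js - runs only on youtube.com pages; needs no network access at all
    document.addEventListener('play', function (event) {
      if (event.target.tagName === 'VIDEO') {
        event.target.pause(); // stop playback the moment it starts
      }
    }, true); // 'play' does not bubble, so listen in the capture phase

Nothing in there needs to talk to any server, which is exactly why the blanket "read all your browsing data" permission feels so wrong for this kind of tool.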


> Extensions should be able to have their permissions limited by domain (e.g. to customize YouTube or Reddit) at a minimum.

They already can: extension authors can specify that their extension only operates on specific URLs.

The problem is that most extensions are designed to work on all web sites, so you have to choose between security and convenience.
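For illustration, restricting a content script to one site looks roughly like this in the extension's manifest (hypothetical extension name; Manifest V2 era):

    {
      "name": "youtube-tweaks",
      "version": "1.0",
      "manifest_version": 2,
      "content_scripts": [{
        "matches": ["*://*.youtube.com/*"],
        "js": ["content.js"]
      }]
    }

Declared like that, the install prompt should only warn about reading and changing data on youtube.com, not on all websites.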

Most users pick convenience and simply trust that the security side will be fine.

This model works pretty well overall since harmful extensions never take long before getting flagged by the community.


> This model works pretty well overall since harmful extensions never take long before getting flagged by the community.

No, this model works pretty well for stealthy extensions which take malicious actions without getting detected.


It also works well for normal useful extensions. We can figure out ways to eliminate stealthy malicious extensions without removing the most useful feature of extensions.


Nacho Analytics just shut down after slurping all full URLs from 2 million extension installs for over six months.


> Extensions should be able to have their permissions limited by domain (e.g. to customize YouTube or Reddit) at a minimum.

This seems like a good idea until you realize that the behavior you might want to modify is coming from a different domain loaded by the page and you have no control over how they set that up.


So the extension should ask for permission to a reasonable set of domains, and when you're installing it, you should get a clearly laid out permissions / privacy risk management worksheet to look at and agree to.


Many sites use dozens of domains. Some (probably most) are ads, tracking, and the like. But much of it is stuff needed to run the site. I don't think there is a reasonable set of defaults other than things not on an ad blocker blacklist. And asking the user to approve each domain on a page is too much -- how would they decide, and how would they know which one prevented the site from working properly?


What if that's fundamentally a flawed way of building things? Maybe your site should provide all of the assets it needs...


It's certainly flawed, but providing all of the assets creates a different class of problems. Both approaches have pros and cons, plus the original comment I replied to passed it off as a solution to the problem at hand, which it's not.


That would indeed be nice, but currently the domain list is hardcoded in the json file.


>should be able to have their permissions limited by domain

Does Chrome not support this? One more reason not to use it.

https://i.imgur.com/CJT8zsE.png


It does. For instance, here are the domains my reddit app is allowed access to: https://i.imgur.com/lRetX62.png

The non-reddit domains are for expando capability.


> Extensions should be able to have their permissions limited by domain (e.g. to customize YouTube or Reddit) at a minimum.

This would break adblockers, which are by far the most commonly used extensions.


The standard for plugins on Windows was for a long time "go to a shady website and download a DLL with arbitrary code, then drop it in the magic folder to run it with full permissions".


I just had a horrid flashback to ActiveX plugin installation wizards in Windows 98...


Extensions could just as easily modify executables you download, or source code.


>I can't help but cynically interpret the current situation as intentional on Google's part because having the security model be "trust Google to vet the extensions" happens to centralise all the power with them.

Isn't the situation the same with Mozilla? I thought that the reason was that most extensions do, in fact, need to be able to view and modify all your web browsing traffic. For example, my essential addons are uBlock origin, uMatrix, and Tree Tabs. Clearly the first needs to modify web pages, the second needs to intercept traffic, and the last needs the entire list of web pages I have open.

Can you give an example of an extension which requires permissions it shouldn't need?


I think there's a lot of room for finer-grained permissions. Why is it that removing (uBlock Origin), adding (Stylus), and modifying (Privacy Badger) elements all require the same permissions?


Because all three of those are, at the core, the very same thing? Whether removing, adding, or modifying, you're changing the contents of the HTML/CSS.

Whether it should be done in a more finely grained way, not sure, but if you have the permissions to modify then by definition you have the permission to remove or add content.


This is fear mongering. Every website could try 0-day exploits or drive-by downloads. Every app can abuse its permissions and track you/upload your photos/eavesdrop on your conversations. Every neighbor you have could spy on you through your windows/note when you come and go/follow you. Your grocery shop owner can poison your food. Etc etc. You can't blindly trust anything "from the wild" yet you can't really live without it. That's why we naturally do give trust by reputation, but at the same time remain vigilant.


>Every website could try 0-day exploits or drive-by downloads

That's an unfair comparison. Any piece of software could try and 0day you. The point is that in a permission-based system, the permissions for browser extensions are in practice far too permissive to the point of being broken.

>You can't blindly trust anything "from the wild" yet you can't really live without it.

The point about permissions is to provide granularity to trust. I may trust an app to use my camera without trusting it to track my location in the background.

>Every app can abuse its permissions and track you/upload your photos/eavesdrop on your conversations.

This is the best comparison - mobile phone permissions - and on this front browser extensions are far worse. Almost every single extension I install wants complete control over everything. In contrast, most apps only request a few permissions, as appropriate. Yes, there are those flashlight apps which request _every_ permission, and that is basically the norm for extensions.

It should be added that UXSS (which is essentially what a malicious extension gives you) amounts to an RCE in the browser, which in some cases is more valuable than a full RCE, e.g. it makes it easier to steal banking credentials.


We still have locks on doors.


Recently I started writing an extension which ran into a need for that 'everything' permission. IIRC it had to do with tab management. I imagine browsers could limit requests for tab access by URL patterns, so I'm not sure why they don't yet.


500ms to start, but like Flash, it would spend 20 seconds downloading before showing anything but a splash screen.


How it felt to me back in those days:

Java applet loading felt like 30 seconds of staring at a gray rectangle while your entire browser UI locked up, sometimes ending in "applet uninited".

Flash game loading was just a few seconds of a black rectangle, the browser UI did not lock up, then a custom loader, and it usually worked.


I believe many people should attempt to create their own web extension, even if they don't publish it.

In my younger years, I used to crack and hack software just for fun. Those were my SoftICE years. Later, when Opera was not yet Chromium-based, I also had several site customisations, since it was very easy to add my own JS and CSS to any web site.

Nowadays, I have 4 extensions created and tailored for my needs. One deals with cookies (mostly "delete everything" outside of my whitelist) and three add functionality to specific sites (automating, managing lists, hiding or highlighting content, etc). Building them was fun, though not as much fun as playing against "copy protections" long ago: like going from competitive chess to creative DIY.

The only pain with custom-made extensions is that Firefox is very reluctant to load them. I don't want to upload them to some Mozilla server, so I have to enter some cryptic "about:..." URL, then click and navigate to my extension, for every extension at every browser start. This is one of the main reasons I've been using Vivaldi more than Firefox these past months.


You can work around this by running Firefox Developer, which is able to disable Mozilla's signing requirements. That edition of the browser actually respects xpinstall.signatures.required, so just toggle that, zip your extension, rename to .xpi, and install. It'll warn you once, proceed, and persist through browser restarts just like any other extension.

Obviously that's no help if you want to distribute the extension (we have to jump through hoops at our organization to push out a manifest) but for personal use I find it to be a good workaround.



I make extensive use of the Tampermonkey (for JavaScript) and Stylus (for CSS) extensions. I probably don't have access to some of the browser's APIs, but it is easier to create and edit scripts with them.


Seconded. Both of those plugins are excellent for end-user improvements, because they let you simply add code to be run on a given site or set of sites. No nonsense with packaging and building and programming environments; you can just write the code you need in the browser, as you need it.


3rded. I use @updateURL and @version and save my scripts in Google Drive with a public URL. To update, I can just increment the version in the hosted copy, then reload Tampermonkey. Has the bonus of quick-and-dirty revision history.
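For anyone who hasn't seen it, the metadata block in question looks roughly like this (the name and file ID below are placeholders):

    // ==UserScript==
    // @name         my-site-tweak
    // @version      1.0.3
    // @match        https://example.com/*
    // @updateURL    https://drive.google.com/uc?id=FILE_ID&export=download
    // @downloadURL  https://drive.google.com/uc?id=FILE_ID&export=download
    // ==/UserScript==

    (function () {
      'use strict';
      // script body goes here; bump @version in the hosted copy to push an update
    })();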


The irony is that I ported two of my old user scripts to web extensions. With old Opera, userJS was native, while TamperMonkey is a (big) web extension. I had performance problems with TamperMonkey, so I rewrote my scripts to avoid this layer.

Pros of TamperMonkey/userJS:

- Allows live editing (with syntax highlighting) with immediate refresh, all in the browser. This is handy for minor changes on small files, but slow for big files.

- Direct access to some HTML5 APIs that are restricted in extensions. I don't recall if that was the case with TamperMonkey, but OperaJS allowed plain AJAX and WebStorage usage, just as if the file was included in the web page.

Cons:

- Slower than extensions.

- Less powerful (cannot add a button in a browser bar, etc)

- Painful as code grows. An extension allows simple splitting of code into several files. I found UserJS harder to maintain.

- Cannot load JS _and CSS_ like an extension does. Another extension like Stylus is needed.


I think this is a great idea. I maintain a personal extension as well. I've experimented with a few ideas for augmenting browser experience, but so far I'm mostly just hiding obtrusive elements on various websites (mostly by CSS, with some JS where sites obfuscate class names).

Even that would have taken multiple third-party extensions to accomplish - and probably would have required giving very broad permissions to them. Worth noting that Stylish, one of the extensions I might have used, was compromised with spyware a few years ago.

As for publishing, I don't publish mine because it forms a kind of personal fingerprint. The more I add to it the more personal it gets. I rather support the idea of everyone having their own custom extension. You can really improve your online quality of life and with the WebExtension API it's pretty painless.


Though Stylish was compromised, you can safely use the fork Stylus that was created in response: https://github.com/openstyles/stylus

I use Stylus to add custom CSS to sites and Violentmonkey (https://violentmonkey.github.io/) to add custom JS. They both make it fairly easy to start writing code for a new site. However, there is no easy way to set up both custom CSS and custom JS for a single site – a custom browser extension like you made could potentially support that better.


Is Stylish still compromised?


Aside from your cookie extension, the other 3 could probably have been done with userscripts. I often find a missing feature in a website that I add with userscripts. There are also user-created userscripts for several websites, like a webcomic reader that preloads pages from several webcomic websites so you don't have to wait on the slow ones.


I believe Firefox Dev edition allows you to disable the signing requirement[0].

I do wish the requirements around signing were less stringent, but I'm fine using Dev Edition as a daily driver for now.

[0]: https://support.mozilla.org/en-US/kb/add-on-signing-in-firef...


But then you're using the beta channel instead of the stable channel. It's great that you're fine with that, but not everyone should have to be.

At the absolute least, Mozilla should make Unbranded auto-update.


I've been using FF nightly - the most untested version, hot off of git - for two years and have yet to encounter a single bug which I could not reproduce in normal FF (mainly rare crashes on certain sites some of the time).


With all respect, I think that Mozilla has a lot more testers than just one person. If the beta channel (or nightly) was truly bug free, it would be turned into the release channel.

In other words, there is presumably a reason the release channel exists. As long as it does, the desire to use the release version alongside unsigned extensions is reasonable, and there should be a supported pathway.


Besides having to do it on every restart, that’s a good thing. It should be convoluted to do unsafe operations to protect the average user but allow the advanced user flexibility.


That would be a good argument if using only the approved channels protected users, but a lot of extensions contain malware that sells user data, including ones in the stores. https://www.inc.com/jason-aten/the-browser-extensions-you-us...


I’m the last person who is going to defend the security architecture of browser plugins.

But when Google tried to implement an ad-blocking architecture, similar to Safari's, that wouldn't allow third parties access to your browsing history, geeks were up in arms.


The ad blocking changes also prevented blocking the request to the ad servers, which is what people were upset about (or at least, why I was upset). I don't remember seeing anyone be upset about anything related to browsing history.


No one was upset about a plugin not getting access to your browsing history. People were upset about the declarative ad blocking that took away some of the features that ad blockers previously had, and said Google was doing it to protect their business.

I’m not going to defend Google’s overall business practices, but from what I understand, it’s the same type of architecture that Apple has had for four years and no one said Apple’s intentions were nefarious.


Some also claim that Google kills innovation. Synchronous interception of requests allows developers to program sophisticated rules to fight malicious resources. In the future, they will only have a regular expressions list to block domains. It will be much easier for malicious actors to bypass this feature.


I want whitelist-only. No blacklists, unless that's something to be applied after the whitelisting step.

So, no -- Google's method is not sufficient.


Sounds kinda like a firewall would serve you better. Not that I'm saying you should or shouldn't be able to have this feature.


You really want an extension that limits your ability to access only a certain number of websites?


It’d be interesting to have one sandbox that can synchronously intercept requests, and one sandbox that can make its own requests, but only a one way communication channel between them. So the interceptor part can have its own stateful logic and access the blacklist, but not exfiltrate your history.


It also allows developers to intercept your entire browsing history. If you care about your privacy, why would you let a random third party intercept all of your browsing history?


I consider everything to be a matter of risk, in terms of privacy for example. By installing an extension like uBlock, I am indeed taking the risk you mentioned. However, I consider that the risk would be much higher when installing an extension that is based only on a list of regular expressions, for the reason I mentioned. If uBlock did a bad thing, I'd know very quickly, and all I'd have to do is install an alternative extension.

I am also a developer. I would like to continue to have the right to code such extensions for myself.


So how many consumers would know if uBlock did something bad? Should we optimize security for the minority or the majority?


Why is it better to fix ad-blocking capabilities to a model that is borderline insufficient now, let alone in 10 years?

There is an arms race, and you want ad blockers (the good guys) to give up any improvements in perpetuity; that is a recipe for losing.

Installing uBlock for my parents has improved their browser experience and security. I'd rather risk uBlock being compromised and having to phone them to uninstall it than have them be at the mercy of adtech, spyware, and scam companies in 5 years.


How many incidents have we seen where the ad blockers are the bad guys? I don’t recall seeing any third party ads when browsing with Safari on iOS with 1Blocker that uses this architecture.


Edge for Android has that model. While trying it, I frequently got stopped by adblocker blockers. Firefox for Android runs uBlock Origin and never has that problem.


Google would remove uBlock from my PC before I even knew about it, and I would notice its absence very quickly (unfortunately).

A personal computer has always been very complicated and using it has always been a risk in itself. I remember a time when a virus could damage the computer's hardware, and someone who knew how to program in BASIC (edit: or LOGO) was not considered as a "minority".

My point of view is that we need to make users aware of the risks and make them more responsible because things will not get any easier.

No need to optimize, I switched to Firefox, the browser for the minorities ;). I really liked Chrome though.


> My point of view is that we need to make users aware of the risks and make them more responsible because things will not get any easier.

How has that been working for the last 30 years? Why would it start now?

Computers have been mainstream consumer appliances since the “multimedia PCs” were a thing in the mid 90s. Most people no more want to program computers than they want to fix their own cars.


With respect, Google is not Apple, and Apple is not Google.

I think a lot of people would say that Google is a pretty evil company in many respects, whereas Apple isn't exactly 100% saintly, but at least their profit and business goals align more closely with what is generally considered to be good for customers.


I'm definitely an Apple customer - our cell phone plan has 6 devices and we have two Apple TVs. But playing devil's advocate, Apple's business model means that only a small percentage of upper-income people globally can afford their products. Android and Google have done a lot more to bring computing to the masses than Apple. The cheapest iPhone that you can buy is $475, over $200 more than the average selling price of an Android phone.


iPhone 7 is $449 on Apple's website.


$26 less. That really makes it a lot more affordable compared to a $200 Moto G.


Google never did any such thing.

The proposed change that would have made ad blocking impossible still allowed non-blocking request interception; it would have had absolutely zero impact on people trying to sell your browsing data via an extension, and would have meant a huge net increase in browsing data being sold via website-embedded trackers.


Not sure if you're talking about the recent Chrome drama, but if so, "geeks were up in arms" because the new architecture essentially neutered ad blockers through the limits imposed on the block lists.


That’s worked well for Safari for four years.


It's worked well on safari because most of its users are people who don't care about those limits. Traditionally safari has been a relatively closed ecosystem (I haven't used it in a long time but I remember an entire lack of support for extensions at one point), so the people who would care about these changes never used it to begin with.

So now, when safari comes in with these changes that only add value compared to their previous offering (which again was substantially behind the competitors), users respond positively because the only users that remain don't know the difference between blocking the rendering of an ad and blocking the request to the ad server. So, when they hear "Ads are blocked", they don't understand the nuances that reveal to you that ads are not really blocked at all from a privacy perspective.

The reason Chrome didn't have a similar response is because the people who cared about these changes were already using chrome. So, when chrome announced an update that removes these privacy-protecting features, the users were knowledgeable enough to realize what the changes actually meant from a privacy perspective, and so responded poorly.

And even if the users were the same, safari added a feature (you can now kind of block ads in safari, compared to the zero adblocking you could do before), whereas chrome is removing a feature (you can no longer block requests to ad servers, something you could do for years). So of course the reaction to safari will be at worst lukewarm, because compared to the previous editions of safari it was an improvement.


Safari had plugins from day one, as far as I know. Here is an early plug-in from 2010 ( https://www.cultofmac.com/47232/macheist-tweaks-gruber-with-...)

But speaking of “closed”, where is the ad blocking extension for Chrome on Android and embedded web views?


It's nowhere, which is the point. I bet if they were to add Manifest V3 (the thing people are upset about) to Chrome for Android, it would be received much more positively than this.


This is what I meant. The content blocking framework that works with both Safari on iOS and the newer web view has been around for four years.


Adds friction. Ideally, user-made extensions should be more trusted and easier to work with than Internet-sourced ones. I can understand that in practice, if a user can make something, the user can also be convinced by an attacker to pwn themselves through it, but we ought to accept some level of that risk.


Why is adding friction for advanced cases a bad thing? How is the browser supposed to know whether the user wrote the extension or downloaded it from the internet?

We already see what happens when users download extensions and toolbars willy-nilly.


Depends on your goal.

If you want the computer to be "bicycle for the mind", you want to reduce friction so that "advanced" use isn't really "advanced", but normal. See also Hypercard, or how people use Excel in offices, or secretaries that extended Emacs because they didn't know writing Lisp was "programming", or countless other stories of end-user improvements.

If you want the computer to be a digital television set, or a digital collection of appliances, then sure - let's lock everything down, so that you can only do what you're allowed to by the vendors, and only through means allowed by the vendors. This is the scenario in which you want to add friction to end-user "advanced" use.

> How is the browser supposed to know whether the user wrote the extension or downloaded it from the internet?

It cannot be done in general - if a user can do something manually, a sophisticated piece of malware running outside of the browser can simulate it too. But I think there are ways to add warnings without increasing friction. Having to manually re-enable each and every user-created extension on browser restart is IMO way too much friction. Being shown a warning about those extensions on each browser restart, but keeping them running, sounds more reasonable.


Computers stopped being for enthusiasts by the early 90s - about the last time HyperCard was updated, coincidentally. I remember the change well; it was around that time that the back of InCider magazine stopped including code listings for interesting assembly language extensions for Basic and started listing "power users" tips.

Yes, I agree that having to re-enable extensions every time is too much, but going through contortions once isn’t.


I feel like having a big scary warning would be sufficient instead of making it inconvenient. It's easy for a competent user to ignore a warning that they fully understand while it scares off those that are clueless.


The issue isn't about having a sufficiently scary warning. It is that the browser has to store the fact that the user has agreed to this warning somewhere (ie, presumably in the user's profile). That means any other software running on the computer with regular user permissions can make the same modification to the profile and then install an unsigned extension without the user's consent.

Typically, when Mozilla finds out about software installing extensions without user consent, the extension is added to the blocklist, but if the extension is unsigned it can just claim that it is ublock origin or adblock plus or some popular extension, leaving no practical way to block it.

This is described in greater detail at https://blog.mozilla.org/addons/2015/04/15/the-case-for-exte...

(in full disclosure, I am a Mozilla employee)


Ah thanks, that's useful context. I was responding primarily to the claim that convolutedness is necessary to protect unsophisticated users, which I interpreted as meaning that the complexity of the UI scares users off. But this is a separate technical limitation that makes a lot of sense.


It's also easy to ignore a warning you don't understand. I know from helping family they don't particularly care what warnings say - they have work to do, and they do know clicking OK makes it go away.


People ignore warnings - they have since the dawn of personal computers. You remember Vista UAC?

Why optimize for the 1% instead of the 99%?


UAC was simultaneously seemingly ubiquitous and not a particularly "scary" warning. It just sounded like bland computer jargon to the uninformed. I was envisioning something like the https warnings that Chrome gives: a clear warning sign that uses emotionally loaded terms like "go back to safety" and hides the proceed link under the fold.

That being said, it's entirely possible that I'm overestimating the intelligence of the average user and that many of them wouldn't blink even in the face of a warning made up to look scary.


I've been on both sides of the fence on this one, and agree with both arguments. Perhaps FF could add (if it doesn't have it already) an about:config setting that allows reloads, perhaps from a specific folder or folders.


It’s really not that hard to have a setting that allows an extension to run every time and hash the extension. If the hash changes, it’s disabled until you re-enable it.
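Not how any browser actually implements it, but as a sketch of the idea, the "has this unpacked extension changed since I approved it?" check could be as simple as hashing the directory (Node; run it as "node hash-extension.js ./my-extension"):

    // hash-extension.js - walk an unpacked extension and compute a single digest
    const crypto = require('crypto');
    const fs = require('fs');
    const path = require('path');

    function hashDir(dir, hash) {
      for (const name of fs.readdirSync(dir).sort()) {
        const p = path.join(dir, name);
        if (fs.statSync(p).isDirectory()) {
          hashDir(p, hash);                  // recurse into subdirectories
        } else {
          hash.update(name);                 // include file names so renames are caught
          hash.update(fs.readFileSync(p));
        }
      }
      return hash;
    }

    console.log(hashDir(process.argv[2], crypto.createHash('sha256')).digest('hex'));

If the stored digest no longer matches, disable the extension until the user re-enables it.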


As another anecdote, I made a couple of Chrome extensions years ago to improve my development workflow:

- One would switch to an existing tab with the same URL, instead of opening a duplicate (see the sketch after this list). This made it easier to click URLs in error/debug messages without making the browser unmanageable (IIRC this was just a copy/paste of someone else's extension, which didn't work without me fiddling with it)

- One would add keybindings to the output of Drupal integration tests. These tests would say things like 'visit X', 'click the Y button', 'enter Z in the form', etc. and would save each page, so if a test failed each step could be viewed in a browser. The only problem was that this was tedious, so I made an extension which bound the left/right cursor keys to stepping through these pages. Made it much easier to skip through the setup and get to the bug (e.g. adding items to cart, when the problem is in checkout).
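The first of those is a nice example of how small such an extension can be. A rough sketch of the switch-to-existing-tab behaviour (background script, assuming the "tabs" permission; a sketch under those assumptions, not the original extension's code):

    // background.js - focus an existing tab with the given URL instead of opening a new one
    function openOrFocus(url) {
      chrome.tabs.query({ url: url }, function (tabs) {
        if (tabs.length > 0) {
          chrome.tabs.update(tabs[0].id, { active: true });            // switch to the tab
          chrome.windows.update(tabs[0].windowId, { focused: true });  // and raise its window
        } else {
          chrome.tabs.create({ url: url });                            // otherwise open a new tab
        }
      });
    }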


> One that deals with cookies (mostly "delete everything" outside of my white list)

That sounds a lot like the "Cookie AutoDelete" FF extension.


Browser extensions are also really important for accessibility. People with many kinds of disabilities use extensions to make websites more readable, easier to navigate, or more accessible in other ways.

Unfortunately, the big mobile browsers do not support extensions, which is a huge blow to accessibility. I think Firefox for Android is the only mainstream-ish browser that supports extensions. Apple prevents them from doing the same on iOS because it would be considered "an app store within an app", which is forbidden.

The only thing Apple allows is action and share extensions, which have to be manually activated on every single page (2-3 taps to do so — which is super user-unfriendly, esp. for PWD). It's great that Apple does a lot for accessibility in general, but I really wish they would open things up a bit more so that users could customize the iOS experience to make it more accessible.

As a dev, I would be more than happy to have my code scrutinized even further in order to ensure that what we're doing doesn't create security, privacy, or performance issues. We'd just like to make our accessibility software as useful for folks on mobile as it is on desktop!


Which specific extensions work well? My wife is visually impaired and she has tried several extensions. All of them have caused more problems than they have solved. In addition, IE's and Firefox's attempts to change behavior when using Windows high contrast mode also break many sites. Safari is the browser that works the most reliably.


Well, I'm the founder of BeeLine Reader[1], and our extension is used for speed reading as well as accessibility (vision impairment, dyslexia, ADHD). I don't think it breaks websites, since we let the user decide how aggressively it should try to run. There are also night mode extensions, as well as site-specific ones (like for wikipedia) that are great.

1: http://www.beelinereader.com


"Dark Background and Light Text" for Firefox has worked for me. Some necessary background images are still invisible, though I chalk those up to bad implementions.


Extensions can be uninstalled, revoked, or disabled at will. You can't really bend BigTech to do your bidding, and that trumps whatever the security argument brings to the table, imo. Extensions should be done in a security-friendly way [0], and not the other way around of making software secure by disabling all extensibility [1].

Take the example of the Android ecosystem: if plugins were allowed for apps, I'm pretty sure there'd be a better story around privacy today. An astonishing 40% of connections from Oppo/Vivo or Xiaomi phones are to ad networks and trackers. And there's nothing you can do (without root) except firewall it (apps have started working around Pi-hole-esque setups). XposedMod has brought plugin-based development to Android [2], but it is niche and requires not just root, but replacing key framework components. Using it might still be worth it, though, given the relentlessness of OEMs and carriers.

And that's just sad.

[0] One way to tackle the problem of developers selling away rights to their extensions is to make it legally binding to publicly declare whenever ownership changes hands. Disable the extension across all installs, and let users re-enable it after the fact has been made obvious to them.

[1] https://www.eff.org/deeplinks/2019/06/adversarial-interopera...

[2] https://www.xda-developers.com/best-xposed-modules/


> If plugins were allowed for apps, pretty sure there'd be a better story around privacy today.

Honest question: can you expand on how this would work, please?

If anything, extensions (as in Chrome extensions) are something I try to avoid as much as possible: giving access to all of my data to a third-party extension that promises to increase my privacy, but that I need to trust 100% with complete access, is less than ideal.


> Honest question: can you expand on how this would work, please?

Such a thing is already possible today. Some require root, some require breaking PlayStore's terms of use.

One such example is: XPrivacyLua [0] by the creator of NetGuard. It helps fake location data, hide contacts and calendar, fake device-id, IMEI, MAC addresses etc on a per-app basis.

Another example is how VPN in Android [1][2] is widely used to block trackers and ads.

A third example would be how the accessibility service APIs are (ab)used to temporarily grant permissions to apps [3].

A fourth would be reverse engineering tools like Frida [4] that help with inspecting apps, and even changing their behaviour.

A fifth is repackaging APKs with advertisement and tracking code removed, like with YouTube [5].

I am attempting to build an app with most of these features combined into one; let's see how far I get. My aim is probably to build something as close as possible to uMatrix/uBlock Origin but without requiring root.

---

[0] https://github.com/M66B/XPrivacyLua/blob/master/README.md

[1] https://github.com/M66B/NetGuard

[2] https://github.com/blokadaorg/blokada

[3] Sam Ruston's Bouncer app: https://samruston.co.uk/

[4] https://securitygrind.com/bypassing-android-ssl-pinning-with...

[5] https://youtubevanced.com/


I just want to add that Android has a barely documented feature "Resource Overlay", designed to allow OEMs to customize stuff. [0] is a quick tutorial, [1] a bit longer introduction. The focus on configuration instead of code should make security a bit easier. Obviously they made it near impossible for even more advanced users to use, so opening that up a bit would be a good first step...

[0] https://code.tutsplus.com/tutorials/quick-tip-theme-android-... [1] https://developer.sony.com/posts/sony-contributes-runtime-re...


None of these answer my question though.

Repackaging, leveraging a security flaw to use xposed, etc, all of these add more vectors that can compromise your data.

You need to have complete trust in the person that wrote these, way more trust than just in the creators of an app that can just use the permissions you give them :/


I agree. My point was the security risk is worth it if extensibility is achieved. I cited the example of browsers and content blockers.

There are many intrusive permissions that are already major privacy and security risks -- launchers, SMS apps, VPNs, and even alarm clocks that mine location data. The playing field isn't level, right now, to counter this intrusion.

One way to affect what other apps do, without root, is to route the traffic via VPN and firewall it as appropriate. That's possible only when a user enables a VPN to do so. Similarly, the plugins could also require a user to explicitly grant or deny permission for them to work. This is enough of a security measure, as it's on par with the current system in Android (regardless of its notoriety).

> None of these answer my question though.

Maybe I understood you wrong. I hope I made my point clear to you above?


Sometimes I like to inspect the code of existing addons to look for malicious code. My first sweep is regexing through the code for any URLs, which means checking for (https?://), and the next step is looking for any code that is deliberately obfuscated.
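A rough sketch of that first sweep, as a small Node script pointed at an unpacked extension directory (the paths and the file-type filter are just placeholders):

    // scan-urls.js - list every absolute URL mentioned in an unpacked extension
    const fs = require('fs');
    const path = require('path');

    const URL_RE = /https?:\/\/[^\s"'<>)]+/g;

    function scan(dir) {
      for (const name of fs.readdirSync(dir)) {
        const p = path.join(dir, name);
        if (fs.statSync(p).isDirectory()) { scan(p); continue; }
        if (!/\.(js|json|html|css)$/.test(name)) continue;   // only look at text assets
        const matches = fs.readFileSync(p, 'utf8').match(URL_RE) || [];
        for (const url of matches) console.log(p + ': ' + url);
      }
    }

    scan(process.argv[2] || '.');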


I built http://sublim.nl/crxviewer/ to do exactly that -- load in an extension and view the source, including doing a search through all files.


Extensions are awesome but I think this article is a bit too optimistic. I mean I share the optimism but in practice a major challenge is the platform.

Chrome for example has a ton of limitations:

https://getpolarized.io/2019/04/05/Google-Will-Kill-Chrome-E...

If you want to do anything significant you have to get their 'permission' and at that point they throttle your extension release updates.

You can't just push an update immediately that gets sent out. They take a week to approve your extension.

This might sound reasonable until you realize that a week is an eternity for a continuous development shop. That might as well be a year.

ESPECIALLY if something is broken.

Imagine if you had a bug that destroys data and you need to rush out a fix. Nope.. You need to wait one week for that to go out.


"This might sound reasonable until you realize that a week is an eternity for a continuous development shop. That might as well be a year.

ESPECIALLY if something is broken."

I get what you are saying, but this is more of an artifact of people being used to an unregulated platform-as-a-service and the - being blunt here - terrible quality practices that are rampant there. If a week is an "eternity", then try internally iterating a few times and shoring up that test coverage, if that extremely mild release cadence constraint is more than you can bear. Browser extensions are a risk to security, authenticity of content, a new avenue for phishing (look-alikes in low-volume/poorly curated areas), and browser stability.

It isn't Chrome's fault if someone pushes a bug that destroys data, it's the development team's fault. Their responsibility is to the user of Chrome.


Personal anecdote here, I have a Chrome extension with a few thousand users that has never been subject to this wait. That seems to be because I have a narrowly defined scope for my extension and therefore the permissions I request are relatively innocuous.

That exact article you linked was posted to HN a few months back. I remember someone dug into it and found the permissions it requested. The full list was rather broad and as a result the extension could have basically hijacked the entire browsing experience. A malicious extension with those permissions would have been a potential goldmine. I think it is perfectly reasonable for Google to want to review extensions like that.


> This might sound reasonable until you realize that a week is an eternity for a continuous development shop. That might as well be a year.

Not so long ago, people distributed software on physical media. Time to update was counted in months to years. This had a nice side benefit of people not being able to "test in production"; software either worked mostly well, or it didn't sell. That wasn't a bad thing, because it forced companies to do actual QA - something that today is increasingly being pawned off to end-users, with help of deeply invasive telemetry.


The same applies to my own fork of SingleFile, called SingleFileZ. I can publish any update of SingleFile in 1 hour. For SingleFileZ, which requires fewer permissions, I have to wait 7 days for exactly the same update. That's very demotivating.


I experience the same one week delay with an extension. The publishing delay is always about a week, and that is highly unusual.

They state that the extension update is under compliance review, which may take several business days, though the consistently similar approval times indicate that this is actually an arbitrary publishing delay and that no human review actually takes place in every case; otherwise there would be more variation between approval times.


It's always a week, and it's applied even if you only change an image. They don't need to audit an image.

I'm suspicious that they might be doing this for other reasons, more likely to punish people for requesting more aggressive permissions.


That has been the reality for the Apple AppStore for many years. Seems to not have hurt them. And Chrome has a much larger market share (or browser share) than iOS has.

As long as you have that, you can pretty much do whatever you like to the devs, as long as you don't piss off the users.


Oh? How many billions of users are on Chrome?


Are you waiting for google to release a mobile operating system and put their browser on it?

That'd show Apple.


At least one, as of 2016.


At least in Firefox, updates typically roll out immediately while any manual review happens asynchronously.


While I disagree with the 1-week review times that google imposes, doesn't the firefox approach defeat the purpose of the review in the first place?

While async review is better than no review, if someone pushed a malicious update and it got caught in the async review a few days later, the damage has already been done. Just a trade-off to think about.


It's a tradeoff for sure; I tend to think the risk of unpatched software outweighs the risk of software hijacked from the author, so I prefer the Mozilla model. IIRC, there are automatic checks (i.e. permission changes, certain API calls, etc.) which trigger a manual review before publishing. That said, even the manual review is no guarantee that malicious software doesn't get published (from my experience the reviewers are not always experts).


I launched my side project (https://www.checkbot.io/, a website best practices checker) as a paid Chrome extension and have been happy with the experience so far. The browser extension platform lets me easily support Linux, Chrome OS, Mac and Windows, with automatic updates and small installation size (~1MB). I get traffic from people discovering the extension via the Chrome store, and users can install and launch the app in seconds which lowers onboarding friction.

Compared to native apps, I think browser extension based apps reduce a lot of headaches for developers and users.


Looks useful. How do you manage distribution and the payments? I am looking for examples and resources of successful paid extensions that offer subscriptions https://readmo.app


I use https://paddle.com/ for subscription payments. Paddle deal with EU VAT for you which is a big benefit for me.

The Chrome store lets you take payments but it's not very flexible and it's tied to Chrome.


Ah sweet, so the extension is hosted on the Chrome Web Store but you manage payments and licenses with Paddle?


That's right. This gives more flexibility like selling a license that works on Firefox + Chrome, and selling team based licenses.

I wouldn't be surprised if Google killed Chrome store payments in the future as well - I pretty much never hear mentions or updates about it. I wouldn't want to lose all my subscribers if that happened.


Makes perfect sense. Thank you for the tips!


Not just browser extensions. I miss an API in all kinds of software we use at work.

No wonder half of the companies in this world run on Excel.


Me too.

It's a thought I keep repeating that is probably worth expanding into an article - modern software eschews interoperability, and in particular, SaaS is based on preventing interoperability. What used to be a desktop application operating on an independent source of data (filesystem) now vacuums the data and offers it back over an official interface and an extremely limited and locked-down API.

Wrt. those APIs, note that what just a decade ago on desktop was considered normal interop, nowadays often requires the interoperating parties to sign contracts, adding a legal dimension that further shuts out end users.


I am seriously considering learning emacs. As a lazy IDE guy, it feels like yak shaving and a lot of work, but I can sort of see how having that much control over your environment would be great, and setting up some cool workflows to make stuff quick and easy in the long run.


Browser extensions are being underrated deliberately by browser developers. Ever since we lost XUL Firefox, anyone who wants to really do anything worth doing around a web browser should have already switched to Pale Moon. Doubly so with Google's Manifest v3, which is going to kill selective content download management.


Even as much as I want my old Firefox extensions back, I really don't feel I can trust a small bunch of developers to keep something as complicated as the old Firefox patched in this day and age.

Am I wrong?


As far as my user experience is concerned, yes, you're wrong, and that's okay - you're working from opinion & I'm speaking from personal experience only. I've had no security breaches of my PC since I switched to Pale Moon. I've also had an overall better experience than with any multi-process software of any kind. I understand your concern, but all I can say is, try it out. Keep it sandboxed (as you should with all port-accessing software tbh) if you don't trust it. If you have cause for a genuine security complaint, then say so. But please do recognize the target audience for PM before asking about introducing the latest tech widget or WebDRM.



Sounds good! I'm all for more competition in the browser space and will consider using it for my personal laptop as a first step.


Not even a little bit. The advances that were made in multi-process Firefox, in reducing memory usage, and in speeding up Firefox are all on the back of WebExtensions existing. It freed up the developers from having to worry that some internal API getting changed will break the extensions. It simply had to go.


Memory usage and speeding up firefox have nothing to do with keeping the browser patched. Plenty of people thought it was fast enough and fine with memory for their use case, but very few people will be fine with gaping security holes.


> Memory usage and speeding up firefox have nothing to do with keeping the browser patched

Yeah? They are tangential goals, but that's the problem. In moving away from browser extensions to WebExtensions you have a completely diverging codebase that's almost impossible to keep patched, because the architecture is fundamentally incompatible and the patches will not be able to be applied in all but the most trivial of cases.


You're pointing out one of the main complaints that's led to PM use. The codebase of electrolysis-Firefox & XUL-Firefox have diverged to that degree. And some of us have found that single-process function and customization features are what we want, not Chrome-lite. For my part, PM loads faster and works longer without crashing my PC than any of the multiprocess browsers.


Next (https://github.com/atlas-engineer/next) might be a truly hackable browser.


All I really want is emacs for the web.


I don't agree with the author. He pays lip service to security being important, but then proceeds to ignore the threat because he thinks extensions are great. I think people should be more hesitant to install a browser extension than just about any other piece of software.

The threat is absolutely real. Bad actors regularly offer large paydays to lone developers with popular extensions so they can roll out an update that quietly adds a backdoor.

There's at least some publicly documented evidence that Raymond Hill (uBlock Origin) isn't likely to cave to this sort of pressure, but do you really believe that none of the other authors of your fifteen favorite extensions would look the other way for $100k?

Keep in mind that these offers don't look like "Here's some money, please let us roll out an evil update to your extension." They look like "Our company has a product with a similar name. We love your extension and would like to offer to acquire it from you so that we can use the name. We'll even let you keep the rights to your software so that you can re-release it under a different name if you'd like!" They'll make it really easy for the developer to remain in denial about what they're actually facilitating.


> Keep in mind that it's not "Here's some money, please let us roll out an evil update to your extension," it's "Our company loves your extension and would like to acquire it."

I get messages like this every now and then for mobile apps and browser extensions I manage and they're painfully obvious to spot.

They're often from sketchy looking generic email addresses, have no information about the buying company, don't even attempt to demonstrate knowledge of the app, and most importantly mention nothing about how they plan to grow the app.

It's always just "would you like to sell?" and "how many users do you have?".

A recent one:

> From: *@gmail.com

> My name is John.

> I have noticed your google extension "https://checkbot.io/ Checkbot: SEO, Web Speed &amp; Security Tester ", its looks interesting would you considering to sell it?

> Best regards

I've had more elaborate ones but they're always generic with no obvious business interest in the specific app they're asking to buy.


Same, I get an offer every month or so for some of my extensions. One has quoted an offer of $0.25/user for the Firefox version of Search by Image in their introductory email. That kind of money would significantly improve my life, but it's all too obvious what they would do with my users.

Most extension developers are not getting any significant income from their work, despite serving millions of users, and unless browser vendors begin to recognize the value of that labor and provide better tools for sponsoring developers, we will continue to be vulnerable to such offers.


> Most extension developers are not getting any significant income from their work, despite serving millions of users, and unless browser vendors begin to recognize the value of that labor and provide better tools for sponsoring developers, we will continue to be vulnerable to such offers.

Any ideas what specifically could be done to help? You can integrate your own payment systems into extensions, and there are some ad vendors that support browser extensions.


I'm specifically talking about extensions that strive to be free and accessible for everyone.

A great example of how Mozilla is deprioritizing contributions for extension developers is the new design of Firefox Add-ons. The contribution button was pushed down below the fold. Previously it was at a very prominent place next to the install button, and several modes for requesting contributions before or after installation have been deprecated.

Yes, adding a donation button to your extension is a possibility, but it would send a whole different signal if Mozilla encouraged users to support developers from a unified and trusted user interface, and experimented with better ways to ensure that users are aware of support options for their favourite extensions.

Mozilla directly supporting extensions which they consider to be of great value could also be explored.


Make paying for them simple and easy. The app store concept works very well for a reason, a billion individual one-software web stores don't.


Google could pay developers for popular extensions.

The money doesn't have to come directly from the users.


> He pays lip service to security being important, but then proceeds to ignore the threat because he thinks extensions are great. I think people should be more hesitant to install a browser extension than just about any other piece of software.

I think he might not be willing to engage with this further, because extensions sit close to the border defined by the minimum of "security + usability"[0]. Making extensions more secure eats into their usability; to resolve all the security issues, you'd have to kill extensions altogether.

> Bad actors regularly offer large paydays to lone developers with popular extensions so they can roll out an update that quietly adds a backdoor.

I don't think there is a way to avoid that. It boils down to the rule that security is measured in dollars - the ultimate attack is bribing the controlling party; the ultimate defense, making it not worth it for the controlling party to sell out for any amount of money the attacker could assemble.

--

[0] - I.e. after accounting for obvious wins-wins, you reach a situation where security is opposite to usability. Once you're there, it becomes a trade-off.


> Raymond Hill (uBlock Origin)

It's uBlock Origin because the uBlock name itself got taken over by swindlers. So even well-intentioned actors aren't immune to this kind of abuse.


You almost need a Hollywood action movie trailer for this story (i.e., hero going into peaceful retirement, bad guys kidnapping his child and taking over, hero is back and dispensing own brand of justice, to save everyone).


Worse still, the design of the most popular extension API is intrinsically poor at protecting users from malicious extension developers (or extension resale to malicious developers). Until Google and Mozilla fix the design, people shouldn't be installing more than a handful of extensions, because those extensions usually end up having complete access to all your data for convenience reasons.


Extensions have a similar model to phone apps. They have a basic set of APIs available, and beyond that they have to ask for permission before they can use more. E.g. I have a Firefox addon called "t.co unmangler" that only has permission to access my data for twitter.com and can't read anything else. If an extension is compromised and tries to access more than it was allowed before, the browser will block it until I grant permissions.


The problem is that the granularity of the permissions is extremely coarse, so if you scan the top extensions on the Chrome Web Store, a vast number of them request access to All Pages, HTTP and HTTPS included. This is because various useful primitives require that access.

Another problem is that requesting a new permission in an extension update has extremely bad UX, to the point that any thoughtful extension developer will just ask for everything they could possibly need up front. If you request a new permission Chrome just silently shuts off your extension until the user goes digging around in the UI for the permission request. The last time I tried this about 5% of my users figured it out and the rest thought it was broken.
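For reference, the mechanism being described is the optional-permissions flow, which (in the MV2-era API, with a hypothetical host pattern) roughly looks like:

    // manifest.json (excerpt): declare the broad access as optional rather than required
    //   "optional_permissions": ["*://*.example.com/*"]

    // Later, in response to a user gesture (e.g. a button in the options page):
    chrome.permissions.request({ origins: ['*://*.example.com/*'] }, function (granted) {
      if (granted) {
        // the extension may now touch example.com pages
      } else {
        // fall back gracefully; the user declined
      }
    });

It can work, but as described above, the prompt-on-update and silent-disable behaviour pushes developers toward requesting everything up front instead.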


This is not limited to Chrome extensions. If you build and publish any kind of software, there is a high chance that someone shady will come along and offer you money for it, or to become an "affiliate" partner.


Right. Extensions are just software and no need to pretend they have it worse than apps in app stores.


What about a review service? N peers => signed extension.


For any number N, it's far easier for a crook (who can buy or fake verifications) to get N positive verifications by trusted peers than for an honest small developer.


> roll out an update that quietly adds a backdoor.

You can turn off automatic updates on an extension-by-extension basis.


These same risks exist with any app. I totally agree it's a valid concern and deserves more than a footnote, but I don't think it counters anything that the author is praising about extensions specifically.

At least extensions exist within an ecosystem where they are subject to manual review and approval / removal. And in terms of updates, any changes to the permissions show a prompt to the user as though the extension were newly installed.


Manual review doesn't work. The volume is too high, and subtle trickery is too easy.

To first order, there is no permissions model for browser extensions. You should assume that an extension can see and do everything that your browser can see and do.

This is also a huge problem with mobile apps, but the problem is at least acknowledged, and there's some degree of permissions and sandboxing, even though it's not completely effective, and even though most apps ask for every single permission anyway. But in general, yes, you should take a similar approach to mobile apps, and only use the minimal set that you absolutely can't live without. Don't install games or stupid shit. We already know that basically every weather app on Android contains malware.

This is also a problem in free-for-all developer library ecosystems like npm, as we keep seeing. Popular dependencies get taken over or sold and then all of a sudden lots of servers are running malware.

Software may be eating the world, but it's really important to know what software you're actually running. You can't just build a house of cards and hope for the best.


I read this as an argument for open source. It's not a guarantee, but it's a good heuristic for an app's trustworthiness.


Yeah, I agree. It's not perfect, but it sure helps. This is also why a lot of the big open-source distributors (e.g. Debian) are working towards fully reproducible builds.


Personally, I just use two browsers: Chrome with no extensions for work and for anything involving sensitive data, such as logging into my Google account, online banking, Amazon, eBay, etc. For everything else I use an old version of Firefox with proper XUL support, so I can stay comfortable and retain essential functionality during casual browsing.


Presumably you're using an outdated Firefox ESR version with known exploitable holes then? The oldest supported Firefox ESR (60.0) was the first ESR version to enable Quantum and drop support for old-style Mozilla Add-ons.


Yep, along with uBlock Origin and NoScript in whitelist mode, with browser and extension updates disabled. Frankly, it would be less hassle to have my identity stolen than to operate all day, every day without XUL extensions. Granted, my attack surface and personal needs are far different from the average Joe's, so my method isn't right for most people, but I've already saved so much time and effort that if, starting tomorrow, I had to spend the next month living off the cash in my wallet while sorting out a case of theft, it would STILL have been worth it.

Hell, I already saved a whole day of fuckery by being immune to Mozilla's "disable all extensions everywhere by having the certificate expertise of a four-year-old" screw-up.


Here's an opinion that is likely to be controversial: I like Gnome Shell for exactly the same reason. To be fair, it's been something like 7 years since I used it, so maybe things have changed a lot. I ended up abandoning it because I don't like Gnome in general (I want something significantly more lightweight). But I loved the idea that I could completely change the way my window manager worked by writing a surprisingly small amount of JS. Not only that, but it had (I hope still has) hooks into Mutter, so you could do anything you want to the compositor as well. For example, one of the things I did was to have windows that zoomed their contents when I resized them, rather than increasing the visible area in the window -- I did it because I have terrible vision, and virtually every time I want a bigger window it's because I want it magnified.
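
For a sense of how little JS it takes, the entire skeleton of a shell extension is just three functions (a minimal sketch of the old pre-GNOME-45 extension format):

    // extension.js -- the running shell's internals are reachable from here
    const Main = imports.ui.main;

    function init() {}

    function enable() {
        // e.g. pop a shell notification; Main also exposes the panel, overview,
        // and window manager for deeper surgery
        Main.notify('Hello from my extension');
    }

    function disable() {}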

I just noticed Xlambda, which was featured very recently on HN: https://news.ycombinator.com/item?id=20316920 I wonder if there is a compositor I could talk to as well... compton doesn't do the kinds of things I want to do...


On the contrary: browser extensions are horribly overrated. They're a massive security problem (the number one place malware is found on a computer), often for the benefit of replacing the word "cloud" with "butt". They are rarely adequately audited or restricted, and they have far more access to private data than anyone generally realizes.


>> replacing the word "butt" with "butt"

Do you mean to say that extensions do nothing?


Agreed. Those who think extensions are harmless clearly have their heads in the butts.


It's disingenuous to imply that most extensions are just replacing "cloud" with "butt". There are plenty of browser extensions that are necessary and vital for millions of people.

I agree that extensions need to be audited more thoroughly, though Firefox does audit every browser extension offered through its store. Safari similarly does, and Chrome just needs to catch up.


Making your own is still empowering, though.


Browser extensions are an important part of my small business, since there are some things I can't reasonably do without them. https://autoplaylists.simon.codes is a good example: Google Music just doesn't make some metadata available over its OAuth APIs.

I understand the broader extension security situation is pretty atrocious, but I like to think there's some small improvement from url-limited extensions like mine (that would otherwise exist as scripts that ask for plaintext credentials).
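
Concretely, a URL-limited extension mostly comes down to a narrow match pattern in its manifest; a rough manifest v2 sketch (the pattern and file name here are just illustrative):

    "content_scripts": [{
      "matches": ["*://play.google.com/music/*"],
      "js": ["main.js"]
    }]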


I gave a highly related talk in 2010 called "Even Software Should Have Screws" at TEDxAmericanRiviera, based on my background working in the iOS jailbreaking community, where I maintained a software ecosystem similar to browser extensions, but for apps and system software.

https://youtu.be/ReKCp9K_Jqw


> personally use Chrome extensions that fill in my passwords, help me read Japanese kanji, simplify the visual design of Gmail, let me highlight and annotate articles, save articles for later reading, play videos at 2x speed, and of course, block ads.

So: autofill, autotranslate, HTML-only mode in Gmail, (?), literally just bookmarking it, any HTML5 video player, and of course blocking ads (which most browsers seem to be moving to do by default). These are all either offered by the browser by default, or will be (though Firefox seems more interested in ad blocking than Chrome right now).

Obviously they may not do it the same way, but for someone who is suggesting add-ons offer a lot of power, the writer is not actually using most of that power. I kind of agree with browser developers that, more often than not, extensions just offer a new vector for malware, and no one really understands what power they have, so they make bad choices.


Shameless self-promo: I built Refined Twitter Lite, which customizes the new Twitter, introducing features like a single-column layout.

I primarily built it for myself, but maybe some of you folks will find it useful too: https://chrome.google.com/webstore/detail/refined-twitter-li...

It is open source, of course: https://github.com/giuseppeg/refined-twitter-lite


We won an internal hackathon by writing only the API and no front end, using a browser extension to inject the client into the third-party page we couldn’t get API access to in time. It sure was easy to write.


We recently launched a browser extension (https://www.getnobias.com/) and have been happy with the use cases we've been able to build around existing websites. Much easier than convincing news publishers to integrate our products directly into their websites. Users get to choose their experience.


I was involved in developing a simple app as a WebExtension. Not gonna lie, the experience was awful: for example, there's no IPC support, the API design is way too complicated, and there's that really hateful manifest schema file, which echoes the horror of Android's API permission XML. I gave up at some point, but I could provide the source code if I get a chance to recover my files.


Well, no, it did have IPC via WebWorkers after all; however, it is also very difficult to work with: data sharing between scripts is a complete clusterfuck. I found my code; it is written in Vue/Vuex, and I will attempt to convert it to Nuxt and maybe upload it to GitHub.
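
(For reference, the usual cross-script IPC in WebExtensions is runtime messaging rather than Workers; a minimal sketch using Firefox's promise-style API:)

    // content script: ask the background script for something
    browser.runtime.sendMessage({ type: "get-settings" })
      .then(settings => console.log("settings:", settings));

    // background script: answering by returning a Promise resolves the sender's call
    browser.runtime.onMessage.addListener(msg => {
      if (msg.type === "get-settings")
        return Promise.resolve({ theme: "dark" });
    });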


Here's a shameless link to my small batch of extensions; hopefully some of you find them useful. All open source, of course!

https://addons.mozilla.org/en-US/firefox/user/13170802


Does anyone with a foot (or feet) in pentesting/appsec feel that there could be a good omni extension that encompasses cookie editing, header editing, local storage, proxy toggling, and other potential features?


I love that extensions turn the idea of browser differences into a strength.

It's a reminder that the websites you make don't just go into a black box; they have to co-exist with individual user preferences and needs.


Sometimes popular, abandoned browser extensions get bought up by malicious actors who inject malware into your browser without you ever knowing, because extensions get updated automatically.


That's why you uncheck "Update Add-ons Automatically" in the add-on manager.


Standard users would have no idea they can do that, or wouldn't even bother. Extensions are risky additions to the browser because they're third-party code that can read your web pages and local storage values.


Yes, but I wasn't talking about standard users.

Also, most of my extensions are not third-party code, and I want them to have full access.

For the rest, I update them once in a while and go check

    ~/.mozilla/**/*.xpi
for changes with something like this.

    # blow away the previously unpacked extension directories (but keep .git)
    find -maxdepth 1 -mindepth 1 -type d ! -name .git -print0 | xargs -0r rm -rf

    # unpack each .xpi into a directory named after the file
    for f in ../extensions/*.xpi ; do
            unzip "$f" -d "$(basename "${f%.xpi}")"
    done

    # snapshot the tree so git diff/log shows exactly what changed between updates
    git add -Af .
    git commit -m "Changes"


This risks using a known-vulnerable extension unless you monitor releases some other way.


Vulnerable extensions are exploited when you visit websites that can abuse the holes in the extension (XSS, for example). You have to visit a site the extension targets that carries the attack payload for the extension.

I think that taking this risk by updating manually is more acceptable than having the malicious code auto-installed as soon as the attacker releases it, no matter what you do.


I would really like a browser that attempts to be extensible the way Emacs is. No modern browser appears committed to extensibility as a key feature/differentiator.


You mean like Next[1]?

[1] https://github.com/atlas-engineer/next


Surf[1] is very extensible if you know some C and shell. WebKit-based, for better or worse.

[1]: https://surf.suckless.org/


Check out some kickass WebExtensions by Nodetics: https://nodetics.com


I would say that maybe they are underused or not well known by the general public, but not underrated.


Hacking extensions is underrated; your browser software is unbrowseable.


I haven't cared since XUL died, and I don't imagine I will. New extensions are just webpages.

I can already make webpages.



