Firefox 85 cracks down on supercookies (blog.mozilla.org)
1569 points by todsacerdoti on Jan 26, 2021 | 760 comments



"In the case of Firefox’s image cache, a tracker can create a supercookie by “encoding” an identifier for the user in a cached image on one website, and then “retrieving” that identifier on a different website by embedding the same image."

Clever. And so frustrating that optimisations need to be turned off due to bad actors.


Note that the root of all evil here is Javascript being opt-out instead of opt-in (and effectively mandatory for a big chunk of the internet these days).

Letting any website and their friends (and the friends of their friends) run Turing-complete code on the client PC probably sounded reasonable when the web was created, but it seems incredibly naive in hindsight. It's not as bad as ActiveX and other plugins, but it's pretty close.


> it seems incredibly naive in hindsight

Oh stop with the dramatics, please. JS has brought us an immense amount of innovation on the web. It has lowered the barrier of entry to programming and introduced tens of millions of people to the world of development.

If you're on HN the odds are that directly or indirectly, JS is one of the reasons you have a job today, and that you can execute it remotely. And today specifically, it's the only reason many kids can do remote learning as efficiently as they can.

Dude/dudette, I'm not a big fan of JS but let's recognize the good it has brought to the world, instead of complaining in what amounts to "if only we still lived in caves, we wouldn't have those pesky problems with online advertising" or something.


It can have been incredibly naive and have brought great benefit at the same time. In the 1990s did we foresee that in 2021, tracking our individual web activity and building profiles of our behavior would be a pillar of the business model supporting most of the online economy? Did we foresee that we would be looking to government regulation to prevent comprehensive profiles of our individual web browsing activity from being sold by private companies, and bought not only by other private companies but by government agencies as well? No? Then I think we were, in hindsight, naive. I'm not denying any of the benefits of Javascript by saying that. We did not foresee all of the upside of Javascript, but we were naive about the costs, too.


Not predicting what will happen thirty years hence is naive? The web was all shiny goodness and going to be a panacea for anything you could name back in the late 90s. The fact that it did not come to pass isn’t naïveté so much as a disappointing repeat of what our current economic systems of reward drive companies to do.


The general pattern was predictable - I was involved in making an early ad blocker in 1996. Adbusters magazine was a thing then.

Technically, the detail of super-cookies is inventive and surprising. The general trend for how capitalism inevitably both uses and abuses advertising was predictable.


>JS has brought us an immense amount of innovation on the web. It has lowered the barrier of entry to programming and introduced tens of millions of people to the world of development.

JS didn't lower the barrier to programming at all. On the contrary, programming with VisualBasic and SQL was 10 times more accessible and productive than web development. What enabled millions to write software was the availability of computers in every household.

What the Web did was revolutionise software distribution. After making a change to my 1990s style VB program I had to pack a stack of floppy disks and travel to my customer to install the new version on their PCs, migrate the data, make sure everything still worked in spite of other programs installing an overlapping set of DLLs, etc.

With the Web, we took a massive hit in terms of developer productivity and complexity, but the distribution model trumped absolutely everything. It also made things possible that would have been completely unthinkable, such as running software made by a large number of developers you don't know and don't necessarily trust with all your data.

To this day, the Web is the only reasonably secure runtime environment that isn't centrally controlled by some gatekeeper with its own agenda.

So I actually agree with most of what you have said elsewhere in this thread. But I disagree about lowering the barrier for developers.


>JS didn't lower the barrier to programming at all.

>But I disagree about lowering the barrier for developers.

I think OP meant it as browsers that run JS. The built-in console is so useful for testing/practice. You can run JS without any setup on the most popular OSes - Win/Linux/Mac.

You can practice JS in the console while viewing a YouTube tutorial or following a JS blog / Mozilla dev page.


I know that's what they meant, but I disagree that it lowered the barrier to creating actual applications or to getting into programming.

Everyone who had MS Office installed back in the 1990s (which was basically everyone) could easily run some quick VB code to try things.

But the difference was that you could also create proper applications with a UI, a database and (optionally!) some glue code.

We just had no good way of distributing our apps. Creating anything collaborative that wasn't restricted to a local network was exceedingly difficult as well.

The Web fixed all that, albeit at the cost of cratering developer productivity, a massive increase in complexity and higher barriers to entry for new devs.

And if you're asking me why the number of developers exploded while the barriers to entry supposedly went up, my answer is that the new opportunities that came with unrestricted worldwide distribution of software trumped the narrower issue of writing that software in the first place.


>Everyone who had MS Office installed back in the 1990s (which was basically everyone) could easily run some quick VB code to try things.

Can't be everyone, how do you know the numbers? How many Office users actually used VB? MS famously stifled competition in internet browsers. 1990s numbers still won't give a fair picture.

>The Web fixed all that, albeit at the cost of cratering developer productivity, a massive increase in complexity and higher barriers to entry for new devs.

Why blame the web for the higher barrier to entry? JS is doing fine. See a few surveys for popular languages:  https://insights.stackoverflow.com/survey/2020#most-popular-...

https://madnight.github.io/githut/


I agree with simias.

What does JS have to do with my VPN? What was wrong with Skype? Still beats the pants off others for quality.

I cannot think of much good that aggressive whitespace, hamburger menus, infinite scrolling, HID hijacking, copy-paste prevention, trackers, etc., etc., have brought us, besides into the world of Aggressive Ad Arbitrage.

Need https://motherfuckingwebsite.com/ be mentioned?

The real powerhouse was Flash, then HTML5, and now WebAssembly, or as I like to call it, reinventing the wheel while simultaneously and conveniently blocking ad-blockers.


> WebAssembly, or as I like to call it, reinventing the wheel while simultaneously and conveniently blocking ad-blockers.

This meme needs to die.

WebAssembly doesn't prevent ad blocking at all. Ad blocking relies on blocking network requests for the most part, which can still absolutely be blocked when done by WASM.

Some ad blockers also optionally rely on removing DOM nodes for greater coverage. It's important to note that this technique is used to reduce visual clutter, but doesn't prevent tracking, and doesn't increase browsing performance as the ad still gets downloaded before being removed.

Yes, a purely canvas-based app could work around DOM blocking, but WASM has nothing to do with it: you can do pure canvas-based UIs in javascript too, that's what all the modern web games do.


> I cannot think of much good...[JS] has brought us...

This has shades of "What have the Romans ever done for us".

https://www.youtube.com/watch?v=djZkTnJnLR0


Still waiting for the sanitation, aqueducts, roads, peace, etc...


So .. because you don't like design trends, you're saying JS is useless?

That's like saying wool is useless because you don't like modern fashion.


It turns out: securing on-network services is a PITA that school network admins don't need more of; developing good ones is expensive and best outsourced to a SaaS; on-network/on-device resources need more extensive vetting than web resources; on-network/on-device resources take days to try out and roll out with the cooperation of an understaffed IT team, while web resources take one class and a link.

Skype is superseded by a half dozen more targeted classroom video chats - none of which need installing or require the kids to have accounts.

From the perspective of more easily deployable apps that are more useful to end users, WebAsm > HTML5 > Flash. The fact that the ad monopolies run our major web developments makes it convenient for them to piggyback on these changes, but isn't the drive behind them.


I like that the website you linked still has Google Analytics JS embedded.


It states what it gets right: it's lightweight, accessible, compatible, responsive, legible and it works.


F12 it ;)


You're blaming the language for how people use it. It's like saying C is bad because people use it to write root kits.


C is for writing trusted native code. JS is for writing untrusted code in a sandbox that is way too leaky.


946ms for me. has google analytics


> It has lowered the barrier of entry to programming and introduced tens of millions of people to the world of development.

I think Basic, Pascal or Python achieved more at that.

> it's the only reason many kids can do remote learning as efficiently as they can.

The main reason is the internet and TCP/IP; that's essential and irreplaceable. Another important reason is H.264; equivalents do exist, but given the state of hardware acceleration it's irreplaceable, at least on mobile devices.

JavaScript ain’t an essential tech, as it’s replaceable on both clients and servers. Your kids would learn equally efficiently using a native app instead of these JS-rich web sites.


I've worked with Python for 17 years. I've taught Python, Javascript and Typescript. There is a universe between how accessible JS is vs Python.

And TCP/IP may have helped, but just because it's an essential part of the stack doesn't mean remote learning could have happened without JS existing (in the time it did). The web would be a glorified FTP server if some people here had their way.


As a Python programmer myself I’m curious, what’s the difference?


Your language interpreter comes built-in with your computer


So "Download pycharm" is enough of a barrier that it creates a "universe of difference"? I find that hard to believe.


Yes, it is. You’re missing the step beforehand which is “know that you can download PyCharm.”


That's an entirely different question to whether it should be on by default for random sites, which is what GP is actually talking about. I would go farther and say that there was never a time that it looked reasonable.


Don't you think JS being so available is part of what made it so popular, and thus what made it accessible, and what brought programming to millions who wouldn't have done it otherwise?


I think that asking users "do you want to let this site run JavaScript?" would have been an acceptable trade-off in availability.


Look up alert fatigue.


I've heard of it, actually. I don't think that's a good reason to enable JavaScript by default any more than it's a good reason to give webpages access to my camera, microphone, and filesystem by default.


Good things are happening in spite of JS, not because of it. It's a silly language, the type system is atrocious, and the entire ecosystem is a joke (look at the left-pad incident).


Typescript has an excellent type system and one of the best development environments the entire dev world can use. You know what ecosystem it's a part of? The JS one. That "entire" one you're talking about has produced superb work such as React, V8, is pretty much responsible for Rust existing, and so on.

JS has dumb flaws. It doesn't mean anything is happening "in spite" of it. If anything, innovation is happening thanks to JS AND in spite of its flaws. But don't think for a second we would have even one tenth of the developers we have at our disposal today if JS and the internet didn't massively lower that barrier of entry. And fewer developers means less progress overall in the entire industry, not just fewer left-pad libraries.


> Typescript has an excellent type system

It has a type system. "Excellent" feels a bit strong.

https://blog.asana.com/2020/01/typescript-quirks/

https://www.executeprogram.com/courses/typescript/lessons/ty...


This feels like complaining about an "excellent" game because you found a couple of bugs in ...

That second article is legitimately cool though.


The criticism is about an executable/scripting code platform in the browser as an opt-out - nothing specific about the semantics of JS.


JS brought both problems and benefits. The fact that it brought benefits doesn't prove that it didn't bring problems.

I think we could have done better if we knew what we were doing.


This is just such a weird centrist point to make. Is there any substance to what you're saying other than "there's good and bad in everything"?


So did using computers in single user mode, but we wouldn't dare do that today. JS is, arguably, over-reaching and way too powerful to be turned on by default on the web.

The sad part is we're probably just one decent privacy bill away from making almost all of this go away. JS + an anti-regulatory political climate is the larger problem. The EU has tackled this head on recently with its privacy laws. At a certain point, technical work-arounds just don't work and the bad commercial actors will always win unless there's regulation to stop them.


> It has lowered the barrier of entry to programming and introduced tens of millions of people to the world of development.

And wasted zillions of man hours to relearn the latest web framework every 2 years. Good job JS. We all love you.


There were a variety of scripting languages.

You don't acknowledge the need to sandbox code, regardless of language.


Applications can exist outside of the web too.


Yes but if it was „incredibly naive“ to download and run JavaScript in the browser sandbox how naive would it have been to download and run native code outside of any sandbox?

The browser runtime is what enabled us to use software provided by a huge number of developers of varying aptitude and motivation without putting in place some centralised gatekeeper with its own vested interests.


> Yes but if it was „incredibly naive“ to download and run JavaScript in the browser sandbox how naive would it have been to download and run native code outside of any sandbox?

It would be naive to download and run native code for every website you visit, yes. A few that you trust and where you think that is warranted is a different matter.

Running javascript in a sandbox provides the illusion of safety so it gets enabled by default while still creating tons of problems.


> A few that you trust and where you think that is warranted is a different matter.

We tried that, and the security issues it caused were orders of magnitude more severe than any of the problems caused by defects in an up-to-date browser sandbox. Basically all consumer PCs used to be infested with viruses all the time. It sparked an entire virus scanning industry.

It takes far too much discipline and diligence to make sure that you can trust the motivations and security capabilities of all your software providers. Sandboxing is good. It's the only thing short of the most heavy handed, restrictive and centralised control that has ever worked.

The security issues we have on today's Web are overwhelmingly unrelated to client-side security. The problem is protecting the data that is stored on servers and the incentives created by ad based business models. All of that is equally problematic regardless of whether you run native code or sandboxed JavaScript.


That's not my point. You wouldn't have had the development speed we've had the last 30 years. You wouldn't have had the same amount of devs. The same pickup on the web. The same worldwide connectivity.

Javascript is one of the, if not the, most influential technologies of the past 100 years. It changed the course of history. Can you say the same of, like, wxWidgets or whatever UI toolkit you'd be using for your native app?


>Javascript is one of the, if not the, most influential technologies of the past 100 years.

That's... one hell of a claim.


JS is indeed what brought us all the speed in software development. People could prototype their ideas crazy fast, and they still can. Scripting in browser had an immense positive impact on software industry, and that DID come with many negatives, but you always have the option to completely turn it off if those bother you.

A sandboxed environment was a huge idea and the browser has been the primary example of how great it can be. The app model in mobile with permission isolated access is more or less the proprietary re-implementations of the browser sandbox.


Prototyping fast does not guarantee shipping robust software fast, which is in part why the JS ecosystem has been having problems.

To apply the common construction analogy to JavaScript: no one would call a quick sketch on a napkin a valid blueprint for a building.


> (...) JS ecosystem has been having problems

what problems? Yeah, stuff like left-pad happened with NPM, but that has nothing to do with sandboxed scripting in the browser. Also within my already-depressingly-long career, I had more problems with deploying "robust" .NET desktop apps (WPF & Forms) and even Qt based supposedly "cross-platform" apps than web apps. The web is the most robust platform I've ever worked with, and that's by a huge margin.

JS as a language has problems[1] but the idea behind it proved itself to be great.

[1]: and with the latest additions it's one of the better programming languages to work with, although the standard library sucks (or more like, nearly non-existent). But that's totally another topic.


I 100% stand behind it. It rivals everything I can think of and then some (the internet itself included).


No no no. The problem isn't JavaScript or web capabilities here. It's the companies and people who use them in evil ways. I would rather handle that even if it's much much harder.


I don't think it's either/or. Yeah we need to act against companies abusing it, but we also need to be prudent and put locks on our houses when we know there are thieves and spies who would love to sneak inside and take notes on our every move.

What frustrates me the most is that we can't individually disable web APIs that provide no value to us. Yeah, that would give greater entropy to fingerprinting, but I'm willing to take that tradeoff if I could block WebRTC, motion sensing, screen size detection, or WebAssembly, for example, except on selected whitelisted websites.


Doesn't the same argument apply to basically every security and privacy feature?

"No no no. The problem isn't unencrypted network connections, it's companies and people who use them in evil ways. I would rather handle that even if it's much harder."

Should we not have introduced HTTPS? Permission models on modern operating systems? 2-factor authentication?

What about Javascript makes it different to the problems solved by these other features?


Yea... You need either a law/sanctions, or a technical restriction that can't be circumvented.

Hopefully both, someday :)


And how would you address this problem?


I browse in firefox with javascript turned off, in ublock, with a bunch of other restrictions [0], and temporary containers. I make exceptions for a couple dozen sites, like my bank, open street maps, etc. Youtube is my only soft spot here, the rest of google I keep blocked. I can make one-off exceptions to read a tab in front of me, but that's not routine, it's not hard to find sites that support this.

[0] https://github.com/pyllyukko/user.js


> Youtube is my only soft spot here

http://youtube-dl.org/


How do you use the internet with JS turned off? Every time I try doing this, I undo it five seconds later because of so many sites breaking instantly. (I sometimes browse the web with w3m; sites blocking you because of no JS happens often)


Many sites break instantly. If I care enough about those sites, I whitelist them. Otherwise I move on.

On the other hand, many sites work better with JS disabled: they load much faster, don't slow down scrolling, don't break copy & paste, and so on.

edit: I should probably make clear that I'm not the same user you responded to


I can usually find alternatives that works without javascript. Or I find sources outside of the big tracking companies, like openstreetmaps instead of google maps, etc.

I'm not a promiscuous browser, if I click on an interesting link and I get an empty page because it insists on javascript I close that tab and figure I saved myself from wasting my time on a crappy ad-tracking-infested website, so many of which are lame anyways.

That being said I do sometimes enable javascript temporarily on sites when I am desperately looking for some specific bit of info, then I reset back to my default-off when finished.

I've been doing this for a couple/few years, and it doesn't seem like a big deal to me. I'm happy to have that reminder that a site is irritating me by insisting on javascript for no good reason; more often I go elsewhere and that suits me fine. It saves me from having to worry about so many nefarious things that go on in this space.

Why not give it a try? Ublock lets you default disable all javascript and enable it on the page you are looking at with a couple clicks, and edit/revert your list of rules whenever you want. You might be pleasantly surprised at how few times you need to fiddle with it after you set it up for your important sites.


Regulation seems appropriate.


Regulation as a solution for problems on the Internet is pretty stupid because jurisdictions are so diverse.


But the reason you're not brute forcing passwords on your bank's e-banking system isn't because you can't or because they are super secure, it's because if you do you'll probably go to jail.

Also, good regulation is important to keep the big dogs on a leash and to have something to go after them when they don't behave.

Technological solutions will of course come, but you also need regulation, especially until the technological solutions come.


> But the reason you're not brute forcing passwords on your bank's e-banking system isn't because you can't or because they are super secure, it's because if you do you'll probably go to jail.

No, this is really out of touch with how crime online works. People that want to hammer a bank rent botnets, use tor, vpns, etc.

Banks get absolutely zero protection from the law in that regard. It’s “illegal” but completely unenforceable to the point of being useless.


I’m confused by your response given that regulations are a _standard_ solution for problems on the internet.


So different website features per country? Or do you mean regulation decides how a browser implements it? Either way I don't see how that would ever work.


You are aware there are country-specific (or even more local) regulations covering companies today right? In fact essentially all regulations are. So why are you acting like my proposition is somehow unprecedented?


Because this is about websites, not companies. I'm not a company. Does my website use a different regulated subset of JavaScript than yours? If it is a company, do they follow local rules, or the local rules of the hosting company? What about sites that incorporate sources from different locations? Do JavaScript library developers now have to create a version of their script for each country's regulation? Then US regulation will decide the rules for all of the world's JavaScript development and deployment. No, which technologies can be used on a website doesn't belong with regulators but with developers. What can and cannot be tracked and collected belongs with regulators.


I'm kind of confused here. Websites have to do everything you're saying today already. There are already different rules regarding data privacy, protected speech, etc. in different countries. How you choose to fulfill those obligations is up to you. But yes if your point is that it requires work to follow various different regulations, then of course you're correct. In fact, that's the whole point. The regulations are there to change your behavior in the market in question. We've had regulations across varying markets since pretty much time immemorial.


My point is that regulation of web standards and browser features isn't possible - regulation of the data collected is. If you try to regulate the technology (supercookies, JavaScript, etc.), it isn't the data collector that gets in trouble with the law but the developers of the browsers, libraries and websites, like Mozilla, jQuery and "me". Regulating browser features is like regulating the maximum speed of cars. The ones punished for a "too fast Lamborghini" would then be the factory, not the owner driving 40 miles an hour. In other words, if the US says a Lamborghini must have a maximum speed below 50mph, the EU says 80mph and a tiny island somewhere says 300mph, where do you think Lamborghini will incorporate the part of their factory that gets their cars certified? Then we have a race to the bottom like with tax evasion, and the small developer gets strangled. Regulation doesn't belong on things like supercookies but on what you collect with them.


Isn’t it obvious? Most companies subject to regulations are physically located somewhere. It’s much harder to enforce regulations against companies that operate in every global jurisdiction at once.


There are like 3 companies that do the vast majority of advertising. They have operations in countries they do business with. Do you think they’re hard to track down? Web tracking has consolidated over time hence enforcement is _easier_.


"Regulation" mandated stupid cookie consent overlays on every damn European website. Thanks, regulation.


There is no regulation mandating cookie consent overlays - websites are free to not abuse cookies to track their users.


> There is no regulation mandating cookie consent overlays

Yes there is. A mugging is still a mugging even if the victim is "free" to give their wallet.

> websites are free to not abuse cookies to track their users.

You're free to not use such websites.

I dislike tracking and avoid being tracked myself. But this absolutely is a regulation and it's immensely disingenuous to pretend otherwise.


Nope, only on those that intend to abuse your personal information.


This is false, albeit depending on how broadly you define “personal information”. Regardless, the effect of the law has been clear.


Yep, the regulation applies to all websites that use cookies.


Does the fact that some regulations are ineffective imply that all are?


Yes yes yes. Luckily for us, these problems are technically solvable, no handling (?) "evil ways" (?) needed. The latter proposal is both ill-defined and a waste of time and resources. Better to spend those resources on designing more secure systems.


It sounds like they're talking about ETags [1] here. I don't think JavaScript has anything to do with it.

[1] https://en.wikipedia.org/wiki/HTTP_ETag#Typical_usage


ETags and Last-Modified headers can be used for long-term user tagging, but without Javascript they provide a lot less value in terms of tracking.

Suppose that you are visiting a web site with an Evil Embedding (an iframe tag or script that loads an Evil Resource on behalf of an advertiser). If your browser requests the Evil Resource without telling the advertiser the name of the top-level site, the advertiser gets little. They get to know that user 9062342154 is online and asking for Evil Resource X, but that's all. They can't even tell which specific website is being visited!

The real problems start when the top-level web site cooperates with the advertiser by running a Javascript "bridge" that acts both as an arbiter and a communication channel for siphoning your information. In addition to transferring information, the bridge acts as an anti-fraud measure to confirm that there is no foul play on the part of the web site operator. Since the script is Turing-complete and can be updated anytime, there is no way to restrict its actions.


> If your browser requests Evil Resource without telling advertiser the name of top-level site, the advertiser gets little.

But the advertiser will get the referrer so will know the domain name?

(And if they didn't, they could require the site operator to include the site name in the resource URL.)


Ooh, that's interesting. The way it was described made me think that you needed some JS to check if the data was in cache or not.

I guess it's probably a bad idea to let the browser send this type of potentially unique info to the server by default, but I understand how it makes sense from a performance perspective.

As far as I'm concerned privacy should always trump performance, but I realize that not everybody shares this point of view.


Somewhat off topic, but have you seen all of the recent (past 2 years) malware using WebAssembly? It's difficult to disable in Chrome, somewhat difficult to disable in Firefox, and no extensions seem to help. I'd love to make it as easy to disable as JS.


Webassembly is about to become as necessary for browsing the modern web as Javascript is today, for much the same reasons: it must be kept on so you can be tracked, so it will be "made necessary" on as much of the commercial web as possible.


No surprise WebAssembly is disabled in Chrome on OpenBSD.


> Note that the root of all evil here is Javascript

Not exactly.

The root of all evil here is HTTP. You don't need any JavaScript to plant cookies or other tracking assets. As proof, this technique is used to track users in email, such as by embedding a 1-pixel image retrieved via HTTP.


Which is why I turn off HTML email by default, and even when I turn it on for one message images are not loaded except by a second step. I rarely get to the first step and even less often the second.


HTML should be an opt-in too! :)


It didn't sound reasonable when JavaScript was introduced, and it didn't sound reasonable with WASM, yet here we are.


This is what tempers my enthusiasm for WebAssembly - it will undoubtedly be used for all sorts of user-hostile and malicious activity.


...and with zero accountability. Any website can push 10MB of obfuscated, debugger-resistant code onto your computer.

Obfuscated to resist ad-blockers and anti-tracking features.


asm.js was a thing before WebAssembly, with the same downsides, except maybe a little less browser support for debugging, inspecting, etc. asm.js was able to exist without browser support in Chrome, so WebAssembly support doesn't really bring anything new to the table.

At least that's how I understand it, care to give ideas why it's not a positive change? :)


> care to give ideas why it's not a positive change?

What I wrote above: it's going to encourage 10x more bloated and obfuscated websites.

Apart from the security and privacy issues, it will make the web even less accessible to users with slower Internet connections (some 2 billion people) and to visually-impaired users.


WASM is fast enough that you can internally sell it as a "speed improvement" to keep your developers, and anyone else who might question your intentions, happy. asm.js tends to be slower than plain JS.


> Letting any website and their friends (and the friends of their friends) run turing complete code on the client PC probably sounded reasonable when the web was created

No, it was already very unreasonable, especially as the browsers of the time had even worse sandboxing than now.


Couldn't this particular trick be done purely server-side?


This is why I whitelist Javascript on a per-subdomain basis. I think NoScript does this but I use a simpler extension[0] and then rely on uBlock Origin and multi account containers for privacy once I've whitelisted sites.

[0] https://addons.mozilla.org/en-US/firefox/addon/disable-javas...


In case you are not aware, uBO also allows you to wholly disable/enable JS on a per-site basis.[1]

---

[1] https://github.com/gorhill/uBlock/wiki/Per-site-switches#no-...


Ah, brilliant! Thanks for pointing that out to me (and for uBO).


Well, I agree with you, but to be honest it's not Javascript itself; it's the Web APIs that gave a lot of opportunities to surveillance capitalists.

But even if you turn off Javascript, you are not invisible or untraceable, you are just a little harder to track. I don't wanna name anyone, but there are tons of famous websites that track users by including <img> tags and referencing pixel trackers.

I usually browse with JS turned off; it's blazingly fast, and most websites work without any significant drawbacks.


Firefox explicitly talks about abusing IMAGE caches and you’re waffling about JS. The two things are not related here so stop trying to conflate them.


The image caches offer no means to identify individual users unless you're able to re-parse the client side (cached) data. Guess what component is essential to perform that client-side parsing?


So tracking pixels in email clients aren’t a thing? Trust me, people will find ways around it without JS. Reinventing Flash, for a start.


Yes, tracking pixels aren't a thing (anymore).

Gmail no longer loads them. Thunderbird no longer loads them. Most providers, offering online email, have already caught on.

> Trust me, people will find ways around it without JS. Reinventing Flash, for a start.

There is no need for that. Why bother, when you can write a mobile app in proper computer language and eliminate the middleman (browser)?


The fact Flash existed disproves your hypothesis.


Turing completeness isn't the problem. If Javascript was one of those non-Turing-complete languages that don't have loops, etc. then it could still be used to track people. I think you're using that term to make your opinion sound powerful, but the power comes from what it has access to.


> Clever. And so frustrating that optimisations need to be turned off due to bad actors.

Definitely this. Reminds me of the whole Spectre/Meltdown debacle.


As a fellow engineer, clever! As a user, damn you!


I'm curious how bad disabling this caching feature would be. Specifically, how often do you load the same image on two different domains?


Instead of thinking "same image on different domains", think "hidden uniquely-named single pixel image".


That’s the same thing. In order for that tracking method to work, this uniquely named pixel has to be loaded while visiting multiple sites. So it ends up being multiple domains referencing the same image from some tracker resource.


So a Facebook like button? Or a Twitter logo link?

I suspect you'll find this is _way_ more common than you expect.

(Also, if you think Goog aren't doing this with their font CDN or various javascript library caches, even (or especially) for sites without Google Analytics... I've got a bridge to sell you...)


It would be easy to check that...


The article answers this question: negligible performance hit.


The most common example I could think of (other than trackers) would be aggregator sites. If the aggregator shows an image that was originally from a destination article or if comments link to a source for some content.


Good question. I'd guess that the chance of that happening is very small. But if that optimization exists maybe it's not that uncommon?


It seems to me that the only actual solution is just to make tracking people illegal.


The only problem with that is the law-makers themselves would kill to have such an excellent tracking apparatus.

The likes of Amazon, Google, and Facebook are the envy of the Government, if anything.


That's what the GDPR did: any and all tracking is illegal, unless explicitly allowed by the user, for a specific purpose, with a specific list of third parties receiving the data, with the default always being off, and every single purpose and third party has to be explicitly clicked to be allowed. And the user has to have no repercussions from saying no. And no has to be the default. And the dialog has to be able to be ignored (in which case everything has to be denied).

Of course publishers don’t follow it and do the exact opposite: default allow everything, click every single one to say no.


I was talking about tracking being illegal. I said nothing about the user being able to allow it....


There are also companies who seemingly just blatantly ignore such regulations: https://brave.com/google-gdpr-workaround/


In Javascript, how are they able to retrieve something from the cache? Local storage, session storage, and cookies are domain-locked.


They load the image URL and observe the loading time. If it's fetched quickly, they know it was from cache. The server (controlled by the advertisers) can intentionally add delay to those image requests, which makes detection reliable.
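A rough sketch of what that check could look like, assuming a hypothetical tracker URL and an arbitrary threshold (a real tracker would calibrate it, e.g. via the server-added delay described above):

    // Guess whether an image was served from cache by timing its load.
    // "https://tracker.example/id.png" is a made-up URL for illustration.
    function probeCache(url) {
      return new Promise(resolve => {
        const start = performance.now();
        const img = new Image();
        img.onload = () => {
          const ms = performance.now() - start;
          // 50 ms is an arbitrary cutoff; server-added delay makes it reliable.
          resolve(ms < 50 ? "likely cache hit" : "likely network fetch");
        };
        img.src = url; // no cache-buster: we want the cached copy if it exists
      });
    }

    probeCache("https://tracker.example/id.png").then(console.log);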


I don't see how that helps you persist a tracking ID.

If you generate a random URL, you'll always get a cache miss.

If you use a static URL, you'll know if you have a new session or not, but that doesn't tell you what the tracking ID was.

The only thing I can imagine is the server serving several images /byte1.png, /byte2.png, etc., making them all X by 1 pixels and encoding a random value in the dimensions, assuming that's available to Javascript.

But if you encode the tracking ID in the image somehow, you don't care much whether it was cached or not, it's inherently persistent. It'd mainly be useful if you're trying to reconstruct a super cookie.
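For what it's worth, image dimensions are readable from script (naturalWidth/naturalHeight), so the decode side of that guess could look something like this sketch, with the /byteN.png URLs being hypothetical:

    // Hypothetical scheme: the server encodes one byte per image in its width.
    async function readByte(url) {
      const img = new Image();
      img.src = url;           // a cached copy keeps the originally served width
      await img.decode();      // resolves once the dimensions are known
      return img.naturalWidth; // e.g. a 37x1 image encodes the value 37
    }

    // Reassemble an ID from /byte1.png, /byte2.png, ...
    async function readId(base, n) {
      const bytes = [];
      for (let i = 1; i <= n; i++) {
        bytes.push(await readByte(base + "/byte" + i + ".png"));
      }
      return bytes.join(".");
    }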


> If you use a static URL, you'll know if you have a new session or not, but that doesn't tell you what the tracking ID was.

As Mozilla have said:

> "In the case of Firefox’s image cache, a tracker can create a supercookie by “encoding” an identifier for the user in a cached image on one website, and then “retrieving” that identifier on a different website by embedding the same image."

The identifier is encoded into the image itself on a fresh fetch of the static URL, which can then be extracted by JS (which can access pixel data, and their RGBA channel values).

When a cache-hit is detected, you know you have an identifier that correlates to user history.
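A sketch of that extraction step, assuming the tracker serves the image with CORS headers (otherwise getImageData on a cross-origin image throws) and hides the ID in the first few pixels' RGB values:

    // Hypothetical decode of an identifier hidden in pixel values.
    async function decodeId(url, nBytes) {
      const img = new Image();
      img.crossOrigin = "anonymous"; // tracker must send Access-Control-Allow-Origin
      img.src = url;
      await img.decode();

      const canvas = document.createElement("canvas");
      canvas.width = img.naturalWidth;
      canvas.height = img.naturalHeight;
      const ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0);

      // RGBA layout, 4 bytes per pixel; skip the alpha channel.
      // Assumes a fully opaque image so RGB values survive canvas readback exactly.
      const data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
      const bytes = [];
      for (let i = 0; bytes.length < nBytes && i < data.length; i++) {
        if (i % 4 !== 3) bytes.push(data[i]); // R, G, B carry the payload
      }
      return bytes.map(b => b.toString(16).padStart(2, "0")).join("");
    }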


Assuming js can retrieve pixel data, you could have the server generate unique images and use the rgb values as a unique ID. The unique image would be cached.


You don't need to worry about whether the image is in the cache or not.

If you have to hit the server on that static URL, you write a request handler that will always give you back a new image with a new ID encoded in the pixels. Think of it like dynamic page generation on the server side, but for an image instead. Every time you hit the same URL you get a different image.

On the client you can decode that ID and use it throughout your code, in network requests, etc., to track user activity.

If the image is already cached you just decode the ID and use it as described above. All the browser cares about is associating a URL with a resource: it doesn't know or care that the resource in question changes every time it's asked for.

Also, the client code literally doesn't need to care whether the ID is from an image in cache or an image returned from the server.

The server can simply tie all activity for a given ID together on the back end.

This is one way of doing it: there are probably others. I'm certainly no expert.


With some forms of caching it's much simpler: the browser sends an ETag or If-Modified-Since and the server is supposed to return 304 Not Modified to optimize the load if the cached resource is still valid.
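A minimal sketch of how that revalidation handshake can be abused, here as a hypothetical Node.js handler: the ETag itself is the tracking ID, and Cache-Control: no-cache forces the browser to revalidate (and thus echo the tag back) on every use.

    const http = require("http");
    const crypto = require("crypto");

    http.createServer((req, res) => {
      const id = req.headers["if-none-match"]; // browser echoes the cached ETag
      if (id) {
        console.log("returning visitor:", id); // correlate with server-side logs
        res.writeHead(304);                    // keep the cached "cookie" alive
        return res.end();
      }
      const newId = crypto.randomUUID();       // first visit: mint an identifier
      res.writeHead(200, {
        "Content-Type": "image/gif",
        "Cache-Control": "no-cache",           // store it, but always revalidate
        "ETag": '"' + newId + '"',
      });
      // a commonly used 1x1 transparent GIF
      res.end(Buffer.from("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", "base64"));
    }).listen(8080);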


But from JavaScript I don’t think you can see that. You just get the end result of the image being served to you. You have to infer it from timing.


https://megous.com/dl/tmp/705dc9a2477d1f95.png

For cross-origin, you'd add CORS.


I think you can just load it in a canvas so long as the image has the appropriate cross origin header [1]. So the entire attack would look like

1. On site A, make a request to eviltracker (e.g. load an image); eviltracker returns an image encoding some unique identifier. Maybe the image request contained some cookie data which the server includes as part of the image.

2. On site B, make another request to eviltracker with the same URL. The browser helpfully notices that the image has been cached, and so site B can access the information contained within that image. In such a manner, information has been transferred from site A to B. You could theoretically repeat this process again: make another non-caching request to eviltracker (maybe with some cookie set to the combined A+B info)

[1] https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabl...


I think that they put the user information in the image using something like this[1].

[1]. https://github.com/subc/steganography


Likely, they put the identifier in with steganography.

When the image is loaded, JS reads the identifier; if the image is loaded from cache, the identifier stays the same.

Edit: typo


Fetch/XHR, as long as the tracker server has CORS * enabled, right? Then they can just inspect the blob data.
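Right - roughly along these lines, assuming the tracker responds with Access-Control-Allow-Origin: * (the URL is made up; run in a module or the devtools console for top-level await):

    // Fetch the (possibly cached) tracker resource and read its raw bytes.
    const res = await fetch("https://tracker.example/id.png", { mode: "cors" });
    const bytes = new Uint8Array(await res.arrayBuffer());
    // The identifier could sit anywhere the tracker chose to put it,
    // e.g. in a trailing chunk appended after the image data.
    console.log(bytes.slice(-16));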


Only on the first visit. The first visit is impossible to optimize anyway due to bloated javascript, so webdev focuses on optimizing only repeated visits. What images do you want to cache between sites? Advertisements?


Bad actors is how we discover these vulnerabilities in the first place!


Ads are frustrating but this can lead to even more irrelevant and therefore more frustrating ads.


Some of us think being tricked into purchasing things we would easily like but don’t need is worse.


Ads were fine when they used page content as the context.

"Personalized" ads are partly a scam to devalue publishers: set a tracking cookie when a user is on an expensive-to-advertise-on website, and then serve ads to the very same user when they visit cheap sites. I'm dumbfounded why reputable publishers put up with this.


I would rather see irrelevant ads because then I am less likely to purchase something online


Per-site caching negates the principal selling point of centrally-hosted JS and resources, including fonts. The convenience remains, but all speed-related perks (due to the resources being reused from earlier visits to unrelated sites) are no more... which is actually great, because it reduces the value that unscrupulous free CDN providers can derive from their "properties".

It also means that I can remove fonts.google.com from the uBlock blacklist. Yay.


That idea of having JS files hosted elsewhere always struck me as a Girardian scam (e.g. "everybody else does it"), and I always got voted down when I showed people the reality factor.

Nobody seemed to think it was hard to host a file before this came along, just as nobody thought it was hard to have a blog before Medium.

Of course this creates the apocalyptic possibility that one of these servers could get hacked (later addressed with some signing) but it's also not easy to say you're really improving the performance of something if there is any possibility you'll need to do an additional DNS lookup -- one of the greatest "long tails" in performance. You might improve median performance, but people don't 'experience' median performance in most cases (it goes by too fast for them to actually experience it), they 'experience' the 5% of requests that are the 95% worst, and if they make 100 requests to do a task, 5 of them will go bad.

People are miseducated to think caching is always a slam dunk. Sometimes it is, but often it is more nuanced - something you see in CPU design, where approaches range from "build the best system you can that doesn't cache" (and doesn't have the complexity, power and transistor count of a cache - like the Atmel AVR8) to quite a bit of tradeoff between 'computing power' and 'electrical power', and multiple cores that may or may not see a consistent view of memory.


> Nobody seemed to think it was hard to host a file before this came along, just as nobody thought it was hard to have a blog before Medium.

Huh? Who ever said the main point of a CDN is to make things easier? It's always been in order to provide a faster end-user experience.

> ...but it's also not easy to say you're really improving the performance of something if there is any possibility you'll need to do an additional DNS lookup -- one of the greatest "long tails" in performance.

But common CDNs will virtually always have their IP address already cached while you're browsing anyway.

Caching certainly has nuance to it as you say, but I think you're being particularly ungenerous in claiming that CDN's are a scam and that you're representing "reality".

Businesses measure these things in reality with analytics, and they also almost always analyze the worst 5% or 1% of requests as well, not just the "median".

CDN's are a big boost to performance in many cases. Or at least, until now (for shared files). You shouldn't be so dismissive.


This. If you are loading some scripts that are actually required for your app or page to work right, why would you get them from someone else's infrastructure? Terminal laziness? Or is the assumption that XYZ corp has more incentive than you do to keep your page working? This never made much sense to me except for developer toys & tutorials.


It makes sense from a $$ and resource usage standpoint. I have to assume the best here and believe that the people arguing that there is no merit to CDN hosting of shared libraries are all forgetting the two most important things a business must consider.

Every byte sent will cost the business. If you can save that 2MB per user per cache life, you pay that much less on the internet bill for your hosting.

Every byte sent uses up some of your limited bandwidth while it is being sent. If your site is 10KB and you rely on 2MB of javascript libraries and fonts, offloading that 2MB is quite a significant reduction in resource usage.

These above two views seem vastly more important than terminal laziness, up-time management, etc.


> If you can save that 2MB per user per cache life, you pay that much less on the internet bill for your hosting

Ah yes - I remember those _dark days_ of being a shared webhosting customer over a decade ago and stressing about breaking my 500GB/mo data transfer limit.

Today, Azure's outbound data costs on the order of $0.08/GB, so 1MB is $0.000078125, so the cost of 2MB of JS is $0.00015625.

Supposing you have one million new visitors every month (i.e. nothing's cached at their end so they'll download the full 2MB) - those one million visitors will cost you $156.25 in data-transfer.

Compare that to the immediate cost to the business of paying their SWEs and SREs to trim down the site's resources to a more realistic few-hundred-KB, supposing that's a good 2-3 week project for 3-4 people - assuming a W/Best Coast company, that's ( $250k / 52 ) * 3 * 4 == $57,000.

From looking at those numbers, there is absolutely no business case in optimizing web content - it's now significantly cheaper (on paper) to have a crappy UX and slow load-times than it is to fix it.
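Spelling out the arithmetic (same assumptions as above: $0.08/GB egress, 2MB of JS, one million uncached visitors, a 3-week effort by 4 people at ~$250k/yr each):

    const egressPerGB = 0.08;                     // assumed egress price
    const perVisitor = (2 / 1024) * egressPerGB;  // 2MB of JS  -> $0.00015625
    const bandwidthBill = 1_000_000 * perVisitor; // -> $156.25 per million visitors
    const engineering = (250_000 / 52) * 3 * 4;   // 3 weeks x 4 people -> ~$57,692
    console.log({ perVisitor, bandwidthBill, engineering });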


> If your site is 10KB and you rely on 2MB of javascript libraries and fonts,

... then you're doin' it wrong.


> ... then you're doin' it wrong.

100%. It's mind-blowing that this could possibly be considered without batting an eyelid.


Browsers can check "subresource integrity" to guard against hacks of third-party services.

https://developer.mozilla.org/en-US/docs/Web/Security/Subres...


Yeah but you, the developer, need to provide the hash of the script being downloaded, work that’s easy to miss.


In case the hash is provided, caching across websites could be turned on again, to avoid the possibility of it being used as a supercookie. The website would have to know the hash of the supercookie before loading the resource. Except if a candidate set of possible users can reasonably be downloaded to the client, maybe sorted by time of last access and proximity to the geo location of the IP address of the last access...


Yes. Pretty much.


It's not about being hard, it's about being convenient. Convenience is important. Even trivial convenience.


Wasn't that also because, before HTTP/2, browsers limited the number of concurrent requests to a domain?


At some point we were retrieving some javascript (jQuery) and font dependencies from a CDN. In the end, we had to change that and include everything in our web application, as we saw all kinds of problems... like someone trying to use it behind an intranet, or a development version not working fine when accessed behind a weird VPN. Or that funny moment when a CDN went down and a lot of our sites simultaneously got half broken.


It doesn't seem like centrally-hosted resources ever centralized enough to be all that useful. Even for sites that are trying to play ball, there are multiple CDNs so everyone has to agree which one is the standard. Plus everyone has to be using the same version of each resource, but in practice most js tools release so often that there will always be many different versions out in the wild.

On top of that a lot of the modern frontend tools and best practices are pushing in the other direction. Out of the box, tools like webpack will bundle up all your dependencies with your app code. The lack of JS namespacing and desire to avoid globals (which is pretty well-intentioned, and generally good advice) means that your linter complains when you just drop in a script tag to pull a library from a cdn instead of using an es6 import and letting your bundler handle it. Typescript won't work out of the box I don't think. Your integration tests will fail if the cdn is down or you have a network hiccup, as opposed to serving files locally in your test suite. And on and on. This is just anecdotal, but I haven't seen most teams I've worked with value the idea of centrally-hosted JS enough to work around all these obstacles.


> Per-site caching negates the principal selling point of centrally-hosted JS and resources

It doesn't, or more correctly, the benefit wasn't really a thing in most cases.

I will not start the discussion here again, but in previous Hacker News threads on this topic you will find very extensive discussions about how, in practice, the shared caches often didn't work out well for all kinds of reasons, how you still have a per-domain cache so it mainly matters the first time you visit a domain and not on later visits, and how the JS ecosystem is super fragmented even when it's the same library, etc.

> cause it reduces the value that unscrupulous free CDN providers can derive from their "properties".

Not really, the value of a CDN is to serve content to the user from a "close by" node in a reliable way, allowing you to focus on the non-static parts of your site (wrt. traffic balancing and similar).

Shared caches technically never did matter that much wrt. CDNs (but people IMHO wrongly used them as a selling point).


Plus, FF can preserve the value by allowing cross domain resource caching as long as the request specifies a hash.

That negates the super cookie use case, but still lets you eg. load Jquery from a shared CDN.

You get a free security upgrade to go with it.


> Plus, FF can preserve the value by allowing cross domain resource caching as long as the request specifies a hash.

This does not work, as you can still have the same timing attacks; the hash only helps with source integrity from CDNs, not with cache-based timing attacks.

Still, what should be possible without opening timing-attack channels is to de-duplicate the storage of resources (though not easy and likely not worth it for most use-cases). So you would only lose the usually small speed boost on load time when you open a domain the first time.


> It also means that I can remove fonts.google.com from the uBlock blacklist. Yay.

If you are downloading fonts from Google, Google harvests your IP and likely the referring site from the request. Even if your browser doesn't send the referrer, many sites have a unique enough font-fingerprint that Google can figure out where you are.


Would this bypass most Google Font tracking? An extension that makes Google Font requests on behalf of the requesting site and caches the response indefinitely and for all sites, somewhat like LocalCDN/Decentraleyes, but on demand.


That would prevent Google from getting every request.

FWIW, I'm not sure how much of an issue this even is. My comment was a hypothetical. Sadly the way Google/ Facebook/ etc operate, I just assume that whatever I think of, they've already done it plus 1000s of other things which would never occur to me.


Per-site caching is the new norm. Shared caches are vulnerable to timing attacks that infer your web history. It’s a shame but that’s just the reality of caching. Shared caches were never as useful as claimed due to the large numbers of versions of most resources.


LocalCDN is an extension I would recommend, both for privacy and performance reasons.

https://www.localcdn.org/


Decentraleyes is what I use. I assume they're similar https://addons.mozilla.org/en-US/firefox/addon/decentraleyes...


LocalCDN is an updated fork of Decentraleyes.

Decentraleyes hasn't been updated in ages, has few assets and its assets are massively out of date.

https://git.synz.io/Synzvato/decentraleyes/-/tree/master/res...

vs

https://codeberg.org/nobody/LocalCDN/src/branch/main/resourc...


My Decentraleyes was updated on November 5th; that's not that long ago. The LocalCDN extension also has a note that Mozilla doesn't regularly monitor it for security.


LocalCDN is much, much more actively developed than Decentraleyes.

https://codeberg.org/nobody/LocalCDN/commits/branch/main vs https://git.synz.io/Synzvato/decentraleyes/-/commits/v2.0.15

I'd imagine if LocalCDN got more popular than Decentraleyes, it would probably get Mozilla's seal of approval as a "recommended" extension. Then again I'm not entirely sure what their approval process for that looks like. Currently Decentraleyes has about 100x the userbase.


Wow. This is really neat. I’m going to give this a shot!


I wouldn't say all speed related perks, CDNs for resources like that are still probably wider (and therefore closer) and faster than whatever is hosting your stuff for most sites. Overall it is a pretty big cut out of the performance selling point though.


This sounds overdramatised: caching once for a website you frequent still works just fine, with all the speed benefits on subsequent loads. You're definitely not going to be noticing that the browser now has to build a few more caches than "just the one".


IIRC browsers (chrome and/or safari?) haven’t cached assets like that cross origin for years.



Use uBlock Origin, Multi Account Containers, Privacy Badger, Decentraleyes and CookieAutoDelete with Firefox. Make sure you aggressively clear cache, cookies, etc., periodically (with CookieAutoDelete). You’ll probably load the web servers more and also add more traffic on your network, but it will help protect your privacy since most websites don’t care about that. When websites are user hostile, you have to take protective measures yourself.


Doing this will make it trivially easy to fingerprint and track you on the web, as the set of people who use non-defaults like this list is 0.000001% of the total possible user space for their area, and your IP address probably only changes rarely or never

A better way to protect yourself is to use a browser with tracking protections on by default, and leave the settings alone. You may see a few more ads but you’ll be a lot less tracked as a result.

If personal convenience is the priority, then of course Adblock and so on to your heart’s content, but if not being tracked is the priority, reset your browser settings to default and remove weird addons that your neighbors don’t use.


I don't see how using containers in Firefox or auto-deleting cookies would have any negative effect here.

None of the cache deletion/isolation addons should inject any Javascript into the page or alter headers in any way, so they shouldn't be detectable to sites you visit. So in terms of unique behavior, all that site isolation means is that you're going to hit caches more often and be missing cookies.

I mean, sure, a website can recognize that you don't have any unique cross-site cookies to send them and make some inferences based on that, but the alternative is... having a unique cross-site cookie. So it's not like you're doing any better in that scenario.

I can see an argument against a few of these like DecentralEyes, since they change which resources you fetch at a more micro-level. But uBlock Origin and Multi Account Containers seem like strict privacy/security improvements to me.

UBlock Origin especially -- if you care about privacy, you should have that installed, because outside of very specific scenarios your biggest threat model should be 3rd-party ad-networks, not serverside 1st-party timing attacks/fingerprinting. No one should be running Chrome or Firefox without Ublock Origin installed.


Auto-deleting cookies or other content in a way that doesn't resemble Safari ITP would indicate that a device at your IP address is constantly losing tracking cookies in an uncommon manner, theoretically increasing your trackability.

Websites can only make inferences based on the absence of unique cross-site cookies if you are configuring your browser in non-default ways. If all Firefox 85+ users are partitioning, then any inferences drawn from that behavior do not increase your trackability — and it could well decrease it, as those Firefox 85+ users will be joining the swarm of Safari users whose browser has already done the same sort of partitioning for a couple years.

Multi Account Containers are an oddity, and alone they would not be particularly distinguishable from a multi-user computer (which, at a home residence, could be unusual; many people don't have User Accounts on a shared device). However, when combined with cross-container tracking infection (such as URL parameter tags designed to survive a transition to another container, e.g. fbclid or utm_*), it's possible to identify that a user is using containers, which is a very rare thing and not available by default, thus increasing risk of being tracked.

UBlock Origin allows far too much customization for me to prepare any clear reply there. I imagine it is possible to run UBO with a ruleset that only interferes with requests to third-party adservers, without letting the first-party know that this is occurring. I doubt, however, that a majority of UBO users are running in such a circumspect mode. Adblocking often requires interfering with JavaScript in ways that are easily visible to the first-party (who has a vested interest in preventing ad fraud).

Fingerprinting is a known defense against fraudulent clicks, so there's a lot to puzzle over there. But I definitely don't like to take active steps to make myself stand out from others. I'm annoyed that I'm tracked a little on the web, but I'm indistinguishable from the general pool of "users with default browser settings" today. That's a type of protection that addons can't provide. I'm not wholly certain what I think yet, but happily the browsers continue advancing the front of protection forward, so maybe by the time I decide it won't matter anymore. YMMV.

ps. I'm glad to see your much more nuanced consideration of this balance, and I wish that more took your careful approach here when recommending "privacy" setups to others.


> Websites can only make inferences based on the absence of unique cross-site cookies if you are configuring your browser in non-default ways.

But if the defaults don't block those cookies, then the alternative is that you have unique cross-site cookies, which are an instant game over. Having a site make inferences about you is preferable to having a unique cross-site cookie set that can perfectly identify you across multiple websites.

> [...] and I wish that more took your careful approach here when recommending "privacy" setups to others.

Similarly, I appreciate your approach and concerns, and you are correct that browser uniqueness is a valid concern, one that many people don't consider. But I fully stand by my advice. Your first priority as a user who cares about privacy needs to be blocking unique cross-site cookies. If you have them set, it's just game over, it doesn't matter whether or not someone is fingerprinting you somewhere else.

Your priority list should be:

A) block cookies and persistent storage that can track you across sites.

B) block tracking scripts from ever executing at all.

C) keep your browser from standing out.

D) etc...

uBlock Origin is the easiest, simplest way that you can make progress towards addressing A and B. To your overall points about stuff like advertising networks looking to prevent fraud, this is exactly why it's important to block advertising networks; they're the low hanging fruit that's most likely to be trying to fingerprint you at any given moment. To your point about it standing out that you don't have certain query params set, those query params are unique identifiers and referrers. If you don't delete them it's game over, you have been identified. You can't blend into the crowd if you have a tracker attached to you.

There are very few one-size-fits-all approaches to security/privacy, but I fully stand by the belief that virtually every single person running Chrome or Firefox should have uBlock Origin installed. I don't have much nuance or any caveats to add to that statement: block unique identifiers first, worry about fingerprinting second. You don't need to worry as much about your browser standing out if you block the majority of tracking scripts from reaching your browser in the first place, and in most (not all, but most) cases you should be more worried about 3rd-party tracking on the web than 1st-party tracking. That's just where the current incentives are right now, and it's important that we calibrate our threat models accordingly.


The more people that install these, the less unique you become. Also, if you are disabling JS execution via uBlock, they aren't getting this list. What you are suggesting is essentially security by obscurity, and that has already failed, since it is highly unlikely my neighbor's browsing stream looks anything like mine.

What these plugins do is make the tracking job more difficult for the adtech guys, and the more complex these systems become, the higher the costs to the tracker and the higher the likelihood they screw up. It's defense in depth.


Every browser already has a unique fingerprint. uBlock origin does a ton to improve privacy, it’s foolish not to use it just to avoid fingerprinting.


If you live in the Bay Area chances are plenty of others do the same thing.


And for the rest of the globe?


Agree, but substituting multi-account containers with temporary containers https://addons.mozilla.org/en-US/firefox/addon/temporary-con...


This looks excellent. I've wanted something like this before but wasn't aware of this extension. Thanks for sharing :)


Cookie auto delete is fairly useless. It can’t delete supercookies. Whereas temporary containers takes care of everything.


Temporary Containers is really nice, but how can you replace MAC with it? I tried it before and couldn't assign some domains to "permanent" containers.

E.g. I'd like to use temp containers all the time, except for some sites like YouTube, which I'd like to always open in a YouTube container.


I've combined it with Firefox's Multi-Account Containers. It works exactly as you've described, when used in tandem with temporary containers.

https://addons.mozilla.org/en-US/firefox/addon/multi-account...


Oh nice! I've been wanting a container extension that just works on every site by default.


Or enable private browsing all the time. You'll have to log into your accounts every time you open your browser, but that's not really a big deal with a decent password manager.

[1]: https://support.mozilla.org/en-US/kb/how-clear-firefox-cache...


Can you be tracked within private browsing mode though? For instance, in Chrome private tabs, I know that if you log in to something and then open a new tab, that tab retains the cookies from the private session until you close all private tabs. Is this the same with Firefox? I'm hesitant to install yet another extension, but I'm wondering if the one mentioned elsewhere in this thread will fix it, if that is the case with Firefox.

https://addons.mozilla.org/en-US/firefox/addon/temporary-con...


No, in Firefox I've seen that each private window is a separate, unrelated container.


Looks like I was wrong. All private windows and tabs belong to the same temporary container and share cookies. The Temporary Containers extension remedies this.

[1]: https://medium.com/@stoically/enhance-your-privacy-in-firefo...


Temporary Containers > any kind of auto-delete hacks


Unless you're clearing cache, local storage, and HSTS, they can still track you. And we haven't even gotten to the other things that can keep state that don't reside in the browser. Not to mention fingerprinting the device itself; I've seen fingerprinting of minute differences between CPUs, since they aren't all identical.


Any reason to not go all-in and just use Tor? That's what I've been doing lately, although I'm not a web engineer, so I may not be doing the optimal thing.


I've found Tor to be too slow for everyday use. Plus you'll regularly get hit with (sometimes near-impossible) recaptchas.


Oh yes, there are definite downsides. There are plenty of sites that won't show you anything. I've taken the tack that there is more on the net than I could possibly view in a million lifetimes and just move on. I could see some people taking exception here. And yes, I don't bother with Google's captcha, since if that shows up, it'll most likely never work, even if you keep trying. Some of the others will let you pass after one successful test. I don't find Tor to be too slow though.


You forgot NoScript.

uBlock Origin with privacy lists negates the need for Privacy Badger.

Decentraleyes is neat but I've found multiple sites it breaks.


Why NoScript? uBlock Origin in medium or hard mode can be used instead.


I use umatrix... as long as it lasts.


uMatrix-style dynamic filtering is built into uBlock Origin now; just enable advanced mode in uBlock.


thank you, that is great news!


Why not use Brave? It has all of this, with Fingerprint protection turned on by default.


Brave doesn't have the features offered by those extensions, it doesn't have anything equivalent to multi account containers, it doesn't have DNS emulation (unless you install Decentraleyes) and it doesn't auto delete cookies (you still need to install Cookie Autodelete). The built in ad blocker is not as advanced as uBlock Origin and that's why I installed the latter as an extension (I turned off the built in one). Anyway IMHO the biggest limitation currently is the lack of containers, because it needs to be built into the browser, there is no 3rd party extension that can give you that.


Firefox’s Tracking Protection blocklist blocks many known fingerprinting scripts by default.

Firefox also has an active fingerprinting protection mode that spoofs the unique values returned from some JavaScript APIs (such as locale, time zone, screen dimensions, WebGL), but this feature flag is currently buried in about:config because it can break websites. How to enable fingerprinting protection anyway:

https://support.mozilla.org/kb/firefox-protection-against-fi...


Because cryptocurrency is a scam.


Then don't use the cryptocurrency part.. it isn't all-or-nothing.


With Brave you will still see personalized ads on some sites, which I do not want to see.


If you’re doing all that it seems like a lot of cognitive dissonance to keep using the internet. It is optional, you know!


The internet isn't optional in the modern world, really, you know!


Twitter uses these types of cookies. They even use cookies that do not contain any reference to the Twitter domain. It is how they track people who have been suspended on the platform.


I haven't looked into Twitter's cookies specifically, but if I understood you correctly, I think you're misinformed about what the domain of a cookie does. It's normal to not specify the domain because that's the only way to exclude subdomains, which is important for security.


I think he means that they use another domain (third-party cookies), not that they have no domain at all


Browse twitter through nitter and all these problems go away, and it's actually a usable interface on top.


... and if you use the FF extension "Privacy Redirect", then all twitter links are redirected to nitter, which is sweet.


Nitter looks pretty cool. Didn’t know about that one. Does something like that exist for Facebook as well?


That's not how cookies work. See mdn which says about the domain flag: "If omitted, defaults to the host of the current document URL, not including subdomains.": https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Se...


What other domain(s) are they setting cookies on? I'm not seeing any (but I am not logged in).


citation?


Seconded - I'm not normally the [Citation Needed] guy but this claim deserves either an explanation or a link to an article that gives one. If it's true it'll have a ready audience willing to amplify it, if it's false it should disappear


I wish there were a way to see which root site set those cookies. For example, I wish we could see that twtracker.com supercookies were set in some iframe on twitter.com.


Will switch to Firefox because of this; absolutely disgusting.


Safari, Chrome, and Edge already partitioned the HTTP Cache by site; Firefox was the last major browser not to. It's great that Firefox is doing this, but it's not a differentiator.


The way they phrased it in the post implies otherwise?

> These impacts are similar to those reported by the Chrome team for similar cache protections they are planning to roll out.


See https://developers.google.com/web/updates/2020/10/http-cache...

"The feature is being rolled out through late 2020. To check whether your Chrome instance already supports it: ..."



Worth pointing out that Chrome has been partitioning cache by domain since chrome 86 (released Oct 6th 2020).

https://developers.google.com/web/updates/2020/10/http-cache...

Does anyone know if these protections go further or differ significantly?


Also that Safari - so including iPhone - has done it in some form since 2013 https://bugs.webkit.org/show_bug.cgi?id=110269 and all the time since at least 2017 https://twitter.com/cramforce/status/849621456111624192


That was the first thing that came to mind when I read this article. It looks very similar, though Firefox seems to be addressing more than just resource caching, like addressing the HSTS tracking scheme. Also, I would not be surprised if Chrome eventually did partitioning for anything but Google resources; they surely won't do anything that hurts their surveillance schemes?


Tracking has become so bad that it seems like users have to spend money (more bandwidth) to protect themselves from it.

Crazy and sad to see where we've come :\


Nobody would accept interconnected face scanners in every building they walk into but online it's somehow okay.


> Nobody would accept interconnected face scanners in every building they walk into but online it's somehow okay.

There are cars with license plate scanners that wardrive the world. They scan plates in shopping centers, businesses, and even apartment buildings, so that, on the off chance law enforcement, repo businesses, or anyone else wants to know where your car is parked, they can track you.

People accept that. Or rather, most are blissfully unaware that it happens.

If your grocery store added face tracking to their existing security cameras, would you even know it? Would you know if they sold that data?


People accept that in the USA. Privacy laws in most countries make the free interchange of such information illegal


Fair point.

The Wild West mentality in the US kind of sucks when technology allows even small businesses the ability to screw over large numbers of people.


Even worse: foreign powers get the ability to screw over a whole country.


> Or rather, most are blissfully unaware that it happens.

That is the problem.


Amazon Go disagrees with you. We're moving that direction and I think the pace will accelerate.


You know what is surprising? A lot of people will easily accept those face scanners too :/


A lot of people accept that which is beyond their control. That doesn't mean they are Ok with it, just that they don't know how to do anything about it or often that it's even happening.


Wasn't there an article about paying with your face around here just a bit ago? People clearly don't just tolerate this, but embrace it.

Only people from places where it's too late to go back (like China) are aware of the dangers of these systems, but they can hardly warn the rest of us and when they do, we generally don't listen as "something like that surely wouldn't happen in my free country".

It would seem that people only get slightly spooked when a government does something that could impact their privacy (even when it actually doesn't - see the recent covid tracking app backlash in basically every country), but when private companies do it, they eat it up happily.


> Wasn't there an article about paying with your face around here just a bit ago?

There's not much risk to 'pay with your face' for Apple/Google/Samsung pay given it's all on-device biometrics that never leave the phone, but a similar situation is when Google paid $5 to people willing to submit their face to help with facial recognition training in the then-upcoming Pixel 4 phone.

https://gadgets.ndtv.com/mobiles/news/google-pixel-4-usd-5-f...


It wasn't about that, it was about cameras and screens mounted on kiosks that you would just look at and make a hand gesture to pay. Not sure who was doing the processing, but it certainly wasn't anything the users owned/controlled.


> People clearly don't just tolerate this, but embrace it.

If there were an opt-in way of doing this, it wouldn't bother me. Similarly with online tracking, if it were opt in only, it wouldn't bother me.

What is frustrating is the lack of transparency or ability to control who and where my data is collected.


Maybe not yet, but we didn't arrive in a day at the current state where, e.g., London has an estimated 691k registered CCTV cameras [0] (and many hundreds of thousands more unregistered, as you don't need to register ones that point only at your own property). Note that a lot of those are already interconnected and linked to recognition systems: TfL and various borough council cameras, deployed as part of an anti-"serious crime" initiative, are an example [1].

Private exercise of the same technology is merely deterred, but not stopped by GDPR (especially now that UK is "happily gone" from "EU overregulation"...).

And of course that ignores China which has cities that have both more total(Beijing, Shanghai) and more per-kilopop (Taiyuan, Wuxi) cameras than London.

[0] https://www.cctv.co.uk/how-many-cctv-cameras-are-there-in-lo... [1] https://www.nytimes.com/2020/01/24/business/london-police-fa...


Which is more annoying for you: today's trackers, or the early-2000s popups on top of IE?

Or remember adware on Windows XP, and how many antivirus tools were advertised to eradicate it?

* It's a hilarious comparison, but I found it amusing.


From a purely web browsing experience the first iPad 'should be' powerful enough to browse ANYTHING out there these days. But it can't. The last few models will increasingly have the same issues as the sheer volume of muck and cruft that's included with the advertising gack just continues to explode.

I'm definitely of the opinion that our web browsing devices are marketing tools that we are allowed to use for media consumption.


The first iPad sucked a whole bunch. Only 256mb RAM especially hurt. But I hear what you’re saying.


I beg to differ. Of course if you compare it with today's specs it sucks... it's been more than 10 years since launch! I can still use my iPad 1 to watch Netflix and play some old games I like (e.g. Carcassonne). The battery still holds up pretty well. I would say that the iPad 1 rocked, and should be able to browse today's web... except it can't, because of the amount of cruft that is pushed at us nowadays.


I loved mine for at least a year and was quite happy for another year. Some sites worked awesome, others sucked hard due to crazy payloads.

I blame shitty sites more than Apple's architecture :(


If it still got software updates to Safari, then maybe. But also, browsing the web on my iPad Air gen 1 is pretty painful.


"a tracker can create a supercookie by “encoding” an identifier for the user in a cached image on one website, and then “retrieving” that identifier on a different website by embedding the same image. To prevent this possibility, Firefox 85 uses a different image cache for every website a user visits. That means we still load cached images when a user revisits the same site, but we don’t share those caches across sites.

In fact, there are many different caches trackers can abuse to build supercookies. Firefox 85 partitions all of the following caches by the top-level site being visited: HTTP cache, image cache, favicon cache, HSTS cache, OCSP cache, style sheet cache, font cache, DNS cache, HTTP Authentication cache, Alt-Svc cache, and TLS certificate cache."

Clever !


> In fact, there are many different caches trackers can abuse to build supercookies. Firefox 85 partitions all of the following caches by the top-level site being visited: HTTP cache, image cache, favicon cache, __HSTS cache__, OCSP cache, style sheet cache, font cache, DNS cache, HTTP Authentication cache, Alt-Svc cache, and TLS certificate cache.

(emphasis mine)

This has negative effects on security, as has been pointed out previously by others: https://nakedsecurity.sophos.com/2015/02/02/anatomy-of-a-bro...

Imagine a widely used legitimate non-tracking resource, say, a shared JS or CSS library on a CDN. Currently, if that CDN uses HSTS, no matter how many independent websites incorrectly include said resource using an http:// URL, only the first request is MITMable, as every subsequent request will use the HSTS cache.

However, now, every single site will have its own separate HSTS cache, so every single first request from each site will independently be MITMable. Not good. This makes HSTS preloading even more important: https://hstspreload.org/

(Good news, if you have a .app or .dev domain, you're already preloaded.)
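
For reference, the response header a site needs to serve before it can go on the preload list looks roughly like this (hstspreload.org documents the exact requirements; the max-age here is two years):

    Strict-Transport-Security: max-age=63072000; includeSubDomains; preload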


Aren't most CDNs (at least, the ones likely to have popular resources on several sites) using the hsts preload list already?


Not all CDNs are HSTS preloaded, and this affects linking too. Pick any random popular site that is widely linked but not preloaded, and now it's more vulnerable.


A good portion of things are preloaded anyway, I don't see an issue with it.


Are there any plans for complete partitioning?

I'd like to see a point where browsing on two different websites are treated as a completely different user. Embeds, cookies, cookies in embeds, etc.


That's probably Firefox's own Firefox Multi-Account Containers[0]. Groups caches/cookies into designated categories for each tab (personal, work, shopping, etc.), with smart recognition for assigned sites.

[0] https://addons.mozilla.org/en-US/firefox/addon/multi-account...


Someone should do a study on the performance impacts of using something like this on all sites for various kinds of "typical" web browsing profiles. I'm honestly guessing a lot of the losses would be in the noise for me personally.


There is an additional Firefox extension that integrates with multi-account containers, Temporary Containers. This is highly configurable - I have it create a new container for every domain I visit, with a couple of exceptions that are tied to permanent containers.

I run that on my personal devices.

At work, there is so much SSO that the number of redirects involved means temp-container-per-domain breaks all sorts of workflows, so I go without on the work machine.

I notice no major difference between these two configurations, although I'm sure that there would be things that are measurable, though imperceptible.


I've had first party isolation turned on for possibly a couple of years now (certainly since before the pandemic) and it does break a small number of sites but nothing I particularly care about. Except that one internal tool that I've taken to loading in Chrome :P.

I don't recall the last time I had to temporarily disable it to allow something to work.


Have you tried Temporary Containers[0]?

I use it to automatically open every new tab in its own temporary container.

[0] https://addons.mozilla.org/en-US/firefox/addon/temporary-con...


I believe the 'First-party isolation' feature does this, but you need to enable it from about:config, and even then, I'm not sure if it is complete or bug-free.


This is called First-Party Isolation, a key principle of the Tor Browser and an optional preference in Firefox.


I'd like to see something like the Firefox container extension automatically open a new container for every unique domain name. It could get tricky for e.g. federated logins, so I'm not 100% sure what the implementation would look like. But it'd be nice to have the option.


The Temporary Containers addon[1] does this. Combined with the usual Multi-Account Containers "always open this site in..." mechanism you can have some sites always open in a single container, but all other sites open in temporary containers that get deleted shortly after you close their tab.

[1] https://addons.mozilla.org/en-US/firefox/addon/temporary-con...


For clarity - the workflow is basically that all sites would be temporary containers, except sites you explicitly set to be managed by Multi-Account Containers?

edit: I'm trying this out, seems to work nicely - but assigning all the sites that I want permanent state on to different account containers is a bit of a chore. Feel like I'm doing something wrong there.

But the temporary containers are working great


I don't want the containers to be transient. I want to be able to persist session cookies and local settings.


I commented on the main post, but First Party Isolation is exactly what you want, and breaks relatively few websites (and there's an extension to turn it on/off if you do use a website it breaks).


privacy.firstparty.isolate :)


This is definitely a step in the right direction. The problem isn't the browser's ability to run code such as JS, which makes it possible to create things like supercookies. The problem is that web pages are effectively applications that are semi-local (they can bind to local resources such as cookies, caches, storage, and connect to local peripherals), and the security model is very different between viewing HTML documents and running untrusted apps.

No good comes from downloading random, untrusted native applications from the net, installing them locally, and trusting these not to infect your system with malware. No good comes from loading random, untrusted applications into your browser either, and trusting these not to infect your browser, either.

Basically everything on the web should not only be sandboxed with regard to local system (this is mostly established) but also be sandboxed per site by default. Hidden data, or anything that's not encoded in the URI, should only move between sites at the user's discretion and approval.

I've had personal plans to ditch my complex adblocking, multi-account/temporary container and privacy setup in favour of a clean-slate browser that I launch in a file-system sandbox such as firejail for each logical browsing session (like going shopping for X, doing online banking, general browsing). Basically an incognito session but with full capabilities, as sites can refuse to serve incognito browsers. A normal browser setup, with everything wiped when the browser exits.

I have some dedicated browsers for things like Facebook where I whitelist a handful of necessary cookies and forget the rest on exit. This, however, won't clear data other than cookies. I think that per-site sandboxing would mostly solve this problem. I don't particularly care what data each site wants to store on my browser as long as it won't be shared with any other site.


The partitioning thing is terrible for people with slow/unstable connections, despite the security gains.

Is there a way to disable it? Or should I better think about installing a caching proxy to avoid the redundant traffic?


I think you're overestimating the impact of this. Most web site content these days is served from the web site owner's own domain.

It’s only if a.com and b.com have (for example) the exact same image URL (c.com/img123.jpg) embedded, and you visit both sites, that this cache partitioning will make a difference.

In essence, there's very little legitimate Internet traffic that would be affected by this change, but lots and lots of creepy spyware behaviour will be prevented.


What about JS libraries or CSS hosted by a CDN? I'm thinking jQuery, Bootstrap, etc etc. I learned that using a common CDN was the way to go because the content would likely already be in the user's cache and often not need to be loaded.


This was discussed when Chrome made this change. It makes almost no difference because to get any saving you have to have lots of websites that use the same CDN and the same version of jQuery etc. Unlikely enough to not matter.


Indeed, and the savings are fairly small even in the best case, jQuery is 28kB gzipped, a drop in the ocean of the multi-megabyte payload of most big sites these days.


CDNs are/were not only or even primarily used for caching but to minimize bulk traffic to your site and prevent hitting max concurrent per-domain HTTP request limits and HTTP/1.1 head-of-line blocking.


I see what you're saying. But, for example, all of the new DNS queries for things like jQuery and Google Analytics surely add up to something noticeable.


Statistically significant: maybe. Noticeable to humans: almost certainly not.


On a proper internet connection, you are right, but when that connection is unstable or capped, it's extremely noticeable.


You’d be better off installing a caching proxy, so that all connections from all of your devices share one cache, rather than only altering settings in one browser.

If you’re a Mac user with more than one of any kind of Apple device on your network (like, two Macs), you can install their Server app on any macOS and enable software update caching as well.


Other browsers have been doing this for years. It’s fine.


I fully agree.

I think turning `privacy.partition.network_state` off in about:config should allow reverting the change, at least.


Can anyone explain the fingerprinting issue, unrelated to cookies? Visit any one of the many sites that show you what your browser knows about you: it doesn't matter if you're using Firefox with fingerprinting blocking enabled, the site reveals a tremendous amount of information in your fingerprint. Firefox doesn't stop any of that, despite its settings that purport to do so. It's always the same information, not scrambled or randomized, from site to site.


Firefox’s default anti-fingerprinting is just a blacklist of common fingerprinting scripts.

It is incredibly difficult to make a browser fingerprint non-unique. Only the Tor browser has strict enough settings with a large enough user base to overcome fingerprinting.

If you don’t want to use Tor, try these:

- uBlock Origin (which has a larger blacklist of fingerprinting scripts)

- Enable the privacy.resistFingerprinting setting in about:config to make your browser more similar to other users with that setting enabled (but not entirely non-unique); there's a user.js sketch below

- The nuclear option: arkenfox user.js [1]. Its GitHub repo also contains a lot of further information about fingerprinting.

[1] https://github.com/arkenfox/user.js
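
If you don't want the whole arkenfox config, a minimal user.js sketch covering the prefs mentioned above (dropped into your profile directory) would be something like:

    // Minimal user.js sketch -- a tiny subset of what arkenfox sets
    user_pref("privacy.resistFingerprinting", true); // spoofs timezone, screen size, etc.
    user_pref("privacy.firstparty.isolate", true);   // partitions browser state by first-party domain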


Which actually makes sense. If you have a "zero-fingerprint" browser it will become useless, because you cannot use any advanced features other than displaying HTML.


Brave's method of slightly randomizing the metrics gets around that. They call it farbling.


What I mean is, the fingerprint that is sent to any of these sites accurately describes my machine, and FF never attempts to hide or scramble that information despite its anti-fingerprint setting.


Firefox is so good.

It's a continual source of amazement for me that a majority of HNers are using a browser made by the largest data gobbler in the world, instead of one that actually tries to prevent spying on users.


It's probably one of the most obscure reasons, but I keep Chromium around because it's the only web browser with a JIT-backed JavaScript engine on ppc64le. Firefox has to run everything interpreted, which is actually fine for most sites, but bogs down on JS-heavy web app type things.

On a much less niche side of things, a lot of web apps like Teams, Zoom, and probably many others are only fully functional on Chromium, thanks to WebRTC specifics and some video encoding stuff that's only on Chromium. Don't know the details, but things like video and desktop streaming are limited to Chromium.

That could very well be an artificially enforced restriction, but I don't think it is. I think firefox is moving towards feature parity with Chrome on this one, I hope so anyway.


Somewhat ironically, Google Meet works very well for video streaming and desktop sharing on Firefox. So I think Firefox isn't missing anything.


Slack calling doesn't work on FF either. This + Teams + Zoom is a big gap, especially in these COVID times.


It's kind of sad that WebKit doesn't support it…


Eh, I agree in general, but in this case, Chrome implemented network partitioning in Chrome 86, which became stable in October 2020, earlier than Firefox.


Google websites work better on chrome. Not sure if it’s because google is doing something nefarious or if Firefox is just not keeping up with google website technologies.

So, I’ve trained my brain to use chrome as an app only for google websites. When I need to check gmail or YouTube or google calendar, I use chrome. Otherwise I’m on Firefox or safari.

It’s worked pretty well. I found I was only really unhappy with Firefox when using google websites. No longer a problem.


It’s the latter, but I would describe it less as Firefox not “keeping up”, and more as Google deploying pre-standard protocols (like SPDY) into Chrome first, before ever documenting the protocol; let alone trying to get it turned into a standard (like HTTP/2.)

Chrome had SPDY support not just before any other web browser did, but before any open web server did—because Chrome had SPDY support before Google ever documented that there was such as thing as “SPDY.” It was, at first, just turned on as a special Chrome-to-Google.com accelerator, spoken only between that browser and that server, because only they knew it.

I don’t fault Google for this: they’re doing “internal” R&D with protocols, and then RFCing them if-and-when they turn out to have been a good design for at least their use-case with plenty of experimental data to confirm that. Which is exactly how the RFC process is intended to be used: spreading things that are known to work.

It’s just kind of surprising that “internal” R&D, in their case, means “billions of devices running their software are all auto-updated to speak the protocol, and start speaking it—at least to Google’s own servers—making it immediately become a non-negligible percentage of Internet packet throughput.” (Which is a troubling thing to have happen, if you’re a network equipment mfgr, and you expected to have some time while new protocols are still “nascent” to tune your switches for them.)


HTTP 1.1 is faster when you're not downloading megabytes of JS. I rarely browse AMP sites but when I do I'm amazed at how user hostile they are compared to a strictly filtered browsing experience.


It's also relevant from an antitrust PoV.


> Google websites work better on chrome. Not sure if it’s because google is doing something nefarious or if Firefox is just not keeping up with google website technologies.

For a number of sites like YouTube and GMail, it's because of Google. If you change your useragent to look like Chrome, you get served a JS payload that Firefox is fine with, and it is faster.

If your useragent isn't Chrome, they'll serve you a less optimised payload, but which tends to have wider support.

They seem to have made a tradeoff - one that generally isn't necessary under Firefox.


What problems do you have? I use Firefox exclusively and I'm a heavy Google app user too (laziness...), but I can't remember ever having a significant issue


I've had weird little breakages. Right now in Firefox, I am unable to search within a given Youtube channel. Works fine in Chrome.

Edit: I am a diehard Firefox user and fall back to Chrome only when I have to because of some weird breakage. One of those is editing within Atlassian's Confluence: find within a Confluence page doesn't work right in FF, and I've often had @name references messed up too upon saving. Chrome works fine.


I’m on a MacBook Pro with a discrete graphics card. YouTube never performs well for me on Firefox. It takes time to buffer the video when I skip ahead or back. And that’s with me being on Fiber internet. Same goes for Gmail. It takes longer to load emails. It’s minor annoyances that add up. For some reason, Chrome always works better whenever I switch the applications.

There’s a good chance my MacBook is not supported properly for Firefox as I’ve run into some internet threads about. But at this point, I’ve settled on this solution. It also makes me spend less time on YouTube once chrome is closed down and I’m solely on Firefox.


For some reason Firefox absolutely cannot play 720p+ 60fps videos on YouTube for me, whereas opening the same video on Edge I can play 4K 60fps videos without a single problem.


Google refuses to let Firefox have their voice typing feature.


For those down-voting, I should have added this: https://bugzilla.mozilla.org/show_bug.cgi?id=1456885


I use Gmail, YouTube, Calendar and Sheets through Firefox and never noticed a difference. What's not as good?


I’ve replied to one of the other replies above.


I forgot, stories of FF on Mac being bad are pretty common. Hope they focus on that soon.


What is exactly better? I am using FF and browse Google websites, but never noticed anything.


I replied to one of the other replies above.


I switched to Firefox for private use a year ago, but overall I find it not that good. Weird bugs, usability issues, dev tools not that great, etc. And privacy-wise, the defaults don't seem great either. There was something about containers that are supposed to prevent tracking between different domains, but if you have to actively create containers rather than having them applied automatically per domain, that's not much use, since it makes things cumbersome.


You need the temporary containers plug-in to manage it for you.

https://addons.mozilla.org/en-US/firefox/addon/temporary-con...


This is not something that should require a plugin. Each plugin is an additional source I need to trust.


The reason it is a plugin is because it's really complicated and confusing. Even as someone who has a deep understanding of web protocols I get tripped up by temporary containers sometimes when things don't work quite right.

Firefox built the core container technology, which drives their built in Facebook container (isolating Facebook from everything else). But isolating everything has a lot of weird edge cases, and I can't blame them for not supporting it out of the box.


Lot of ‘Do as I say, not as I do’.


How do you know user-agent strings of HNers? My guess would be that FF has above-average usage here, with FF topics getting upvotes regularly.

Hmm, come to think of it, does anybody know an easy Chrome-blocking trick for displaying "this page is best viewed using FF"? Might be an effective deterrent for non-"hackers" and the start of forking the web for good.


> a majority of HNers are using a browser made by

How do you know what browser the majority of HNers are using?


I'm curious: what is the browser that the majority of Hacker News users are using?


I used chrome from 2008 to about 2013. At the time Chrome was fast and their macOS experience was amazing. But you could tell that Google was focusing more and more on integrations and services and less on the browsing experience.


I have not noticed Firefox to be faster


I haven't noticed it to be slower, but I'd accept slower for the privacy benefits.


I have noticed it to be slower, and with more broken websites. I still prefer it over chrome.


I don't know about others but when I click youtube links on reddit the back button is disabled. Not sure if it's a bug or by design but I don't remember it always being that way.


It depends. For work related stuff I will always choose speed and responsiveness.


Speed, especially with a large number of tabs opened, and the Dev tools. Chrome's are the most polished by far, and it's trivial to do remote debugging on Android devices.


Firefox sends everything you type in the address bar to google by default.

Would you be able to tell the difference between stock firefox and stock chrome if all you saw was the fiddler session? I don't know, I haven't tried. I did look at a firefox session in fiddler and I was not impressed.

Pick your poison. If you configure all the settings in Firefox properly it might be acceptable. But can you just do the same in Chrome? If not, you can use the privacy-friendly Chromium browser of your choice. Most Firefox users won't take the time to configure it properly and the data will still reach the data gobbler.

Edit: an interesting comment from the other firefox thread https://news.ycombinator.com/reply?id=25916762


Even if you've changed the search engine?


Nope, I believe it also stops if you disable search hints. They send keystrokes to the search engines because that’s how you get the search suggestion when typing in the URL bar.
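
For anyone who wants to flip that by hand, these are the prefs I believe are involved (set via about:config, or in a user.js):

    user_pref("browser.search.suggest.enabled", false);  // stop asking the search engine for suggestions
    user_pref("browser.urlbar.suggest.searches", false); // stop showing search suggestions in the address bar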


What if you change the search engine in Chrome and disable all telemetry? This is the comparison we should be making.


You're stretching really hard to make them equivalent. There are a number of reasons to use FF besides telemetry.


This is whataboutism. You can talk about other reasons for using firefox if you'd like (although you'd have to mention what those reasons are.) We're talking about privacy right now and firefox does not fit the bill.


How much money is a user actually worth per year on average? And why can I not pay that amount of money and be left alone, not seeing any ads, not being tracked, not being sold?


Annual average revenue per (active) user (from North America) is about $180 for Google, $150 for Facebook, and $80 for Twitter. As you might expect, Amazon has far higher revenue per user ($700), and Apple is about $140, but they're both more like $30 when you only count their advertising revenue instead of much lower-margin retail and hardware manufacturing businesses.

Searching for "ARPU" news will give articles with new takes every time anyone publishes new quarterly numbers, but those are roughly accurate. Obviously, they can be distorted to tell whatever story you want by messing with market segmentation, time period, and what kind of revenue/profit/margin/expenses/capital you want to invoke, but those are rough numbers.

To be clear, those are first-party advertising companies, this isn't the value of a page view to a random blog with side-roll ads from some third-party advertisers/trackers. I have no idea what Taboola/Outbrain chumboxes generate other than that they both have $1B revenue and there are about 5B Internet users worldwide, which means the average user is worth $0.20 per year to them. And it's reasonable to assume the majority of their revenue comes from wealthy English speaking adults, so maybe your demographic is worth $5 or something like that.


It's astonishing that the value is so high. I use the internet on N devices for probably 12h a day and I can't imagine I'm worth even a positive amount anywhere.

I max out free tiers of OneDrive/DropBox etc, use my free minutes of build time at the dev sites, I use some social media features but I browse Twitter and reddit on custom apps that don't show any ads. I never ever click an ad in an article or search no matter how interesting or relevant.

So since I'm a net loss, that means that for everyone who is like me, there has to be someone who is an even larger gain for these companies. I have accounts with all of those services (Google, FB, Twitter) and I'm still pretty sure I provide negative revenue for all of them. So someone needs to provide the revenue I don't. It's a scary amount. My internet activity is subsidized by someone who must be doing a scary amount of clicking on sponsored Google results, or something.


I wouldn't say the amount is that high; it's like $50 per month, and that in comparison to all the value one gets out of the internet. Sure, things like Wikipedia or some YouTube channels have other revenue streams like donations that are not accounted for, and paid tiers probably subsidize free tiers, but still. And compare it to the prices of individual subscriptions for Netflix or YouTube Premium or a newspaper like The Guardian, which are each often on the order of $10 per month.


I'm still trying to imagine the way one exploits a lack of partitioning in the DNS cache...

1. It seems like client web pages cannot directly view the DNS information for a given domain name, so I would think embedding identifying information in something like a CNAME or TXT record directly wouldn't work.

2. I suppose a tracker could try to create unique records for a given domain name and then use requests/responses to/from that domain to get identifying information. But this seems highly dependent on being able to control DNS propagation. Short of my ISP trying this trick on me, I'm not really sure who else could manage it.

I'm sure I am missing things in this brief analysis. I'd love to hear what others think about this cache.


"I'm still trying to imagine the way one exploits a lack of partitioning in the DNS cache."

There's a PDF here: https://www.ndss-symposium.org/wp-content/uploads/2019/02/nd...

Basically timing based. See https://www.audero.it/demo/resource-timing-api-demo.html for a demo of what's available in the browser's navigation and resource timing API. For example, I get this on a cached reload:

domainLookupStart: 52.090000128373504

domainLookupEnd: 52.090000128373504

More detail: The PDF explains some enhancements that make it more reliable, like publishing multiple A records and watching order, etc. Also, the demo link isn't really showing what you would do...in real-life the resource being downloaded would be marked as non-cacheable so that you would be measuring "DNS lookup was cached or not" instead of "Entire Asset was cached, therefore no DNS lookup happened".
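
To make that concrete, here's a rough sketch of such a probe, assuming the attacker controls the probe hostname and sends Timing-Allow-Origin on it (otherwise the cross-origin DNS fields come back zeroed). The hostname is made up:

    async function dnsLookupLooksCached(host) {
      // Random query string defeats the asset cache, so only the DNS lookup can be "warm".
      const url = `https://${host}/probe.png?r=${Math.random()}`;
      await fetch(url, { mode: 'no-cors', cache: 'no-store' }).catch(() => {});
      const [entry] = performance.getEntriesByName(url);
      if (!entry) return null;
      const dnsMs = entry.domainLookupEnd - entry.domainLookupStart;
      return dnsMs < 1; // crude threshold: ~0ms suggests the resolver answer was already cached
    }
    // e.g. await dnsLookupLooksCached('u12345.probe.example')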


It's always timing isn't it... Thanks for those links.


DNS could respond with unique IPv6 addresses and echo back on HTTP request.

But it's more likely they just use a large set of (sub)domains and measure timing.


Unfortunately, some trackers have found ways to abuse these shared resources to follow users around the web. In the case of Firefox’s image cache, a tracker can create a supercookie by “encoding” an identifier for the user in a cached image on one website, and then “retrieving” that identifier on a different website by embedding the same image. To prevent this possibility, Firefox 85 uses a different image cache for every website a user visits. That means we still load cached images when a user revisits the same site, but we don’t share those caches across sites.

Wait, so one form of "supercookie" is basically the same as the transparent gif in an email?

https://help.campaignmonitor.com/email-open-rates#accuracy
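
Close, but a tracking pixel in an email only tells the sender "this address opened the mail". The cached-image trick stores an identifier on your machine and reads it back later from a different site. A crude sketch of the read-back side, with a made-up tracker host (pre-Firefox-85, the same cached copy would come back no matter which site embedded it):

    async function readIdFromCachedImage() {
      const img = new Image();
      img.crossOrigin = 'anonymous';              // tracker sends CORS headers so pixels are readable
      img.src = 'https://tracker.example/id.png'; // served once per user, cached "forever"
      await img.decode();
      const canvas = document.createElement('canvas');
      canvas.width = img.width;
      canvas.height = img.height;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0);
      const [r, g, b] = ctx.getImageData(0, 0, 1, 1).data;
      return (r << 16) | (g << 8) | b;            // 24-bit user ID packed into one pixel's RGB
    }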


A few days ago there was a paper posted here about favicon cache being used for tracking [1]. I wonder if cache partitioning also prevents that?

[1] https://news.ycombinator.com/item?id=25868742


Favicons are mentioned in the article as one of the caches that get partitioned now.


We need to acknowledge also that recognising the user as he moves across pages and domains is sometimes needed to provide valuable services to the user.

Therefore, I believe, browsers have to provide a voluntary "tracking" functionality: when a web page requests 3rd-party cookies, a popup is shown to the user with the cookie values, a description (as set by the owning domain), the list of domains already permitted to access the cookies and their privacy policy links, and the options Allow Once, Allow, Deny Once, Deny.

That way, instead of fighting each other, the service and the user would have a chance to cooperate. The service only needs to describe the need clearly enough.


The "problem" with that solution is that users are very willing to click any button necessary to achieve their goal, and in any dialog that prompts to allow tracking in order to achieve something else, most people will click allow.

Personally I don't think this is a problem, and people should be allowed to make that choice. But most of HN seems to disagree with me there, and feels that users need to be protected from making choices that could allow them to be tracked


It's not much of a choice when the site will find some way of forcing or tricking you into allowing tracking


Tracking is the cost. You have the choice to use the service and allow tracking, or not use the service.

Websites force you to accept tracking in the same way the store forces you to pay for your groceries


this exists on safari, edge and FF

https://developer.mozilla.org/en-US/docs/Web/API/Document/re...

https://developer.mozilla.org/en-US/docs/Web/API/Storage_Acc...

On Safari, it's basically the only way to get access to third-party cookies in an iframe since Safari 13. I wish other browsers (Chrome) would also enable this when third-party cookies are disabled. On FF, I think the rule is that you have to have interacted with the site beforehand, and then you get access automatically; failing that, you can use this API. No idea how it works in Edge.
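
In case it helps anyone, the flow inside the cross-site iframe is just a couple of promise calls; the request has to happen from a user gesture (a click) or the browser rejects it:

    async function ensureCookieAccess() {
      if (await document.hasStorageAccess()) return true; // already allowed
      try {
        await document.requestStorageAccess();            // prompts and/or applies browser policy
        return true;
      } catch {
        return false;                                     // denied: fall back to a degraded embed
      }
    }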


Wow, interesting.

So, to access 3rd party cookies I need to access a document DOM object that has that 3rd party origin? But such a document is not always available, is it...

Looks like the use cases targeted by that proposal are limited to an embedded iframe that wants to access its own domain's cookies. I was also thinking about arbitrary domains.

Like requestStorageAccess(targetOrigin, keys...)


Yeah, it's useful for e.g. embedded like buttons or comment forms on external sites or sites with user hosted content (on a different TLD)


Which valuable services? I’ve had 3rd party cookies entirely disabled for a while now, and I haven’t noticed any services break, not even cross domain logins.


Maybe your services relied on "supercookies" and were thus immune to 3rd-party cookies being disabled :) ?

An example that I can imagine is a big online shopping company that has several domains, and they want a shopping cart that works across all their domains.


A big online shopping company that moves you between several domains as part of normal use? tbh I've never seen a single site do this, so I don't feel bad saying "no, I don't think that's a legitimate reason to allow 3rd party cookies".

Besides, since they control all the domains, they can do redirects to associate your session across them easily enough. It's basically oauth at that point, which works just fine without 3rd party cookies.


not as common these days, but it's useful for services allowing people to create custom sites like blogger, where you'd want the comment form to be an iframe/embed of some sort that ties to the account of the platform, and the user-content is on a different domain.


That is when you have sign-in and communication on the server side, not placing data in the browser for tracking.


How do you suggest implementing "sign in" without setting a cookie?


Sign-in cookies are always first party. That's completely out of context here where we're taking about tracking cookies.


There's absolutely nothing except convenience preventing ad-tech companies from routing their 3rd party cookies through a proxy hosted on the first-party domain. If 3rd party cookies stop working, they'll just start having their customers set up CNAMEs on their own domains.
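
This is usually called CNAME cloaking, and it's already common: the customer delegates a subdomain of their own site to the vendor, so the vendor's cookies ride along as first-party. A hypothetical zone entry (both names made up):

    ; shop.example's DNS zone
    metrics.shop.example.   3600  IN  CNAME  collect.adtech-vendor.example.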


I’m not familiar with a distinct “sign in cookie” either. Do you mean a server side cookie / HttpOnly?


It's basically a cookie holding your session id, scoped only to the site and used only for auth purposes (or holding the session vars if you're doing client-side sessions)
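
Right, i.e. something along these lines on the login response; it identifies your session to that one site and is useless for cross-site tracking (values illustrative):

    Set-Cookie: session=opaque-random-token; Path=/; Secure; HttpOnly; SameSite=Lax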


So, just to reiterate, contrary to the comment I replied to, you are suggesting we "place data in the browser for tracking" a user's authentication state and session.


Ah, I think we're stepping on an overloaded term. I mean tracking as in "identifier connecting visits from unrelated pages used for data collection" and not "identifier used by the site you're connecting to for purpose of holding browsing session variables".


Please stop your gaslighting. You were the only one equating "place data in the browser for tracking" with all use of cookies, nobody else made that "mistake".


How else would you describe a login token? It's literally data in the browser for tracking who that user is and identifying them to the server!

All I'm doing is highlighting that it's not as simple as some of these jUsT bAN cOoKIes folks would have you believe.

Blocking technologies that are used for invidious ad-tech will make it more difficult to support legitimate use-cases. Sleazy ad-merchants like Google will move on to something else built into the browser https://blog.google/products/ads-commerce/2021-01-privacy-sa... and normal site developers will be left in the lurch.

IMHO there isn't a technological solution to this problem, the only effective answer is regulation & hefty fines that make unethical tracking also unprofitable.


Perhaps, but that mechanism doesn't need to persist in the browser across termination of execution.


If only DNT had been enforced and respected, so much effort could have been avoided. I appreciate these protections, but it’s unfortunate this whole cat and mouse game is necessary.


Who would enforce it?


Who enforces anything? The government. Something similar to health records is the usual example.

I suppose coordinated action by citizens would have the same effect, but online privacy is such a complicated obfuscated issue that will never happen.


So eventually we will have private browsing for every site so that there is no possible cross pollination? How far can that be taken?

Or am I off in the weeds here about how this will play out?


"Trackers and adtech companies have long abused browser features to follow people around the web."

Is this a confession?

Browsers, including Mozilla, have continually designed and kept those features enabled by default, even when they are aware of the abuse.[1]

Mozilla is nearly 100% funded by a deal with Google.

I try to forget these facts every time I read some public communication coming from Mozilla, but they just keep coming back.

1. Mozilla, or any of us (yeah, right), could rip out some of the features that advertisers abuse and create a more "advertising-proof" version of Firefox. Heck, we could create much smaller and faster Firefoxes. But no, there can be only one. Because reasons.

Take out that search bar and they would probably have to kiss the money from Google goodbye.

Getting rid of ads and tracking is not Mozilla's highest priority. Keeping online advertising alive is the highest priority because obvious reasons.

There is nothing in the contract we have with our ISP that says we must support online ads. That is the benefit of paying for something. There are actually terms and the possibility for enforcement.

Mozilla's ad-supported web has no terms. None that web users can enforce. Internet subscribers using "the web" have no power. Advertisers call the shots.


I think that's a little harsh. The same article describes using image caches as a means of tracking users across domains. Would you consider image caching one of the technologies developed solely for the purpose of selling ads that your theoretical perfect browser should rip out?

Browsers are essentially entire operating systems at this point. Ad companies hire engineers. It's inevitable that these engineers will find exploits. Your stance seems to completely ignore this fact. If it were so easy to create your perfect, user-centric, privacy-first browser, why haven't you made it yourself?


Uhm... Tor Browser? Already exists, based on Firefox, with features merged upstream?


Ironically, the Hush extension for Safari (which aims to limit cookie tracking, amongst other goals) blocks that page.

I mean this one, not the Chrome extension of the same name. https://oblador.github.io/hush/


I'm slowly weaning myself onto private browsing through a VPN and the NoScript extension.


Wait, does that mean the HSTS cache is per origin?

That seems like it would make TLS stripping attacks a lot easier.


Maybe. But a more clever approach might be to limit the size of the HSTS cache per second-level domain per origin. Or to randomly respect the cache. Or to simply make every request to both the TLS and non-TLS port but do so in parallel and discard the non-TLS response if the domain was in the HSTS cache.

I'm not saying any of those approaches is bulletproof, just that maybe they have a more complex strategy in mind to mitigate risk.


Those would be much worse strategies than even just not supporting HSTS at all.

> Or to randomly respect the cache

If the goal is to manipulate a single request to insert malicious JS that gets cached, you only need a single non-TLS request. If you're an on-path attacker, you can probably get the user to request things multiple times (e.g. randomly break and unbreak internet connectivity) until you get lucky with an unencrypted connection. If you're trying to make a supercookie you can just repeat and average out the random failures (random perturbation almost never prevents a side-channel leak; at most it makes it more expensive, as the toy sketch at the end of this comment illustrates).

>Or to simply make every request to both the TLS and non-TLS port but do so in parallel and discard the non-TLS response if the domain was in the HSTS cache.

Fails at confidentiality 100% of the time
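
To illustrate the averaging point above: even if the browser honoured an HSTS entry only with some probability p, the tracker just repeats the probe and thresholds the upgrade rate. A toy model of my own (assuming independent trials, not anyone's real tracking code):

    // probe() should return true if the probe request got upgraded to HTTPS.
    // With bit=1 the upgrade rate is ~p; with bit=0 it is ~0, so a handful of
    // trials separates the two cases almost perfectly.
    function estimateBit(probe, trials = 50, p = 0.5) {
      let upgraded = 0;
      for (let i = 0; i < trials; i++) {
        if (probe()) upgraded++;
      }
      return upgraded / trials > p / 2 ? 1 : 0;
    }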


Whenever a change to an ecosystem / business model comes along and some entrenched interest complains, I think:

"I have no doubt that someone will succeed under these new rules to come. You're just upset that it isn't you any more."


Does this make Firefox's Multi-Account Containers obsolete? I just finished setting the smart cookie-grouping extension up, but it seems like this serves a functionally similar purpose.


Multi-Account Containers remains important for managing "real" cookies used as intended (signed in account information, for instance). These "supercookies" are parts of the web experience abused for tracking and at least partly orthogonal to what Multi-Account Containers helps manage.


Cool, but like... when will they focus on battery life / performance? Every time I stream a call, my Macbook turns into a toaster and battery life goes into freefall.


I have an android phone, using Brave on a Samsung flagship from 2 years ago.

The test at amiunique.org tells me my User Agent string is unique.

So, can we now fix the User Agent strings, please?


There's nothing to fix. The user agent string is a hot mess, but is only useful for server-side profiling, not client-side profiling. Client side, there are a million better ways to profile you, with the user agent string barely adding anything if worked in.



I hope I'm wrong, but if this type of tracking is no longer possible, won't more irrelevant ads start showing up, making for an even more unpleasant experience?


I would prefer to have "irrelevant" ads. They're equally useless, no more and no less. But, they're less distracting.


Is there any movement in tech centered on security/privacy allowing web viewing without relying on cookies and local browser storage?


Sure, in fact I'm working on such a solution: https://pirsch.io/


Is there any reason to keep the Same Origin Policy after this change? I mean, shouldn't this change defeat CSRF attacks?


No, this won't defeat CSRF attacks.

All this does is create a separate cache for each site, so that they can't infer that a user has already been to another site. It makes no changes to POST/PUT/PATCH requests to an endpoint. They will still be going there.
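
Conceptually (a toy sketch, not Firefox's actual code), the change is just that the cache key grows an extra component:

    // Before: the cache was keyed by resource URL only, so embedding the same
    // image on site B could reveal (via load timing) that it was cached on site A.
    // After: keyed by (top-level site, resource URL), so site B always misses.
    const cache = new Map();

    const cacheKey = (topLevelSite, resourceUrl) => `${topLevelSite} ${resourceUrl}`;

    function lookup(topLevelSite, resourceUrl) {
      return cache.get(cacheKey(topLevelSite, resourceUrl)); // undefined across sites
    }

    function store(topLevelSite, resourceUrl, response) {
      cache.set(cacheKey(topLevelSite, resourceUrl), response);
    }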


Okay, I thought it would also keep the browser from sending cookies/authentication data it received via another origin.


Awesome! How will this work with AMP?

More seriously, what happens with images loaded from a CDN? Or with a site behind Cloudflare.


Does this mean I don't need to permanently browse with Incognito now?

Using uBlock, Privacy Badger, Decentraleyes currently.


Not if you plan on using Google search. They discourage such 'behaviour' by throwing captchas at you after a set amount of time.


Is this the same as the old privacy.firstparty.isolate setting in about:config? If not what's different?


There still appears to be some confusion but, from what I read, FPI is a superset of this partitioning stuff: https://github.com/arkenfox/user.js/issues/930


Good! I don't want cookies or anything that can track my state. Bring back stateless web please!!


I'm on Firefox right now. Does anyone Internet browse on Emacs?


Thank you Firefox team


This is all pointless without a VPN.

Ad company databases map 95% of IP addresses to individuals. You can buy the names, addresses, phone numbers, and email addresses of your website's visitors.


Aren't most people on dynamic IPs? I know my ISP charges an extra $5/month if you want a static IP


I have a dynamic IP and I was curious about how often it changed (I wanted to host some minor stuff out of my house).

So I rolled my own DDNS solution, and have it send me a text every time my IP changes. I have only seen one change in the last six months, and that was when the neighbourhood’s power was cut for two hours for maintenance. Rebooting or temporarily powering off my router doesn’t seem to be enough to force a change on its own; I believe it’s only when larger equipment upstream from me powercycles that my IP changes.

So (at least in my experience) the “dynamic-ness” of my home IP is relatively small.
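
For anyone curious, a minimal version of that kind of monitor is only a few lines. A sketch, assuming Node 18+ and the public api.ipify.org service, with the actual text/SMS step left as a stub:

    import { readFile, writeFile } from 'node:fs/promises';

    const STATE = '/tmp/last-ip.txt';

    async function checkIp() {
      // Ask a public echo service for the current external IP.
      const ip = (await (await fetch('https://api.ipify.org')).text()).trim();
      const last = (await readFile(STATE, 'utf8').catch(() => '')).trim();
      if (ip !== last) {
        await writeFile(STATE, ip);
        console.log(`IP changed: ${last || '(none)'} -> ${ip}`); // hook up SMS/email here
      }
    }

    checkIp();
    setInterval(checkIp, 5 * 60 * 1000); // poll every 5 minutes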


Hopefully it will speed up my Mozilla a bit.


tl;dr Your browser is getting slower because some repulsive companies just can't keep it to themselves. The level of sophistication is far from trivial, linked from the article:

https://webkit.org/blog/8146/protecting-against-hsts-abuse/

"An attacker seeking to track site visitors can take advantage of the user’s HSTS cache to store one bit of information on that user’s device. For example, “load this domain with HTTPS” could represent a 1, while no entry in the HSTS cache would represent a 0. By registering some large number of domains (e.g., 32 or more), and forcing resource loads from a controlled subset of those domains, they can create a large enough vector of bits to uniquely represent each site visitor."


Is this really important given that browser fingerprinting can almost always identify a web browser?


In a parallel reality:

"Firefox 85 Cracks Down on Fingerprinting"

"Is this really important given that supercookies can almost always persist between sessions and across domains?"

----

If you want to fix a problem, there are going to be points during that process where the problem is partially fixed. This only becomes an issue if we're headed in the wrong direction, or focusing on a sub-problem that would be better addressed in a different way, or if we have no plans to fix the other attack vectors.

But the steps we'll take to attack fingerprinting are very similar to the steps we'll take to attack supercookies, so there's no harm in grabbing the low-hanging fruit first.

Supercookies clearly have some value to advertisers and other bad actors or else they wouldn't be used. There's value in closing off that specific tracking method while we continue to try and figure out the harder problem of how to standardize headers, resource loading, etc...


You're right, of course. But let's not forget that fingerprinting exists and is going to be tough to eliminate.


People shouldn't think that this change on its own means they can't be tracked any more, but also this change is worth celebrating -- not all sites use fingerprinting (yet).

But yeah, we still have a ways to go. Small steps.


let's also not forget that firefox has spent the last few years aggressively investing in anti-fingerprinting tech


Absolutely it’s important - Just because one hole is still open doesn’t mean another shouldn’t be closed.

And FF and Safari should continue their work to close any fingerprinting opportunities. Fingerprinting is becoming less effective over time - for example, fingerprinting on iOS is pretty unsuccessful.


Yes, I agree.

Do you have more information about how iOS is blocking fingerprinting?


While it has some native anti-fingerprinting protection (including automatically deleting third-party cookies every week), the main deterrent is homogeneity: you can be sure that the browser/device is Safari on an iPhone 12 Pro Max... and that's it. On other devices, by contrast, a script can get the GPU in the system (WebGL and Canvas), the resolution of the screen, the list of fonts installed by the user (indirectly, by testing them), the list of webcams and sound cards on the system (WebRTC), how many (logical) CPU cores there are (WASM), whether the device has a battery (Battery API), and a laundry list of other API abuses, which together make it possible to individually identify desktop users and (to a certain extent) Android users.
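
For a feel of what that looks like in practice, here's a sketch of the kind of signals such a script reads (all standard web APIs; what matters is how many distinct values each can take on a given platform):

    const signals = {
      cores: navigator.hardwareConcurrency,                       // logical CPU cores
      screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
      timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
      languages: navigator.languages.join(','),
      gpu: (() => {                                               // WebGL renderer string
        const gl = document.createElement('canvas').getContext('webgl');
        const ext = gl && gl.getExtension('WEBGL_debug_renderer_info');
        return ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : 'n/a';
      })(),
    };
    // Hashing the combined values gives an identifier; on iOS these values collapse
    // into a handful of large buckets, which is the homogeneity argument above.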


I found this:

https://9to5mac.com/2020/09/04/ad-industry-tracking/

"my iPhone 11 Pro was also unique among the more than 2.5 million devices they have tested."

Time zone is one possible fingerprint data point.


> Time zone is one possible fingerprint data point.

Totally forgot that. Oops.

Now for the meat of your comment: ...and how many have tested their protections so that their testing site recognizes that your device is not unique?

A very good counterclaim was posted in the comments:

I strongly disagree with your findings, Ben. Namely, you list fingerprinting techniques available to browsers, and fail to mention how Safari (and Firefox to some extent) make those methods less precise. Instead, you say

Note that this isn’t a comprehensive list, it’s just examples. When a website analyses all of the data available to it, things get very specific, very fast.

So let me point out where you were wrong about Safari in particular:

• Fonts installed. Safari reports a very limited subset of fonts, which does not vary. It is the same for every Safari user.

• Plugins installed. Unsurprisingly, Safari lists just one: PDF reader. Native plugins are not reported.

• Codecs supported for video. The uniqueness checking site reported just H.264 and FLAC. Audio formats are not reported at all. There's no mention of H.265 and VP9, which work in my Safari beta version, and no mention of the whole plethora of audio formats which are supported.

• Screen resolution is not the real screen resolution. I'm on 27'' 5K iMac and the screen is reported as 2048 x 1152.

• Media devices attached reported as "audioinput" and "videoinput". It has nothing to do with the actual available media devices.

And incorrect reporting goes on.

As you can see, fingerprinting through the browser leaves Safari users very poorly segregated. As long as you are running the latest OS with the latest version of Safari, you are part of a very broad chunk. You can't be identified through browser fingerprinting alone.

This means that the only unique data you can get are: a) Language settings. There is no way to work around this (unless you consistently lie that you solely use English). b) Time zone. There is no way to work around this (unless you consistently lie that you solely use UTC).

These things can be predicted from the IP address anyway, so it is not perceptibly meaningful. In other words, advertisers can literally give up on detecting when Safari is the browser and rely instead on IP addresses (which can be tied to a family or, in some IPv6 cases, a device).


I've got a question; if it is ok to lie in these reports, why do they even exist? I thought these reports were there as a way to introduce client capabilities so that the server can serve the right content.

Disclaimer: This is a genuine question. I'm a hardware guy and I don't know how web works nowadays.


You're correct as to why they exist, but then it turns out that this is a privacy leak. Software is hard.


Very true. Seriously, if everyone were honest we would be in a much better state now :)


Thanks for the response. It looks like just because the fingerprint is unique doesn't mean that it's accurate or stable.


I agree. Visit any one of the many sites that show you what your browser knows about you: it doesn’t matter if you're using Firefox, the site reveals a tremendous amount of information in your fingerprint. Firefox doesn’t stop any of that, despite a setting that supposedly protects you from fingerprinting.


about:config → privacy.resistFingerprinting


Maybe they should crack down on how awful the UI has become.


make p3p policy great again!


Firewhat?


I wonder if Google will follow suit.


Google implemented this first.


fantastic work, thank you.


These advertising networks are destroying web performance. Most of these "supercookies" are optimizations to improve performance. By abusing them, advertisers have turned what should be a great performance tool into a liability. I know FF suggests this won't significantly affect most websites' performance, but web advertising and trackers are already responsible for a huge chunk of performance issues.

Of course we'll have the inevitable guy pop in here and talk up how awesome web tracking is because it helps sites monetize better, but that's all bullshit. At this point, all the advertising profits are sucked out of the web by Facebook and Google. The rest of the industry, including publishers are just struggling to get by while two trillion dollar behemoths throw them scraps.


Modern ad networks are essentially the “fax machine flyers” of old: someone you don’t know using your resources and your time, denying you use of your own resources temporarily, to send you something you don’t want. Except now it’s like every “normal” fax page includes 15,000 flyers.


Some pages don't let you "Reject All" cookies, you have to uncheck them one by one, and there's literally hundreds of ad networks listed.

It's spooky, I tell ya!


Use your browser settings to block third-party cookies altogether. And, better yet, install uBlock Origin and never see an ad again.


Doesn’t Google grant itself first-party status by redirecting you through an advertisement domain? µBlock definitely is the king of ad blocking extensions — only the fork AdNauseam (https://adnauseam.io/) can compete, and that’s by both blocking ads and fighting back with obfuscating click simulation.


Hmm. I think it would be better if the extension clicked randomly rather than clicked on all ads. That would cause the numbers to be much harder to interpret, and ad agencies or departments would have a much harder time measuring their efficacy or justifying their existence.


Me too, and it provides a slider for the percentage to click that I kept below 100 when I used it (now I use Palemoon, which doesn’t support WebExtensions, and I use /etc/hosts).


You can change the click frequency in the settings. I guess it would be better to make that setting a part of the splash page that shows up on installation, though, as otherwise many will miss it.


> ad agencies or departments would have a much harder time measuring their efficacy or justifying their existence.

50% of adverts are a waste of money; the problem for people wanting to advertise is that nobody knows which 50%.


this. this is the way i solve this. They can use all the cookies they want, ublock tends to just eliminate all of it.

Overall FF has been incredibly user friendly making all sort of plugins that focus on privacy possible, while Chrome has been as hostile to it as possible.


> never see an ad again

I wish that were true. Although uBlock Origin does a good job, some ads definitely still make it through. There are also some sites that detect ad blockers and refuse to let you in unless you disable it. There are workarounds for some of these, but it's still a bit of a mess.


If a website doesn't let me in because I use an ad blocker, I respect their decision and I leave the site and find what I need elsewhere.

I have yet to come across a site that offered something so unique or compelling that I decided to turn off my ad blocker to use it.


> There are also some sites that detect ad blockers and refuse to let you in unless you disable it.

That, and when there's an email subscription popup, is when the one-click JS toggle extension comes out. Can't detect anything if it can't run any code in your browser.


It can "detect" if JS is disabled (by loading content via JS) so this doesn't always work.


I agree, although to add, uBlock Origin has an 'annoyances' list that does a pretty good job of stopping detectors.


Thanks for the reminder that I hadn't enabled this in my current browser :)


Unless you get the pop-up from the site that says, "We see you're using an ad blocker. You need to turn it off in order to access our site."

Along with some marketing drivel about how it's important advertisers get their ad revenue.


This is when you revoke that website's privilege to run arbitrary Turing-complete code in your browser because it didn't use it wisely.


Please don't. Unless you are willing to pay for the services you are using for free now, ads are what keep them "free".

You can object to being targeted based on your browsing habits, but don't stop ads altogether.


Eh, I get where you're coming from, but no. The ad industry is insidious and has exploited every means possible to hijack the user's attention: pop-ups, flashing banners, auto-playing videos with sound, inline ads that reflow what you're reading after they take way too long to load, extensions that insert ads, paying ISPs to insert ads, talking to Alexa through your TV...

There is no level these people will not stoop to, and we're sick of their shit. They brought this on themselves.


It's my device and it's my choice what it's allowed to load and display to me. It's not my responsibility to make sure someone who provides their service for free earns money from shitting into my brain. Implied contracts aren't a thing for me. If you want to make sure you get paid for your service, put up a paywall.


Too late. They had their chance and blew it.


Ad networks had their chance, it's done now.


I seem to always have this handy snippet in my dev tools history:

    // Uncheck every checkbox (e.g. in consent dialogs); setting the property too catches boxes checked after page load
    document.querySelectorAll('input[type=checkbox]').forEach(el => { el.checked = false; el.removeAttribute('checked'); })


https://github.com/oblador/hush if you use safari :) Basically the regulations say that if a user doesn't respond to this popup, by default all the cookies are rejected except the ones the site needs.

This app hides the popup :)


Sites can set as many cookies as they want. I have installed Temporary Containers (sadly a Firefox-only feature); 15 minutes after I close the last tab in a group, all those cookies are automatically deleted.

Each tab group then has its own cookie container, so I can have multiple groups open and they don't share anything - I can log in to different Google (or any other service) accounts in different containers and it works like I want it to.

For the sites that I want to use logged in, I either create a special container for that site only, or I just use a password manager to log me in each time I need to visit it.

The added privacy is great, the peace of mind in just clicking I agree is great.


How do you circumvent browser fingerprinting? If every container has the same user agent, canvas, screen resolution, JS benchmark test results, etc., then no matter what, you are uniquely identified. Bingo!

I really feel that today, having different devices with different browsers, connected to different providers, is the only working solution.


I wonder how many people reading this comment are thinking, "what's a fax machine?" :o)

I like the analogy, but I wonder how effective it is on anyone under the age of what, 35?


I'm old enough to remember (like New Coke) ZapMail by FedEx where you would send documents by FedEx and FedEx would Fax it on their equipment to a location near the recipient for physical delivery. Obligatory Wikipedia article: https://en.wikipedia.org/wiki/Zapmail. Hey, most businesses didn't have one of those newfangled FAX machines.


Newfangled? It will enter its third century in a decade or so.


So just fangled


I think it's sufficiently geriatric to be considered oldfangled.


You might be surprised how many of us under-35s still have to use Fax machines on a regular basis ;)


Especially anyone who works in law, government, banking, or healthcare.


I love that some of the most-sensitive information users are the ones hanging on to a completely-unsecured transmission method. Sure, tell me again about all those HIPAA and SOX requirements when we still have fax machines.


You have to keep in mind the time when these requirements to "use fax" for security came about.

There was a single telephone company (or it was shortly after there were plural "baby-bells") and the telephone network was a completely private, isolated, network that only the phone company (or baby-bells) even had access to. It was also a network that was heavily regulated such that the possibility of a random attacker from half a world away being able to tap into a phone call as it happened simply did not exist.

In that environment, placing a phone call was considered "secure" (or at least as secure as network isolation and regulation could cause it to become [I'm ignoring NSA style 'state secret' taps, those have likely always been available to NSA style agencies]). So it would have been seen, at that time, as reasonable to use fax machines for document exchange, because the "phone network" was considered to be secure against having a man-in-the-middle tapping off one's communications.

Wind the clock forward thirty years, and have the once isolated and highly regulated telephone network more or less become just another packet protocol on the general Internet, and the choice of using "fax" for secure document exchange sounds ludicrous.

The issue is that the regulations those environments operate under have not been updated in the ensuing thirty years to account for the fact that "phone network" is no longer the once isolated, mostly secure, network it once was. And if the regulations don't get updated, no lowly clerk at the front lines is going to lose their job by _not_ using the comm. system called for by the regulations.


or Microsoft


Counts as government.


Especially in Japan.


> You might be surprised

This is such a meaningless statement.


This below 35 yr old discovered a fax machine. You won't believe what happened next!

Better?


I had to use a Fax machine in 2018. In the United States. As the only acceptable way to submit certain documents.

I should also point out to non-Unitedstatians that checks (those physical pieces of paper worth as much money as you write and sign on them) are still in use in the USA.


I still use checks because in the US there are certain things you can't use a credit card for, e.g. loan payments.

I pay contractors with checks because almost none accept credit cards and cash gets cumbersome once you start getting into 4 and 5 digits.

My local utilities all charge a "convenience fee" of a few dollars when paying online or with a credit card. Sending a check in the mail costs me only $0.50. (Even though it costs them some employee's wages to handle my envelope and cash the check. Go figure.)

Checks are also convenient for transferring small amounts of money to friends and family. Yes, there is Paypal and the like and some of them don't even charge fees but I trust my bank way more than I trust a random company with direct access to my bank account. (Paypal in particular have proven over and over again to be untrustworthy in this regard, which is why not only do I have two Paypal accounts--one for buying and one for selling--but I also have a special "firewall" account between PayPal and my main checking account. This is so that the most they can grab is a couple hundred dollars on average, rather than some arbitrary fraction of my life's savings.)

Checks are sometimes the easiest (or only) way to move large amounts of money between my own accounts. There was a time where most online bank accounts would let you make ACH ("electronic checks") transfers to any other account, but they seem to be moving away from this, I presume due to its high use in fraud.


what on earth is banking doing over there in the US? I wouldn't know how to write a cheque these days if I wanted to, and the only cheque I've seen in the last 10 years or so is from my (now deceased) grandmother in-law sending birthday money to my wife.

I'm guessing this is why several US payment companies and start-ups just don't make any sense to me: "make payments easier!"

But it's hard for me to understand how to make it easier than just typing in someone's phone number or email and sending them money, or purchasing via tap-and-go with your card/phone. Don't you at least have electronic transfers, if not those other newfangled technologies? Are you (seriously) suggesting you can't transfer money between your accounts?


Banking in the US has first mover disadvantage.

Because of how and when it got computerized, it's hard to move it forward again. There's no desire for sweeping changes, everything has to move slowly now.

There are several personal transfer services (PayPal is ancient and fits the mold), but none have a lot of penetration. I think Zelle? is deployed through bank integration, and may end up with a lot of users as a result; possibly critical mass.

There was a lot of backlash on rf payments the first go round, a few issuers gave me cards with it, but then they removed it. Then they started issuing cards with chips, and now most of them are putting rf payments back in. A lot of payment terminals have the hardware for it, but a lot of them also have signs that say don't tap to pay.

I can easily do electronic (ACH) transfers between my accounts, as long as I've gone through setup, which takes days for test deposits to show up. But to transfer to a friend or a relative is tricky.


The first digital computers were used by banks within years of each other - 1955 for BoA in the USA, 1958 for BNP in France and 1959 for Barclays in the UK. And those machines merely took over from existing calculating systems that had been in place for a good couple of decades.

US banks suck for a lot of reasons but part of it is that culturally and regulatorily the entire financial/banking/commercial environment in the US is very conservative. And there's not much in the way of pressure to make changes either - whether internally from competition and regulation or externally from the need to interact with other countries. Like broadband, consumer banking is basically an oligopoly that will quite happily plod along providing the same service as long as it can.


Can't you use a wire transfer or an ACH transfer? All those use cases are easily solved with electronic transactions in most of Europe, Asia, and even in Latin American countries like Brazil. They are usually inexpensive or free and instantaneous.


ACH in the US is not simple to use. Companies that accept ACH payments are using a payment processor that comes with a fee (usually less than credit card fees); contractors aren't going to set that up. Consumer-to-consumer transfers built on ACH have increased in the last couple of years, but with low limits, inappropriate for contractors, and generally with terms of service prohibiting business use. It's easy to move money between my accounts with tools based on ACH, though. There's nowhere at my bank where I can say "send $x to this routing number and account number"; it takes a bunch of setup work.

Wire transfers are expensive here; my credit union, which doesn't generally have high fees, charges $29 to send a wire (they don't charge for incoming wires, but some banks do). I've had some brokerages with free wires, but usually that's tied to a balance requirement or in connection with a company-sponsored account (for stock-based compensation or retirement accounts).


In the US, wire transfers can incur fees for both sender and recipient. ACH is more often used by medium to large businesses transferring money from or to consumers, but the ergonomics are pretty bad for one-off person to person transfers, to the point that if you hire a plumber who owns their own business, they'll probably accept check, and sometimes accept credit cards.

The US does have some electronic networks for instant, no-cost p2p payments. https://www.zellepay.com/ has a large number of participating major banks with some major exceptions. A lot of people use https://venmo.com/ or https://cash.app/ which are not directly integrated with banks but then offer electronic transfer of funds to bank accounts.


Australia too. Electronic transfers here are free and instant. When I used to rent I just set up a recurring payment through my bank’s website (free and easy, with any bank to any bank). Now my mortgage gets taken out each month automatically via a direct deposit authorisation. (ACH equivalent).


Wire transfers from my bank in the US require me to call up and make a request.

Writing a check is the fastest way for me to transfer between two accounts. :(


New Zealand is phasing out cheques.

Many shops don’t accept them, some banks have already stopped using them altogether, and the rest of the major banks are phasing them out this year.

A cheque is a rare thing to see (I haven’t handled one for a decade or so?)


Checks still in use in Canada as well. I had a person today tell me they had 3 checks stolen and cashed and my response was “people still use checks?”.


The demise of checks is greatly exaggerated. I've written about 70 or so checks in the last 5 years. Mostly: Property taxes, home improvement contractors, dues for various clubs and social groups, kids activities, and some mail-in retailers who simply don't take credit card.

That's leaving out the "automatic bill pay" function of my bank's web site, which, for most payees, at the end of the day results in physical paper checks being printed and sent in envelopes.


> The demise of checks is greatly exaggerated.

That varies a lot by jurisdiction. I'm 50 and I haven't written a single check in my entire life. (Sweden.)


Yeah, I've only cashed in checks from the US and UK. Each time I need to find out how to do it! I think three checks in 40 years ain't bad.


I have only ever paid once with a check here in Denmark. I won't ever do it again, because no bank that I know of will issue a paper check.


In Canada it's spelled cheque, I have no idea why. </pedantry>


Still in use here, but much less than in the US. Interac bank transfers have cut down a lot of that usage.


I maintain a legacy service at work (I originally wrote it back in 2014) that is responsible for sending eFaxes from our various other services and platforms. It's one of the most internally trafficked services we have. We're in the healthcare space. Almost every document created on our different platforms results in a fax being sent.


For the SS-4 form, to get the Employer Identification Number, you have to either make a phone call (fairly long, half an hour in my case), send a fax (and get the EIN in 4 days), or apply by mail and... wait 4-5 weeks! [0]

[0] https://www.irs.gov/instructions/iss4


A small business owner I used to work for got sued for fax blasting people when the marketing company he hired was sending out some 2K faxes per day to unsuspecting business owners.

I still laugh about how he got several cease and desist letters and still continued sending the same businesses stuff.

Ahhhhhhh yeah, the good old days.


I’m in the age group you mention and although I’ve only used a fax twice, I can totally understand the analogy.


Easy enough to just use "text messages" since it was not very long ago that you had to pay to receive them but had no ability to block them without disabling them entirely.

At least for those of us that were late adopters of text messages.


This also depends on where you're from - I had a cell phone for the past ~20years and only learned that you pay for receiving texts in the US when I first visited, ~9 years ago.


My bank still accepts fax documents. All I would have to do is find a fax machine ...


> My bank still accepts fax documents. All I would have to do is find a fax machine ...

On linux you can use [efax](https://linux.die.net/man/1/efax) and a modem and... ooops, good luck finding a modem.

I did this for real ~10 years ago when a stupid company didn't accept a scanned PDF by email and required a fax of the actual document "because security". The difference is that I had a modem in an old laptop at that time, so I just sent them the same scanned PDF.

Now I'm wondering if there is a provision for sending faxes somewhere in the GSM/3G/4G rabbit hole of standards.


I’m not sure if it required anything from the network, but my Siemens C35 could send faxes


GSM yes, but once phones went digital that capability was lost.


GSM phones are digital and include a special FAX mode.


One of my previous employers had a Kofax server with 6 ISDN lines for faxing. The D in ISDN stands for Digital.


Try this online fax service: https://www.faxrocket.com/#!/start

I have bookmarked them from long ago.


Unsurprisingly Equifax requires you communicate with them through snail mail, fax or a telephone call.

Every other credit agency had no problem with my SSN + address then Equifax throws a flag, locks my account and says I have to validate my identity by faxing them identity documents.

Fat chance, idiots.


There are online services that let you upload a PDF, and they'll fax it for you.


Even in the 90s a lot of folks didn't send a "physical" fax, you could print it thru your modem. Or something similar, memory fuzzy, only did it once I think.


Or going to a shop like Kinkos or a local print/copy shop. They offered sending/receiving faxes or FaaS before _aaS was a term of "endearment".


In the Apple store you can find apps that send fax to a physical location. That's what I used the last time I had to send one.


There are quite a few multifunction printers with fax.


That assumes a land line to plug the fax machine into.


We have two.


Pharmacy, nursing, and medical students will find out soon enough.


I'm 21 and I know what a fax is.


I started working at a place that would get stacks of flyers across the fax machine every day; they would just toss them in the recycling bin, all the while wasting tons of paper and ink. I started calling all the removal numbers and got it down to zero. They thought the fax machine was broken, haha.


This is a terrible analogy. Nobody forced you to go to the website that voluntarily decided to include the trackers.


I can't tell if this argument is meant seriously but it is incredibly specious. If every single website operates in this fashion and modern life is nearly impossible without them, then consumers are presented with no option and it amounts to coercion.


It’s inconvenient for me to have to pay for things, that doesn’t mean stores are obligated to let me shop for free.


If that website doesn't ask me if I want to allow those trackers then it's forcing them upon me without my consent. How am I supposed to know if a website has a tracker before I visit it?


They also don’t ask you if you want whatever specific content they are rendering that day to be rendered or not. Caveat visitor.


It is still not a nice thing to do. Can we at least agree on that?


If people didn’t want ads it wouldn’t be a multi billion business. Also your analogy is wrong. Your browser won’t execute code unless it requests it. Fax machine spam you don’t have do anything except have it connected to a live telephone connection.

It’s more like complaining that your sole of your shoes is being worn out more because grocery stores put the milk in the back forcing you to walk past items you don’t intend to buy. You can always go to a different store just like you no one is forcing you to browse websites that are ad supported.


People don't want ads. How do you infer that people want ads from the fact that the ad industry is profitable?

It's profitable because a few people want to influence and spy on a lot of people.

Most people don't want ads; they just tolerate them to get actual services. Most of these people don't even know how much tracking is involved and how nefarious this industry really is.


Why do they work? Why can I go start a business and scale it to millions of paying customers by using ads?


Because there are businesses and politicians willing to pay through the nose to get their message in front of those that they want to influence, and you are then the middleman that gets our irritation and ad blockers and pushback for contributing to the proliferation of the most invasive, unscrupulous segment of our entire society.


Because you’re infecting people with mind-viruses to force them to buy your crap. Advertising is about exploiting human psychology, taking advantage of people with weak impulse control, and outright lying.


Often something is a multi billion dollar industry that people don’t want!

Perhaps you’re invested in the ad industry. No one else wants ads buddy.


Not in ads but I know they are quite effective. Most startups can attribute their growth to the effectiveness of digital advertising. Robinhood was driving app installs for $10 each while E*TRADE and Ameritrade were paying $1000 per customer.

Most of that VC cash startups raise is spent on marketing. I don't understand why people have such a negative perception of ads, especially on a VC-run news site. All the YC companies drop tens of millions on digital advertising.


Thanks for your considered response to my snarky comment. Personally I don't mind plain ads, which are simply a company advertising their product/service on a billboard etc. What I find unbelievable in the digital era is the extent of the intrusion now in finding out everything about people to market to them better (and I'm not even sure that having more data on someone has been proven to produce better results). It's really the data-broking side of it - if that went away I'd have no issues.


We're trying to build an ad network that doesn't track users: https://www.ethicalads.io/

We talked a little bit about how these ads still work, even without tracking you. You might be losing 10-15% of revenue, but if you never had that revenue to start with, you don't miss it: https://www.ethicalads.io/blog/2018/04/ethical-advertising-w...

I think the real secret is just to not become dependent on the additional revenue. All businesses forgo additional revenue based on ethics and regulation, and I don't understand why that's such an odd thing to do with advertising.


I appreciate that you're trying what you're trying, but I wanted to address this:

> All businesses forgo additional revenue based on ethics and regulation, and I don't understand why that's such a odd thing to do with advertising.

The great bulk of advertising is built upon a conflict of interest and is essentially manipulative. Consider, for example, an article. Both the writer and the reader want the reader's maximum attention on the article for as long as the reader cares to give it. The goal of advertising is to distract from that in hopes of extracting money from the reader. Generally, ads are constructed without much regard to whether the reader was intending to buy or would really benefit from the product. The goal is to make a sale. (If you doubt me, look at how many people who create or show ads, say, test a product before putting the ad in front of people. Or just look at tobacco advertising, a product that has killed hundreds of millions.)

So I think there's an inherent lack of ethics to ads as an industry. It could be that you'll find enough people who are worried about privacy but not about the other stuff to build a business. But I wouldn't bet on it. It's no accident that this security hole is being closed not because of random miscreants but because of industrial-scale exploitation.


These are some of the only ads I see online these days (on Read the Docs, mostly). I don't use ad blockers, but I do use tracker blockers, and those block pretty much all ads, for obvious reasons. Not these ones, though. And that's how it's supposed to go.


Which ones? I think I would like to try your setup, since it sounds like a good compromise between having my data harvested and being kicked off of sites for blocking their ads.


Can you elaborate on your setup?


I'm not OP, but I have my browser setup to block trackers only, nothing that's billed as an ad-blocker.

I use Firefox with Strict Enhanced Tracking Protection [0] and Privacy Badger [1] as an extra layer of protection. Some sites, mostly news orgs, complain that I'm blocking ads, but inevitably these are the sites Privacy Badger reports 20+ trackers blocked. I'm happy to see ads online, I'm just not willing to sacrifice my privacy for them.

[0] https://support.mozilla.org/en-US/kb/enhanced-tracking-prote... [1] https://privacybadger.org/


Why not just block ads, too? Do you really think advertising is ethical at any level? Because I do not. If I want to buy something, I seek it out. Anything else is like junk snail mail: a waste of my time and your money.


The short answer: I am not anti-advertising, so I don't block ads that respect me.

I don't love advertising in many of its forms, but taken from the viewpoint of those who make money from ads (i.e. content creators), it is one of the best ways out there for them to make a living. Platforms like Patreon are great for some folks, but not everyone can make a living off of sponsorship from their viewers. But, I am not willing to sacrifice my own privacy to allow someone else to make money, especially given that we have tonnes of examples of non-privacy-invading advertising that works.

I listen to 8-10 hours of podcasts a week and I generally find the ads on them, usually where the host does an ad read and includes a discount code, to be far more useful and relevant to me than the hyper-targeted ads backed by 20 tracking scripts I see on news sites. Another example, many of the indie tech news sites I read (e.g. Daring Fireball or Six Colors) will have a weekly sponsor that will have an advertising post or two interspersed with their regular content. I'm happy to take 2-3 minutes out of a 30 minute podcast episode to listen to a couple ad reads or see a brief write-up of a sponsor's product as I'm scrolling through the week's tech news. What I'm not happy to do is have my web browser load a dozen tracking scripts in the background when I open a news article and have flashing pictures deliberately trying to distract me from what I'm reading.


Presumably if your adverts don't do tracking, they don't need to slow page loads down the way current advertising does either which should be a big plus.

Fundamentally serving an advert should be a light process adding only a tiny amount of overhead to the site.


Yea, we are planning to do a blog post on it, but the total overhead is in the 10's of KB. Just a single JS file, and an image. All open source: https://github.com/readthedocs/ethical-ad-client


It would be nicer if "no tracking" meant no data sent to EthicalAds unless the user engages with an ad, because we know what happens when we trust advertising companies. So it's a step in the right direction, but I would still block impressions when they're hosted on another domain. Also, ads should not distort users' perception in order to sell, but that's another debate.


We support a backend API, but it's much more complicated to implement, and the client gets more complex as well. We started out with a vision of all backend integrations, but it was impossible to sell to most publishers.


To me, there is no such thing as an “ethical ad”. You are trying to steal my attention, my time. You don’t get to do that. My time on Earth is limited and you don’t get a millisecond of it if I can help it.

If I want to buy something, I seek it out. Anything else is a waste of my time and a waste of the advertiser's money.

I long ago decided to throw out every piece of physical ad mail I receive without even glancing at it more than long enough to recognize it as an advertisement.

I don’t know why you expect me to treat your digital ads any differently?

You can call my perspective extremist, but is it any more extreme than the methods used by advertising networks to steal my attention?


I'm pretty frustrated by advertising too, and some of it is particularly egregious, but at the moment, there is really no other way for many publishers to get paid.

I'm curious, how many services do you subscribe to and pay for content? I pay for a few ad free resources, but certainly a lot of the sites I enjoy don't get my $$.


Just as an example: I read some dev newsletters, they include a block of paid-for job postings. Highly relevant with the content. Together with the occasional sponsored post link (also still relevant content) this appears to fund them just fine.

This isn't "ad free" but it's close enough in my opinion. There's a huge gulf between contextually relevant content curated by the creators and the kind of shite that ad networks push.


Great morally charged argument, but I am not sure how you expect content creators to monetize.


Well, to be blunt, in my humble opinion more than 90% of "content creators" trying to monetize their content with ads currently produce content of such low quality that the world would be a better place without it. So if my ad blocker helps any of those to change their career, I am happy.


Even with ad monetization, if their content is of no use they will disappear, since they survive on that ad revenue. Content creation on the web is hard enough already without removing the major monetization avenue and offering no suitable alternative.


Charge money, either directly for the content, or indirectly in the form of patronage or a service or other business you run.


Yep, these exist as alternatives as of now, but require an explicit payment step which might cause more friction than ads.

I think a service that allows for website-usage-based payments, a Spotify/Apple News for websites, would be interesting. I can see a decentralized crypto application evolving around this use case.


> Yep, these exist as alternatives as of now, but require an explicit payment step which might cause more friction than ads.

Payment from one user produces more revenue than showing ads to hundreds of users. That should be multiplied in to any analysis of friction.

> I think a service that allows for website usage based payments, a Spotify/Apple News for websites would be interesting.

There have been many attempts to do that, none of which have succeeded. One major problem: they tend to track all your web activity, and the kinds of people interested in services like this are very much the kinds of people who don't want to be tracked. Another problem: it's easier to convince people to pay for a specific source of content than to amorphously pay for "various content".


That's not really my problem, is it? It is the content creator's problem.


This is great. How are the ads paid for though - is it billed per click or per impression, or is it billed per an approximate amount of time the ad will be displayed for?

The problem with charging per click or impression is that you're vulnerable to fraud which means you either lose money/trust or you have to do invasive tracking to detect & prevent fraud (which you'll be unlikely to achieve as well as the big players - Google & Facebook - do). Charging per amount of time (regardless of actual impressions or clicks) doesn't have that problem.


We are doing CPC & CPM pricing. I don't believe anyone has asked us for "time seen" pricing. I don't even really know how that would work, and why it wouldn't be open to fraud in a similar fashion.

Do you have a good example of how this is priced, and how it would work in practice?


By time seen I don't literally mean time displayed on screen but more like TV/radio ads, as in: this ad will be part of our rotation of X ads for an entire month across X publishers. I think The Deck used to do this.

Determining the price will be a bit tricky (and I would expect that you'd have to lowball yourself until your platform builds credibility in terms of good ROI) but in the long run it should mean your advertisers pay a flat price to be included per week/month regardless of actual impressions or clicks (thus there's no fraud potential as only the raw profit from the ads will matter - the only "fraud" potential would be to literally buy the advertised product en masse).


Gotcha, that definitely makes sense. We are looking at doing that for some of our larger sites, similar to Daring Fireball: https://daringfireball.net/feeds/sponsors/ -- which I believe is based off the old Deck model :)

Thanks for following up.


I totally read it as ethical lads.


And I read your comment as "ethical ads" and was wondering what I got wrong from the GP comment, ahah.


I think the big problem in adtech isn't just targeting, it's also fighting ad fraud. Do you have a good plan for when you become big enough to become a target for ad fraud?


There is no such thing as an ethical ad.

Advertising is a cynical deployment of our knowledge of crowd wisdom, media manipulation, and statistics to make people part with their money for things they wouldn't think they needed. Our economy can't handle this kind of reckless consumerism anymore.

Worse yet, we don't need advertising to bolster our media. Unfortunately, the media execs don't realize this yet.

All your metrics are fuzzy, your standards ridiculous. We have far better practices we can deploy than the ones the advertisers use.

Please, stop advertising to us. If that's all you plan to do with this new company, can you please kindly go away?


I am really confused by this position.

How do you propose that companies should promote their products and services, if not through advertising?

Are you somehow suggesting that they should just sit there and hope that people who have never heard of their product independently decide they happen to want or need that product and seek it out, unprompted?

You say "people part with their money for things they wouldn't think they needed, Our economy can't handle this kind of reckless consumerism anymore": Surely you don't think you speak for everyone?

You certainly don't speak for me.

I am not some blind sheep who is suckered into buying things I don't need. I am a grown adult who can make informed decisions with my money, including sometimes buying frivolous or unnecessary things.

I hate these arguments that assume everyone is stupid except for the person making the argument. It feels like there's some weird savior complex at work.

People have free will and are allowed to spend their money as they wish, and I think YOU are the cynical one if you think otherwise..


> Are you somehow suggesting that they should just sit there and hope that people who have never heard of their product independently decide they happen to want or need that product and seek it out, unprompted?

Yeah, it's even got a name: shopping.


> Yeah, it's even got a name: shopping.

So, for direct to consumer companies who only ship online, SEO?

Here's the thing: ads can be useful.

A while back I got a highly targeted ad for high-protein, sugar-free cereal. That's awesome! I am 100% the target audience for that product, and until I saw that ad I had no clue it existed! To find a product like that I'd have to search for it, but I would never search for an entire new category of product that I didn't know about.

Same thing for the fitness app I am using (BodBot, it is amazing!). I am quite literally healthier right now because of a targeted advertisement.

Was I aware of fitness apps before then? Sure. But the ad for BodBot was informative about what features differentiated it from the literally hundreds, if not thousands, of other competing apps.

Do most ads suck? Sure. Should ads be highly invasive? Nope. But interest tracking and basic targeting actually help me find products and services that I want to buy!

Facebook in particular, for all the things wrong with it (long list!) has some amazingly relevant ads that inform me of products that I never knew about.


So let's designate .biz as the place where advertisements live, and turn it into the online yellow pages (plus all the other scum to be expected) and ban anything resembling advertising from every other TLD.

Those who want to shop know where to go. Those who don't, know where to avoid.


hey man, that's great and i'm happy you're healthier because of advertising. My experience has been the opposite (yes, advertising making me and my family UNhealthier -- mentally and emotionally). I don't want targeted ads, but I can understand that you do.

Perhaps there is a way we can both enjoy the internet in our preferred ways. Perhaps not, I don't know.


And how do you know about the existence of a product to go shop for in the first place, if not through advertising and promotion?

Or do you have infinite time to go browse every single store in your city on the odd chance that you'll see something you want?


There are these wonderful things now called search engines. And they existed before online advertising was tied to search, so before you say search engines would not exist without advertising attached to search queries, think again.


How do you know to search for something if you haven't heard about it before... via some kind of... promotion?

I get the point you're making, but I hope you realize you're just backpedaling from your original "no advertisements ever!" statement.

I believe in giving people better control over how they receive ads (I personally run an ad blocker in my browsers and a Pi-hole on my network), but your position that there should be no advertising at all, and that all ads are unethical, is just silly, and you're proving that point yourself here.


It must be hard to be this naive ^^


I assume you meant to reply to sbarre


You’re proving my point.


Look at how much of a web you have to spin for yourself, just to conclude that it is indeed fine to have others tell you what you need to buy.

The false dichotomy you pose is ridiculous. People seek out information on what to buy all the time. But when I am listening to music, watching television or film, or reading a fucking news article, that is not the time I want to be given that information. It is unsolicited and I don't care about it.


Hey, I'm not sure why you're answering me from 2 separate accounts (I know it's you because you accidentally(?) replied to another post from your other account with this one, speaking in the first person about the other message).

People seek out information on what to buy because at some point they found out about it via (perhaps indirectly) the provider's promotional efforts (i.e. advertising).

We can certainly talk about the appropriateness of when an where to advertise, but that's a very different topic than your ALL ADS ARE UNETHICAL screeds that you've posted in like a dozen threads on this topic from 2 different accounts..

Never mind continuing to believe that somehow those of us who engage with advertising are lying to ourselves or somehow less in control of things than you?

Now who's spinning a web..


I like what that guy is doing, but I still have to agree with you. To me ads are just money-focused propaganda, abusing human psychology to make people spend money on crap they don't need.


If you'd like to suggest another way to make OSS sustainable, I'd be all ears.

A bit more color here: https://www.ericholscher.com/blog/2016/aug/31/funding-oss-ma...


Rather than open source, let us return to Free Software. The point of our labor is not to ensure that we are paid; it is to tear down the systems which create inequality and scarcity in the first place.


You've obviously thought this out extensively and decided to advertise. Who am I to offer a better solution? You know your business domain, revenue needs, etc better than me or anyone else.

However, that does not mean I have to agree to advertising -- whether it is labeled ethical, green, sustainable, cage-free or whatever. If you're lucky, you won't have a lot of extremists like myself visiting your site; i.e. the advertising will be successful.


I'm working on https://snowdrift.coop for that.

We could use help, particularly from anyone who's good with css.


there is no such thing as an ethical comment.

Comments are a cynical deployment of our knowledge of crowd wisdom, media manipulation, and statistics to make people part with their opinions for others they wouldn't think they agree with.

--

Sorry, there is such a thing as "more" ethical ads. If you want to be pedantic and argue they should have said "more", suit yourself. But things are not black and white: your comment is in itself "manipulating" the reader, trying to convince them that ads are all the same and that they cannot be put on an ethical spectrum, which is not true. Tracking ads vs. a billboard? I'd much rather have the billboard (which I hate in and of itself, as billboards usually just make the place they're in uglier).


I'm not trying to manipulate anyone. I'm voicing my opinion. I don't buy anything from advertisements. Period. When I need something, I shop for it. And if you think I'm alone, you're kidding yourself.


That may be the case but you can’t discount the possibility that when you are shopping for something, your choices are influenced by advertising that you have previously been exposed to whether you are aware of it or not. Your decision to go shopping for something in the first place may be influenced by it too.


that's totally fine. I prefer a world without advertisement, ideally. I disagree with you that there is no spectrum of ad ethics.

And while you are not "trying" to manipulate anyone (maybe), I also disagree that you are not effectively influencing your readers' thoughts to some degree.

The analogy I made is this: even an internet comment uses persuasion techniques, on a smaller scale and less maliciously. Should we get rid of discussion forums too? I don't think so. And while an ad-less world seems like a nice experiment, it sounds pretty unrealistic. Regulating (outlawing would be nice) tracking in ads? More realistic, and it would fix 80% of what's wrong with 20% of the effort, if you ask me.


You're right that an ad-less world is impossible. Advertisements existed before you and I were both alive and they will exist when we're gone.

But that does not mean I have to partake in them, watch them, or allow them to consume my attention and time. I also don't need to spend my limited time on this planet trying to "fix advertising". I can simply block them and ignore the ones that slip through, and get on with my life. If this is an issue that is dear to your heart, that sentiment undoubtedly feels dismissive. I'm sorry about that.


I run a network ad-blocking DNS (Pi-hole) and consistently 25-33% of all my network traffic is blocked as ads. It's much more than I ever imagined. Now I'm used to a different internet; when I'm using the internet off my network it's like, WTF is this?


25-33% of requests? Or is this a percentage of bytes?

Because I wonder what percentage of bandwidth (in terms of bytes) trackers/banners/ads account for.

Need to set up a pi-hole ... just too many other projects....


It's a percent of DNS requests. It might be quite difficult to see what percentage of bytes it translates to, since the HTTP requests aren't actually sent.

My pihole is showing 18.7%-23% of requests blocked :)


Pi-Hole is a DNS solution so it's just blocking DNS lookups. Mine is currently blocking 43.9% of all DNS requests.


I set it up recently. It's about as much effort as buying and setting up a new laptop with ubuntu if you get a kit. I'd imagined it as a project beforehand, but in reality it's super easy and trivial (assuming you're comfortable using linux and ssh at a noob level).


Yep same. I'm a mere web developer that mostly works on Mac. Getting Pihole setup only took me like an afternoon after having an RPi sitting around doing nothing for months. They make it really easy, just follow the instructions. Also I'm lucky enough that my router has a friendly interface where it's easy to set the router DNS to pihole.


How would you measure bytes if the requests are blocked?


Some people are good at thinking a process through to the end. Others are not and ask questions at the first unknown. It's a large part of why I'm not a teacher.


Run the same requests through two different endpoints, one through the Pi-hole and one unfiltered, while monitoring the traffic on both.


Doesn’t that defeat the point of pihole? Though I suppose if what you want to do is measure things it makes sense.


Yeah, that's what I was thinking. Just to get a sense of how much bandwidth I am saving by in fact blocking the request.

It would be nice to know how much we're wasting.


I just have a decent set of ad blockers and the experience is similar. Unfortunately, it often results in weird experiences, or sites which don't work at all if you have ad blocking enabled.


Well that's interesting. For me 99.5% of websites work perfectly using ublock origin. The only .5% remaining are websites that actively refuse to serve any kind of adblock users, not because it breaks functionality on their site. I don't think I can recall having visited a single website that would have features break unintentionally because of ublock in the past few years.


Can't agree with you; Dynamics 365 is one of them (it's shit but I've implemented it at work). EDF (France's main electricity provider) also breaks for me. That's one example from a big company, and one example with a big user base.


The Denver Post just lost my business over this. They have one of those things that scrambles all the words for any user with the audacity to not want to see video+audio ads while reading their newspaper.

Is it their content to do what they want with? Sure.

Does the same logic apply to the $9 I used to give them each month? You're damn right.


Hard agree. If I'm paying for content, I'd accept a small amount of discreet advertising. Video advertising on a text/photo site pisses me off in general, and if I was paying for it? No chance I'd let that fly.


I don't think my experience is vastly different from yours. I do get some sites where pop-overs or cookie notifications are blocked but it's not obvious, and you just can't scroll. I could turn off those blocker settings, but the notifications are annoying enough that it's worth it.


It depends on how many privacy lists you have added, probably.

Normal display ads all being blocked is generally fine 99% of the time, but if you care about not being permanently tracked across the internet then there are a couple more domains you have to add - except some sites make it mandatory that those invasive fingerprinting scripts and port scanners run and report back a session, otherwise you're refused login or banned.


Pi-Hole/NextDNS also blocks ads in most apps. I used NextDNS (which has a limit on the free tier), and recently switched to Pi-hole running on my home server. I also use ZeroTier to connect to my server directly even when I am not on my local network, to still use it as the DNS server. Works great.


You can get even better coverage with the NoTracking lists (dnsmasq/unbound or dnscrypt-proxy) https://github.com/notracking/hosts-blocklists

They focus not only on tracking but also malware prevention, where possible via dns filtering.

Pi-Hole still does not properly support wildcard filtering, only via regex but that is not really efficient (requires tons of resources).


I paid them, $20/yr is quite good and I can add my parents' house, my in-laws' house, etc on the same plan and manage them all centrally.


Question about pihole: is it possible to turn off blocking for a website? Do you have to log into the pihole web interface to do that? I often go to websites where some crucial functionality is blocked by my adblocker (ublock origin), where I have to turn it off for that site.


I map the command below to a keyboard shortcut to disable all pihole blocking for 60 seconds via the pihole disable API call.

wget --quiet "http://PIHOLE_IP/admin/api.php?disable=60&auth=YOUR_API_TOKE..."

You can find the token in the pihole Web GUI at Settings > API/Web Interface > Show API token
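
If you prefer something reusable, here's a rough sketch of the same idea as a pair of shell functions (untested; the address and token below are placeholders you'd swap for your own, and the ?enable call turns blocking back on right away):

  # Assumptions: Pi-hole reachable at this address, token from the Web GUI
  PIHOLE_IP="192.168.1.2"
  PIHOLE_TOKEN="YOUR_API_TOKEN"

  # Pause blocking for N seconds (default 60); it re-enables itself afterwards
  pihole_pause() {
    curl -s "http://${PIHOLE_IP}/admin/api.php?disable=${1:-60}&auth=${PIHOLE_TOKEN}"
  }

  # Turn blocking back on immediately instead of waiting out the timer
  pihole_resume() {
    curl -s "http://${PIHOLE_IP}/admin/api.php?enable&auth=${PIHOLE_TOKEN}"
  }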


I just stop using sites that gimp themselves when I use an adblocker. There's tons of alternatives for most things.


That is not what GP asked.

And no, there aren't "tons of alternatives". In theory there are. But in practice, they can really make your life harder. Some may say that Signal is an alternative to WhatsApp, but if people you communicate with don't want to use anything but WhatsApp, then Signal is useless. I hate Facebook but when I want to plan an event, I found nothing better, simply because that's the platform that reaches the most people. Network effects... But also, your favorite show may not be on "alternative" streaming platforms, sometimes your job, or worse, the government may require a specific website.

There are extremists who are ready to find alternative friends, shows or jobs just to avoid using some website. It is a good thing these people exist, that's how progress is made. But for most people you have to make compromises.


> I hate Facebook but when I want to plan an event…

Ah! That's why I haven't missed Facebook. I am old enough that I don't plan events any longer.

(Or maybe I have no social life. Actually, that's right, I don't. ;-))


There aren't always alternatives - think shopping for certain items, government forms.


Why would a site that hopes you'll send them money in exchange for product, refuse your traffic if you have an ad blocker enabled? That just costs them money. Same for government forms, why would they refuse your traffic if you're blocking ads?


It might not be intentional to break the site experience for adblock users - but there are a number of sites that have implemented link tracking in a way that overrides the normal click (though sometimes not keypress) events, to let the tracking code do its thing. If the tracking code is blocked or fails to load, that means a lot of actions break.

Best part? Trying to convince the operators of such sites that users they cannot see in their "analytics solution" are worth fixing their site for is not exactly a straightforward job - from their narrow view, these users simply do not exist, because the tracking does not show them!


I wonder, too. Yet I still see these issues.


You have ads on your government forms?


Ad blockers have false positives.


How do you use the web when you can't click on links?

I can't effectively keep a mental blacklist of all the sites which I don't want to click on.


I don't. I mean, if it's a news site just search for the title in a search engine and you'll find other articles. If it's a web application I search for an alternative and bookmark that. If you really want to avoid even loading it, you can just block the whole site with your adblocker but I don't go that far.


That is what I currently do. It turns casual browsing into a frustrating scavenger hunt. The whole point of the web was to make links effortless so you could browse sites. This breaks that whole model.


Very few sites are broken with ad blocking. If you click on one, you just press the back button. No need for a mental blacklist.


Yes, you can do that via whitelist/blacklist: https://docs.pi-hole.net/guides/misc/whitelist-blacklist/
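
It also works from the command line if you can SSH into the box; if I remember right, pihole -w manages the whitelist and -d removes an entry again:

  # Stop blocking a domain that breaks a site you use
  pihole -w cdn.example.com

  # Remove it from the whitelist again later
  pihole -w -d cdn.example.com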


[flagged]


> allowlist/denylist

As of now, it is called whitelist/blacklist in PiHole [0]. Maybe it will change, maybe it will not, but there is already a place to fight that battle [1] and it is not HN.

[0] https://docs.pi-hole.net/guides/misc/whitelist-blacklist/

[1] https://github.com/pi-hole/AdminLTE/issues/1448


Why are people being so negative about this?

If the terms whitelist/blacklist are hurtful to some people because of all the racial baggage we've applied to the words white and black, why not switch to allow/deny instead?

Using allow/deny is more explicit and doesn't rely on the benign cultural associations with the colors black and white. The choice of colors used here is arbitrary. For example, one could just as easily use green/red in reference to traffic signal colors. Ask yourself, would it bother you if we used blue and pink for allow and deny? What if we used blue or white as synonymous with deny?

Two good reasons exist to change our habits, basic manners and clarity.

I'm sure I'll use the terms blacklist and whitelist from time to time out of accumulated habit. But there's no reason for me to cling to those terms. Being gently reminded to use objectively clearer terminology shouldn't engender hostility on my part. I try not to be an unpleasant person; part of that is that when someone tells me my behavior has a negative impact on them, I try to listen to what they say and modify my behavior. While actually effecting change can be hard, the underlying concept is pretty simple.


There is a real cost to changing APIs/documentation/UIs. My experience talking to black (one African, one European) coworkers is their reaction is "That's the problem you're going to fix?". When the company does a companywide initiative to remove "problematic" terms from APIs/documentation, but doesn't stop funding of politicians who support voter suppression that predominantly affects black people in real, practical ways, that bemusement can even turn to offense as they feel placated.

Of course, my coworkers don't represent all black people, and especially wouldn't claim to represent African Americans, but if even black people can hold this opinion, are you surprised others don't see this as worth the effort to change?


> There is a real cost to changing APIs/documentation/UIs.

This is an OSS project. If someone cares enough about it, they should submit a (non-breaking) patch along with a patch for the documentation. There are no costs to people who don't find it a valuable change.

> My experience talking to black (one African, one European) coworkers is their reaction is "That's the problem you're going to fix?".

Obviously this isn't fixing any of the fundamental issues, but it does bother some people. My preference is to respect the people who have problems with it. An easy policy is to simply avoid creating new software which uses that terminology and to accept any patches which fix it. That way the people who feel the change is important bear the burden of the cost (which is likely small for something like this).


Whitelist/blacklist have their origins in terms from the 1400s and nothing to do with race (they have to do with criminality). Twisting their etymology to fit some kind of racial bias is sort of weird.

And throwing aside 600 years of clarity for "basic manners" also seems rather weird. Sort of like banning the word "engender" because a small minority might find that to be offensive. It isn't clearer to use a different word than has been used for over half a millennium.


For a while, people were getting in trouble for using the word, "niggardly," even though it had nothing to do with the offensive term that it sounds like.

https://en.wikipedia.org/wiki/Controversies_about_the_word_n...


The difference being that the controversy around white/blacklist only appeared after someone said it was a controversy in 2018, which is extremely recent, and the wording doesn't contain any phonetic similarity to a term from slavery. Being able to be misheard is more of a problem when phonetics clash.

Should all terms for the colour-that-is-somewhat-the-absence-of-colour now be banned? Is Vantablack now racist?

Manufactured controversy leads you down a path of absurdism. It isn't helpful to the people it purports to help, whilst granting the vocal group the ability to say they're being helpful whilst actively ignoring any actual problems.


Oh, I agree. I think whitelist / blacklist is a manufactured controversy.

But my point is that, if people took the time to learn the background, perhaps (in both of these examples, and others), we could avoid the kerfuffles.


Blacklist/whitelist are not used consistently, so the clarity is not there. You can't see "whitelist" and consistently know whether it's going to be an allow list or a deny list.


I don't believe I have ever seen a single example of a whitelist not being a list of exemptions. Nor can I seem to find any.

Nor can I find any example where blacklist is not a list of denied subjects. A blacklisted person, website or process is immediately clear within their context.

Where the clarity is lacking is not clear to me. However the mismatch between "allow list" and "whitelist", is. The latter seems to have a different meaning altogether.


>objectively clearer terminology

Sorry but I find this claim (which I've heard from others too) ridiculous. "Blacklist" is an actual common English word in the dictionary. "Denylist" is an incredibly awkward-sounding neologism without any context or history behind it. There is no way that "denylist" is the "objectively clearer" one here.


I suspect it is the perception that it's a bit pedantic to correct an otherwise correct answer. I agree with you, but also don't really think it needs to be corrected every single time someone posts whitelist/blacklist.

EDIT: apparently setting allowlist/denylist won't work so it's not just being pedantic, it's wrong.


You and everyone else who exhibit this are reading into things that don't exist. Language has context, words are part of language and so therefore words have context too.


Exactly. And using white/black as synonyms for good/bad may be creating context (connotations, really) that we don't want. It would be fine if we hadn't already overloaded those words to refer to people... but, here we are. In the context we've created. ¯\_(ツ)_/¯


The original poster used the terms used by the technology. The best choices for changing this terminology would be to write a treatise for HN consumption (to reach the community at large) or to contact the authors of the technology that use this terminology (to fix the origin in this case). Sniping a 'random internet poster' is just lazy trolling.


A black celebrity (forget who) said that he came to the realization growing up that the only positive connotation he could find for black was "in the black" with regards to finances.

So, I kind of see the point.


The downside of that is being 'in the red', which is also potentially problematic.

To fix the problem, we either have to stop referring to any metaphor/symbol involving color with negative connotations; or we have to stop using color to identify and refer to people. I think the former is good for precision (allowlist/denylist are great identifiers in that regard), but won't really solve our other problems; while the latter is probably better for human dignity, mutual respect, and combating our propensity for tribalism/racism. (Or, why not, we could do both.)


Man I can't wait until I get special treatment because I drive a vehicle of color.


Really? Is this not Doublespeak?


Not quite. A number of applications use allow/deny for access control. I’ve seen allowlist and denylist more than ten years ago.


You can whitelist, yes, or there's an option to disable the entire thing temporarily for x minutes.

Yes, you have to log in to the interface unless you engineer a way around it.


On iOS there is an app called piHoleRemote that has a nice widget that allows you to disable pihole for x minutes.

Can be nice to use to quickly disable pihole to get through to a particular website.


Yes, you have to log in to disable it, but you could easily use the API. For instance, pihole.disable(60) with https://pypi.org/project/PiHole-api/


My solution to this is using Cloudflare Warp (Cloudflare's consumer-facing VPN).

When I need to access ads.google.com or analytics.google.com for my company, I turn on Cloudflare, and pihole is bypassed.


I run Pi-Hole on my network as well; it's wonderful. I'm terrified that it will stop working soon though, as companies start to use their own DNS servers, which I've heard is happening.


Interesting. I would think though that the move to their own DNS servers could extend to their own ads as well — that is, cutting out the middle man that is Google/etc.

I'm all for news sites, for example, hosting ads if I knew they were getting the money from those ads, knew the ads were actually coming from their site.


Sorry, I didn't mean their _own_ servers, I just meant hard-coding 8.8.8.8 into the DNS settings, for example.

I wonder if you could hijack those requests at your router and send them back to your Pi-Hole? But then they just switch to DNS over TLS...


I just have my network block outgoing DNS queries that aren’t from the gateway. But you’re so right that DoH is going to throw a wrench in this.


If an ad can use DoH to sidestep a firewall, so can an employee. If Google and Facebook were cunning (and nefarious, but that much is presumed), they would be aggressively developing a product that solves this problem for corporate networks, but at an enormous cost. Otherwise, when corporate networks solve this (and they will), home users who hate ads will just follow whatever pattern they settle on.


Encrypted DNS is going to upset a lot of corporate network management/monitoring... But the switch to HTTPS for everything caused similar frustration for IT admins a while ago.

In the corporate world, I think the future is managing your network by managing every single device on your network. Only let authorized/corp devices in and all those devices must be enrolled in an MDM solution that enforces all sorts of policy and includes monitoring traffic/DNS queries. Of course that's a lot more work than just monitoring things at the network level.


I'm not sure I understand how this is related to pi-hole.

Your computer sends the raw domain name to pi-hole (e.g. ads.google.com), and pi-hole returns 0.0.0.0 if it's on the block list.

There's nothing Google or anyone can do to make pi-hole stop working.
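
You can see it for yourself with dig (a quick sketch, assuming your Pi-hole answers at, say, 192.168.1.2 and the name is on one of your blocklists):

  $ dig +short ads.example.com @192.168.1.2
  0.0.0.0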


IoT devices are starting to use hardcoded DNS servers instead of using the one provided by DHCP, which negates the benefits of Pi-Hole on those devices.

For now, I've configured my router to force all UDP port 53 traffic to my Pi-Hole which overrides what I mentioned above.

But, in the future we may start to see IoT Devices hard-code DoH servers which will be harder to force over to the Pi-Hole.
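
For anyone curious, on an iptables-based router the redirect is a NAT rule roughly like this (a sketch; it assumes the LAN interface is br0 and the Pi-hole sits at 192.168.1.2 - the source exclusion keeps the Pi-hole's own upstream lookups from looping back to itself):

  # Send any LAN client's plain DNS (UDP 53) to the Pi-hole
  iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 \
    ! -s 192.168.1.2 -j DNAT --to-destination 192.168.1.2:53

  # Repeat for TCP 53 to catch TCP fallback as well
  iptables -t nat -A PREROUTING -i br0 -p tcp --dport 53 \
    ! -s 192.168.1.2 -j DNAT --to-destination 192.168.1.2:53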


> I've configured my router to force all UDP port 53 traffic to my Pi-Hole

Oh wow, how do you do that?


This is pretty much why Google are a huge proponent of DoH.


Would you mind sharing the blocklists you use? I have gotten to a ratio like that, but I have noticed that it was causing more issues with regular websites for my guests, so I removed many of the custom ones. I'd like to try some others if you have suggestions.


  [i] Target: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
  [] Status: Retrieval successful
  [i] Received 59896 domains

  [i] Target: https://mirror1.malwaredomains.com/files/justdomains
  [] Status: Not found
  [] List download failed: using previously cached list
  [i] Received 26854 domains

  [i] Target: https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt
  [] Status: No changes detected
  [i] Received 34 domains

  [i] Target: https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt
  [] Status: No changes detected
  [i] Received 2701 domains

  [i] Target: https://dbl.oisd.nl/
  [] Status: Retrieval successful
  [i] Received 1167690 domains

  [i] Target: https://phishing.army/download/phishing_army_blocklist_extended.txt
  [] Status: Retrieval successful
  [i] Received 21379 domains

  [i] Target: https://raw.githubusercontent.com/deathbybandaid/piholeparser/master/Subscribable-Lists/ParsedBlacklists/AakList.txt
  [] Status: Retrieval successful
  [i] Received 5 domains

  [i] Target: https://raw.githubusercontent.com/deathbybandaid/piholeparser/master/Subscribable-Lists/ParsedBlacklists/Prebake-Obtrusive.txt
  [] Status: Retrieval successful
  [i] Received 3 domains

  [i] Target: https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-blocklist.txt
  [] Status: Retrieval successful
  [i] Received 14724 domains

  [i] Target: https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-malware.txt
  [] Status: Retrieval successful
  [i] Received 412 domains

  [i] Target: https://raw.githubusercontent.com/hectorm/hmirror/master/data/adaway.org/list.txt
  [] Status: Retrieval successful
  [i] Received 9182 domains

  [i] Target: https://raw.githubusercontent.com/hectorm/hmirror/master/data/disconnect.me-ad/list.txt
  [] Status: Retrieval successful
  [i] Received 2701 domains

  [i] Target: https://raw.githubusercontent.com/notracking/hosts-blocklists/master/hostnames.txt
  [] Status: Retrieval successful
  [i] Received 209608 domains


I noticed that some apps on iOS spammed the DNS server if they couldn't connect to their ad networks, which should affect battery life negatively.


I added this to my /etc/hosts

https://github.com/StevenBlack/hosts
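
(For context, the entries in that list are just host mappings to the unspecified address, something like the hypothetical lines below, so every listed hostname resolves to nowhere:)

  0.0.0.0 ads.example.com
  0.0.0.0 tracker.example.net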

What is the advantage of having DNS on a separate device other than that it provides ad blocking for multiple devices?


That’s the main benefit.

But also you can have more flexible block patterns. I run DNSCrypt-Proxy and my block lists can have wildcards. With /etc/hosts you have to enumerate each origin. It can also do things like IP blocking where if any domain resolves to a known ad network IP, then that request is blocked.

But mainly, DNSCrypt-proxy encrypts all my outgoing queries and round robins them across resolvers. (Also hi dheera!)
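
To illustrate the wildcard bit, if I remember the blocked-names format right, dnscrypt-proxy takes one pattern per line, so a single entry can cover a whole family of hosts (the domains here are just examples):

  # this domain and all of its subdomains
  doubleclick.net
  # anything whose name starts with "ads."
  ads.*
  # substring match
  *tracker*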


I use the noscript extension that uses a scripting whitelist. Bit of a pain at first, but pretty soon your browser will be flying. No extra hardware needed.

Kid's computer has dnsmasq as a similar solution.


My issue with Pi-hole or any other DNS ad blocker is that I can't whitelist some websites that I love. As evil as ad networks are, I still want my favorite sites to get some revenue.


Maybe your favourite site would welcome a direct contribution, instead of an ad click


Mine was that high, until I ditched all the family Android devices. It's now around 2-3%. It's quite an extraordinary difference.


I use NextDNS for this, it's brilliant. (I'm not affiliated with them in any way, just a happy customer.)


Same. I got tired of Pihole breaking for one reason or another (although I certainly adore the project). NextDNS works extremely well, provides a native app for every device, runs on my router, and is dead simple to maintain.


Hear hear. The only problem I’ve had was when I set it up on my router and my IKEA smart lights stopped responding after some 30 min or so. Turns out the gateway phones home and those calls were blocked, so for some reason or another the gateway just stopped responding to commands. Restarting it or resetting the network made it fly again, but only for the set time before it phoned home again. I was very disappointed by that, after having read some article here on HN arguing that IKEA had actually done IoT sort of right. Oh well.

Obviously not a NextDNS specific issue, it’d happen with anything that blocks the call, but just putting it out there for the next sucker that tries to google why their IKEA gateway suddenly stops responding.


Would Pihole affect latency in online games?


I'm going to be the inevitable guy in the opposite direction: I don't think cross-domain requests were actually saving that much bandwidth. The most common use case I can think of would be JavaScript CDNs. The problem with that is that JS libraries update frequently - even something really common like jQuery has hundreds of releases, all of which get their own separately-cached URL. So the chance of two sites using the same jQuery version is low. Keep in mind that public JS CDN URLs are rarely refreshed, too - it's more of an indicator of when the site was developed rather than the latest version the site was tested with. So you could hit hundreds of sites and not get a cross-domain cache hit.
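
For example, two sites pinning different jQuery releases will never share a cache entry, because the version is baked into the URL:

  https://code.jquery.com/jquery-3.4.1.min.js   <- what site A references
  https://code.jquery.com/jquery-3.5.1.min.js   <- what site B references, cached separately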

Even if you did share a URL with another site, the benefit is low compared to what you can do with same-domain requests. Most sites should be served over HTTP/2 already, which means even unoptimized sites should still load decently fast, as requests aren't as expensive as they used to be. You can get almost all of the same bandwidth benefits from a cross-domain cache by just making sure your own resources are being cached for a long time.


Mozilla ran the numbers and it's not a huge penalty.

It's just frustrating that it's one more optimization that is getting turned off. And makes the internet just a tiny bit worse as a result. It's like death by a thousand cuts.


A much more significant performance issue with web tracking is usually the absurd amount of JS loaded.

It's almost impressive how they manage to load so much crap. Just visit a site like mediaite.com, the list of trackers is damn long.


For more than a decade I've been campaigning (to any of my employers that utilise adverts on their platform) to drop adverts, with the primary argument being page-load performance. The last time I looked, adverts were adding an additional ~35% load time to the page. Anywhere from 5% to _60%_ (!!) of visitors were navigating away before page load completed (varied depending on company/product of course), and a staggering 80+% of those visitors would have had a full page load if the adverts were not there.


I often wonder incredulously whether developers responsible for particular sites really comprehend how bad performance has gotten. Browsing threads like this on HN makes it clear that they are probably well aware, but have no choice in the matter. In a way that's even more depressing because only a tiny minority of people are happy with the arrangement.


I walked away from one project. 3rd party scripts were not the only problem but were the last straw.

I have a community site I want to build. If it stays small I can probably run it for $20 a month all in and not pester anyone. But I’m still keeping my eye on some of the saner ad networks that use subject matter instead of user tracking to target ads. That might be an option.

Linus Tech Tips has a video where he gives us a peek into their finances. Among other things, the merchandizing arm makes them about a third of their revenue, and no one advertiser is allowed to pay more than that, so they can maintain a degree of objectivity. I think a lot of us don't want to approach sponsors, so we feel sort of stuck with ad networks.

And I’m not much of a materialist but I’m a tool nerd (you possibly don’t need it, but if you’re gonna buy it, get a really good one) so I’m not sure how I’d do merchandizing, since I’m more likely to recommend a brand than have something made for us. That leaves what? Amazon’s “influencer” BS, which is more money for Amazon? Discount codes, which are untargeted consumerism?


> I often wonder incredulously whether developers responsible for particular sites really comprehend how bad performance has gotten.

For every site I've developed and have been tasked with adding adverts, and every colleague that I have worked with that has done the same:

- Yes, we are aware.

- Yes, we doth protest.

- No, we were not successful.


What was their rationale for not doing as you suggested?


Sunk cost is/was my conclusion. At one firm in ~2013, advertising was bringing in $600k revenue per annum - we estimated a loss of potentially triple that (!) - but the response was apathetic. There was very much a reluctance to accept that the "advert management team" (yep, they had a team dedicated to managing the adverts, who had the duty of "managing" Google Ads) would need terminating, too.


I worked a contract where we slaved to get our load time down to some respectable number, and then they launched the site and load time was multiplied by just the analytics software (it was a company website, they weren’t running 3rd party ads).

How demotivating. It was time to start thinking about moving on anyway, but I basically stopped trying to pursue contract renewal at that point. All that work (and uncomfortable meetings) so Google could triple our load time.


It's the chain of analytics.

You load one ad and they want their own analytics or they try to stuff multiple ads into the same slot so you get multiple analytics.

We clocked one ad at 800Mb loaded once.


Yup.

The really frustrating thing about this bit is that because it disables optimizations, it potentially impacts sites where they don't actually use tracking.


Note that this particular change does not apply to non-third-party resources. That's why performance impact is minimal.


This is akin to the whole class of CPU vulnerabilities we've seen (Spectre/Meltdown/CacheOut/...) where performance optimizations are at odds with security.


It is remarkably similar. If it weren't for the assholes trying to steal from us, our whole computing experience would be faster.


Browsers need to own tracking, and it's clear that Firefox and Safari agree.

I don't object to (silent, low resource, banner) ads, even targeted ones, as long as the targeted ads aren't building a comprehensive profile of me.

I think my ideal would be telling my browser a list of a couple interest areas (prosumer tech, sci-fi, dog peripherals) that the website could target on to serve ads. They'd get targeted ads, I'd get privacy, and I'd get ads that actually match things I care about.


Due to cache abuse I have all caching disabled in Firefox, and this is a nice move (even if I will continue to use it without cache).

Anyway, one more thing that I can observe on Ubuntu 20.04: Firefox has become noticeably faster. I don't know if this is because it's not from the Ubuntu repositories or because some serious optimizations were made.

"On Linux, the WebRender compositing engine is enabled by default for the GNOME desktop environment session with Wayland. In the previous release, WebRender support was activated for GNOME in the X11 environment. The use of WebRender on Linux is still limited to AMD and Intel graphics cards, as there are unresolved problems when working on systems with the proprietary NVIDIA driver and the free Noveau driver."

(Fax machine enthusiasts, please stop abusing the thread and move to Ask HN or something)


Blocking ads and installing some sort of cookie auto-accepter/deleter[1] is the best and most effective optimization saver you can have without disabling JavaScript.

[1] https://www.i-dont-care-about-cookies.eu/


Or just add the filter list [1] to uBlock Origin

1. https://www.i-dont-care-about-cookies.eu/abp/


In many parts of the world, data still costs money, and it annoys me that if you pay for 10 GB a month, the sites you surf to are a few KB, and then up pop the ads, which are MBs, and steal your data allowance. You're actually paying for ads you don't want.


>but web advertising and trackers are already responsible for a huge chunk of performance issues already.

Indeed. The brave move would be for Firefox to include a built-in ad blocker, but I don't think Mozilla has the cojones.

>Of course we'll have the inevitable guy pop in here and talk up how awesome web tracking is because it helps sites monetize better, but that's all bullshit.

I think if adblocker usage became widespread we would in fact see the death of a lot of websites, but to be perfectly honest I kinda want that to happen because advertising is cancer.


Security in general is a performance and usability killer. If "attackers" were not a thing, your internet would be much, much faster; hell, your smartphone wouldn't need to encrypt itself and paying in a shop wouldn't need chip & PIN.

What I’m saying is that a lot of applications have many attackers in their threat models, but advertisers have so far been out of scope.


Advertising destroys everything. If something is based on ad revenue, it goes to shit ultimately.

The latest casualty was podcasts. It's revolting.

Ad-based businesses need to be boycotted until this disease is in lasting remission.


> The latest casualty was podcasts. It's revolting.

Hmm?

Yes, there are adverts on all the podcasts I listen to. Many of my favorites offer members only ad-free versions. Usually I suffer through the ad supported versions because the adverts are easy enough to skip.

Some podcasts have too many adverts or annoyingly inserted advertising. Those are pretty 1 and done. No point listening to them.

IMO the (current) podcast market is a good example of how we can enjoy content and know the producers are compensated without having to deal with obtrusive marketing crap.

It is getting clear some podcasting is getting sucked into things like Spotify, but there is still enough good content I don't think it's a problem.


I absolutely don't respect having my weird podcast-friendship relationship with the host exploited by fully integrated ad pieces whispered to me in a trusted voice. That. Is. Sick.


I respect that podcasters are spending their time and effort putting together a program which I enjoy. The price I pay for that is listening to them talk about some product I don't give a shit about for a few seconds until I can hit the skip button. To me, there is a clear cut deal with no deceit.

Do you feel people should volunteer their time gratis to entertain you?

What Hacker News does with adverts slipped into the newsfeed is essentially the same as what podcasters do.


Please don't take offense in this, but your "advertisement doesn't influence me" take is quite silly and naive. Not being rational about the effects of advertisement is the whole point of these product placements. It is not information, but manipulation.

> Do you feel people should volunteer their time gratis to entertain you?

No. They can do as they please. I'd prefer them to do it for the sake of it, or to charge in a direct manner like a subscription model. However they should not, ever, try to manipulate me for their profit.

I merely said that the scene, once dominated by enthusiasts/hobbyists, changed to a predominantly revenue-focused environment. You see the same dynamic at play that made YouTube the signal:noise hell it is today.

Mind you, the German and American podcast scene were very different in that regard, as in America monetization of every possible creation seems to be much more common. I assume this is down to a different set of factual constraints and values.


With data caps you are paying to be advertised to.


Really? Chrome wants to protect against tracking? Isn’t that their business model?


♥ Mozilla.


How is that even legal


I know, right?

Like, if I were to be caught doing this to a random woman it would be appropriately labelled 'stalking', yet when a company does it they potentially have a patentable marketing technique on their hands or something.


It's good when a company does it because they create value in the economy. It's only bad when a person does it because no value is created.


You forgot the /s. This is HN; here you find people who can take that seriously and agree.


IMO the /s requirement applies anywhere. Sarcasm is dead; since 2018 or so, literal expressions are literally interpreted literally.


Poe’s Law was named in 2005. Which was interesting news to all of us on Usenet for whom this phenomenon was already known before Eternal September or Green Cards stole the show.

Sarcasm was already dead before “spam” meant ads instead of scrolling a forum or chat window by repeating yourself (exactly like the Monty Python sketch it alluded to).


Sarcasm only works if it can successfully communicate that it is sarcasm, be it through body language, facial expression, absurdity, or memeing. On the internet you are a random faceless stranger to me, so how can I distinguish sarcasm other than by guessing?

If the priors were the other way then people would complain that nobody takes anyone seriously.


I don't know if you're being sarcastic. However, just because a company makes money doesn't mean value is created. Like when you win in poker against someone: you're making money but not creating value.


I mostly agree with your point, but it has to be said that poker players are creating entertainment value for each other. Even if the cash portion of the game is zero or even negative sum.


I am not who you were responding to, but I think playing poker would create entertainment (even for the loser) which could be considered something of value.


I wonder how many downvoted you because they understood the sarcasm but they agreed with the non-sarcastic interpretation of it.


In values we trust. Shareholder value, that is.


nothing is illegal if no one understands a thing


We detached this subthread from https://news.ycombinator.com/item?id=25917326.


> It is how they track people who have been suspended on the platform

That sounds like a legitimate interest to me.


Read 'The age of surveillance capitalism'. Engineers should understand the business models they create.


Engineers don't respect any subject outside of STEM, education like this would fall on deaf ears.


I understand this knee-jerk reaction, but please don't judge engineers by what they post on HN. This place is... odd. (as I'm sure you know!)

If I formed my opinion only from HN, I'd think most engineers love: big-tech, advertising, electric cars, Apple, tech-enabled tracking (autos, web, cell-phone, watches, exercise machines, music players - it's ok if business profits!), and tend toward self-righteousness, narcissism, and virtue-signalling.

Of course, most of us are just living our lives and trying to get by. I don't know where this self-important insufferable attitude comes from, but I suspect it's a few folks who are very noisy. Most 'normal' people don't spend much time posting to sites like these, so there is a selection bias. Sadly, I also suspect that this attitude is an advantage in today's environment. It is a mirage of self-confidence, and telling the two apart can be very hard (especially for a potential employer).


Hi! Systems engineer for two decades now. I have a deep respect for philosophy, natural medicine, photography and the environment. I suspect many other engineers would have interests outside of their profession.


That's not true; they respect the sciences. But only sufficiently "hard" ones like chemistry and biology.


You seem to think that the meaning of "STEM" includes anything that anyone applied the word "science" to. But no, the "science" part is precisely the "hard" sciences. E.g. psychology, economics and theology aren't included in STEM.


I thought it was STEM instead of HSTEM. Silly me.

Sarcasm aside, not all natural sciences are treated equally. There are differing attitudes towards astronomy, oceanography, and climatology, for example.


science is STEM no?


But not all sciences are respected.


There are plenty of categories of human for which this community would not stand for an overly broad, coarse generalization like that.

Personally I'm not even convinced your claim is effective as a prejudice. What I'll concede is that many engineers I've met seem to be harsher than average on pseudoscience and some varieties of manipulative lies, but that's to be expected, as they have distinguishing knowledge for such things to clash with.


They seem to respect their inflated salaries.


In America, what else is there? :3


Because enough people think making laws restricting companies in any way prevents "innovation". Corporations should be able to do whatever they want because if they were truly bad, they would just go out of business, right? It's the worldview of a third grader.


I don't know why you're getting downvoted; this is clearly the dominant ideology of Silicon Valley.


is it just me or are more people switching to Firefox these days?


That's my impression too. Not surprising though - Firefox has just recently started to get good again (trackpad support, GPU rendering, privacy protections etc), while Chrome gets progressively worse.


Not sure. People on-line are switching but I haven't been able to convince many off-line - Chrome is necessary for a lot of poorly coded sites.


> Chrome is necessary for a lot of poorly coded sites.

Just like IE was.

And just like in the IE days, some of us are cheering enthusiastically for every better alternative while others defend the incumbent :-)

It will take time, but if we all do something, sooner or later the old "best viewed in IE6/Chrome" websites will become an embarrassment to management and then it will get fixed ;-)

Edit: The same will probably (IMO) happen with WhatsApp now, and possibly (again IMO) even Facebook and Google if they don't catch the drift soon. I can sense a massive discontent with them everywhere, and for at least 3 different reasons: spying, UX and functionality regressions, and also their stance on politics (ironically I think large groups on all sides of politics want to bludgeon those companies over various issues, and few except investors really love them).


On the desktop there has been minor movement as a percentage of the whole market, but Firefox is up 9% on its own share.

https://gs.statcounter.com/browser-market-share/desktop/worl...


Chrome is losing market share since October? What changed?


I downloaded and looked at the raw data. The biggest reason seems to be the new Edge browser gaining popularity. It went from 5.8% to 7.4% market share since October. I'm not sure why this chart displays both IE and the old Edge, when together they're a third of the market share of new Edge.

Safari and Firefox also are up since October, but I'm not sure why that is. For Safari I suspect new Apple devices being purchased around the holidays, but that's just a guess.


Not according to their own metrics. Monthly Active Users and New Profile Rate are the relevant metrics, and both are in decline.

https://data.firefox.com/dashboard/user-activity


We detached this subthread from https://news.ycombinator.com/item?id=25917559.


Evergreen comment.


I switched in late 2017 when they released quantum or neutrino or whatever they called it, a huge performance release.

As a backend dev and security focused eng I have little reason to test drive changes in all browsers.

FF has been smooth and stable for me across desktop OSs. Having no reason to alternate between that and Chrome, I’ve been confused by people saying it’s slow.

It’s been, to my memory, a flawless experience for 3+ years.

On the flip side, Chrome is a spy app, and web devs' perception that it's faster does little to move me to use it.


I've had a similar experience. My only gripe is that the Facebook Container extension / Multi-Account Containers[0] stopped working for some reason, and I haven't been able to get them working again. I love that I was able to sequester all of Google's real estate from all of Amazon from all of my work tabs, and so on.

[0] https://support.mozilla.org/en-US/kb/containers


The FB container is working OK on my side. It's not that helpful, though.

Maybe try a new profile to isolate the issue.


FB Container has never stopped working for me. You should be able to use the containers again, as we do.


Just interact with the Facebook chat in Chrome and FF, and you'll see that FF is significantly slower.

That being said I use 90% FF, 9% safari and 0.999...% Chrome, because FF handling of tabs/containers/add-ons offers superior UX despite the performance annoyances. IMO, obviously.


I think it's down to the fact that most websites target Chrome first, as it has the biggest market share.

I've noticed Reddit is rather slow in FF, but other than that I haven't really noticed anything massively slow or broken.


[flagged]


You can already do it in the preferences of Firefox. It can break some websites though.


And I am stuck on an old Firefox version from before they cracked down on extensions.


By Supercookie they mean Evercookie, right? That seems to be what they’re describing.


Evercookie is a Javascript project that produces respawning super cookies:

https://github.com/samyk/evercookie

It's quite dead now, stopped working around 2017: https://github.com/samyk/evercookie/issues/125


Is either a formal enough term to argue that one is more correct than the other?


Doesn't NoScript do the same job?


Perhaps you are trolling? NoScript is a giant hammer that smashes 90% of the functioning parts of most modern web pages. This new feature in Firefox partitions caching in a way that mostly won't affect how a site works but will block one nefarious tracking technique.


It takes time to tune, but I find after a month of usage I rarely need to tweak things. The tweaking itself is eye opening as it really makes you more aware of what is going on.


It is just one or two clicks to load a website, and it's also safer to browse the internet this way, in my non-expert opinion.


No. You can track users across sites with the HTTP cache without running any JS.


I use both uBlock and NoScript. It was a pain for a few weeks to get used to NoScript; now I can't see going back to using a browser without it.


Can you please detect I'm using Firefox, and not show the "Download Firefox" banner on top? I'll be able to save a few pixels of vertical space.


Firefox takes privacy so seriously that they fail to detect you are on Firefox, if you are using Firefox!


But not seriously enough to remove Google Analytics from their website.


Google currently pays for Firefox development. Are you ready to be the money source instead?


I looked because I was curious, and on both desktop and mobile, in all browsers, the site's topnav includes their Mozilla logo, a Download Firefox button, and links to a couple of other Mozilla sites about Internet Health and Donate.

I imagine they could still choose to hide the blue button when you're on Firefox, but that wouldn't save you any vertical space, since the topnav menu of links and logo would remain.


It's there always in the same place so you can download Firefox for someone else, or for another device.


Good job Mozilla! Do what Google never will - put users' privacy front and center.

On a sidenote, I might now re-enable cache that I kept disabled (well - cleared on exit) because of supercookies. I don't care that much if a single page tracks me, but I _really_ don't want Google to track me across sites. If Firefox protected me against that.. they would have one very grateful user. :)

EDIT: this also highlights why Google is so invested in Chrome - they can make sure that privacy doesn't interfere with their money-making machine. They really are brilliant. Brilliantly evil.



So? Mozilla should still catch up (if/where needed) and surpass Google on all privacy fronts. The goal should be that Google can't track Firefox users in the default configuration - rest assured, this will never happen with Chrome/Chromium/Edge, and probably not with any other Chrome-based browser either.


Thank you, now i am resting assured.


From the article: " These impacts are similar to those reported by the Chrome team for similar"



