Purge site data when site identified via old tracking cookies (bugzilla.mozilla.org)
158 points by cryogenic_soul on March 5, 2020 | 131 comments



I wouldn't mind going back to a JavaScript-less web experience. I know not all tracking is based on JS, but the browser provides so many heuristics this way: screen size, cursor location, installed plugins. Give me reasonably formatted HTML, and something a little bit more powerful than curl.


> I wouldn't mind going back to a JavaScript-less web experience.

My default policy is to not allow JS to run, so my experience is already mostly Javascriptless. And, I have to say, my user experience on most web sites is actually better when I don't allow Javascript to execute.


I run a similar policy, but I'd say more than half of the sites I visit display nothing but a blank page without JavaScript.

Sites that function without it are the exception, not the rule.


Well, yes, how well this works depends on what sites you tend to use. The vast majority of the sites I frequent either degrade gracefully in the absence of JS, or never used it in the first place.

I think there are three or four that both require Javascript and that I want to use badly enough to allow JS.

When I encounter a web site that doesn't work without JS, I just move on. But I understand that others may not want to do the same.


I have an ephemeral container (systemd-nspawn with -x switch) with almost-default configuration of firefox (only uBlock Origin added) for such websites. After I'm done, I close the browser and everything gets deleted.


I do approximately the same in macOS:

    "/Applications/Firefox.app/Contents/MacOS/firefox" --no-remote --profile "$(mktemp -d)"


Mine has the advantage of the browser processes not having access to anything important, in case of an RCE vulnerability. An attacker would just see a vanilla Debian install with no juicy user data, and only the ~/Downloads directory linked from my "real" system.


Isn't that what the "private browsing" option of Firefox does?


Mostly, yes. The bonus is that you don't use your normal settings and extensions, which also means ALL websites work, regardless of privacy-related settings that might break some of them. So it's a handy alternative.


That sounds very cool! Do you have that setup documented somewhere, or perhaps even pushed it to some public repo?


I do not, sorry. I have to set up a blog some day, but I have been putting it off because most of the time I can't think of anything interesting to say. :)

Anyway, I was mostly inspired by an article on how to set up Steam in a container - it has all the details, including how to pipe PulseAudio inside (so in my case, YouTube videos in Firefox can have sound). Except mine is Debian-based (so debootstrap instead of pacstrap to populate the container).

http://ludiclinux.com/Nspawn-Steam-Container/


I've found that about half of the sites that render a blank page without JS are just setting style="visibility: hidden" on the <body> element. I cannot think of any good reason for browsers to continue to allow that CSS property to be set on that element. "Flash of unstyled content" is not a valid concern here.
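
The pattern looks roughly like this (a minimal sketch of what those sites ship):

    <!-- the whole page is invisible until a script flips it back on -->
    <body style="visibility: hidden">
      <!-- page content -->
      <script>document.body.style.visibility = "visible";</script>
    </body>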


"Flash of unstyled content" is not a valid concern here.

Except it really is. Most websites are attached to businesses in some way. If you see statistics that the flash of unstyled content means 10% of your traffic leaves the site after less than a second, you do what you can to fix it, and unfortunately that's often hiding everything until it's ready.

Pragmatically, most businesses would give up users who don't like tracking long before they give up users who care about styling, because there are just a lot more people who care about styling. It's an unfortunate fact of web life.


> and unfortunately that's often hiding everything until it's ready.

These sites usually aren't successful in that goal. Sure, they may be able to prevent content from showing up until the webfont has been loaded, but in my experience it's still extremely common for content to seriously jump around as the ads continue to load, especially on smaller screens (mobile).

So using "flash of unstyled content" as an excuse just doesn't hold up: users still have to learn to give a site a few extra seconds to settle before it's safe to interact without the content reflowing under your finger to put an ad where you wanted to tap, and breaking accessibility "for the sake of preventing FOUC" is dumb when you still have FOUC.

A similar tactic I've also seen that is even more unjustifiably anti-user is when the <body> element has "overflow: hidden" set until some heinous script that does its own poor implementation of smooth scrolling can get up and running. These sites are universally improved by blocking such scripts and enabling native scrolling. This is one of the reasons why I believe browsers should bundle together a large number of permissions that are off by default for every site the user has not flagged as a web app. Google Maps has a good reason to interfere with scroll behavior; a news article does not.


> flash of unstyled content means 10% of your traffic leaves the site after less than a second

Really?

Why does this happen at all? After all, the "visibility: hidden" trick is itself styling...


Some people decide whether or not a site is worth looking at extremely quickly - if it looks "old" or "broken" they hit the back button immediately. They will wait for the first paint though, so it can be better to delay anything appearing until everything is ready. The impact depends a lot on demographics.


And the second part of my question: where does the bug lie that this happens at all?


> "Flash of unstyled content" is not a valid concern here.

Well, presumably it is to some people, or they wouldn't be setting it…


A site that renders completely blank without JavaScript is a site that I don't enable JavaScript for. They don't want me to view it, and nine times out of ten I can find the information elsewhere.


A pious stance, for sure.

Yet I'm convinced you probably do enable js for various payment portals and govt/financial websites, and they often tend to go blank or loop out far more than the average site.

Apart from carefully cultivating a working NoScript whitelist over the years, the simplest solution may be to use a different browser for these sorts of interactions.

Reminds me to back up my whitelist. It's actually quite valuable.


On the other hand, I would hope that government and banking sites don't use ads and tracking cookies.


Agreed, but you can usually block such things perfectly fine on govt websites without losing any functionality whatsoever.


Maybe that's less frustrating than a site that works, and then doesn't - like a checkout path with a hard dependency near the end.


What would be the ways to keep JavaScript virtually “off” but viewable?

I mean, it's all the same framework-min.js working like a modern COBOL, not custom mathematically significant compression or advanced dynamic P2P webpage distribution, right? It feels to me like a meta rendering engine could be created so that none of the functions needs to be evaluated at runtime, or content fetched by it, or whatnot.


Trying to emulate running JavaScript will quickly devolve into running it, but slowly and poorly.


Hit or miss. I find a noticeable number of sites to be unusable. Some sites send completely usable HTML on the initial response and it's a great experience.

Back when this used to be my default policy, I'd whitelist sites that I deemed worthy. I had maybe one or two dozen sites (my bank among them) enabled.


On Mobile I've got NoScript active, but it does break a lot of sites. Most sites are easily fixed by allowing first-party javascript while blocking third-party javascript. Third-party javascript is most likely to be trackers and ads anyway.


Agreed. Particularly blatant example: Android developer documentation. Once you allow it to run the gstatic script, it slows to an absolute crawl.


If "reasonably formatted" means CSS comes along for the ride, prepare for tracking pixels behind onhover rules, and on and on we go...


Why would that matter? If the browser fetches images hidden behind onhover, then presumably it would also fetch plain <img> tags, so why not use those. If it doesn't fetch onhover images (which admittedly might break the "reasonably formatted" bit), then tracking pixels are useless because, well, it doesn't fetch them.


Because after you hover on something, it can be an indication of intent. Just loading an image when the webpage loads doesn't give you any insight into what the user 'interacted' with.


> Because after you hover on something, it can be an indication of intent.

Yes, that's (one of the many reasons) why javascript needs to die.

> Just loading an image when the webpage loads doesn't give you any insight into what the user 'interacted' with.

That's what I said: regardless of whether the browser, when loading a page, does or does not include hidden images in what it fetches, that cannot possibly tell you anything about what the user interacts[0] with after the page has loaded and the browser is no longer communicating with your server.

0: edit: obviously excluding clicking links or other navigation-causing interactions, but that's a separate problem.


> > > Because after you hover on something, it can be an indication of intent.

> Yes, that's (one of the many reasons) why javascript needs to die.

You're missing the point. CSS allows you to load tracking pixels on hover, so you can do that without Javascript.


I was going to say "no it doesn't, that's blatantly stupid, you load things during page load, that's why it's called page load" but on further research, Firefox apparently does do that now. I hate everything and I'm once again glad I stopped updating years ago.[0]

0: I decline to treat the unintentional flaws of [Firefox] as more important than the intentional ones.[1]

1: https://www.gnu.org/philosophy/upgrade-windows.html


> Yes, that's (one of the many reasons) why javascript needs to die.

On-hover can easily be done with CSS:

    selector:hover { background-image: url(/tracking-pixel.gif); }

That can trigger a tracking pixel.
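
And because the selector can be as specific as you like, the URL can encode which element was hovered - the "indication of intent" mentioned upthread. A hypothetical example (the class names and endpoint are made up):

    .product-42:hover { background-image: url("https://tracker.example/t.gif?item=42"); }
    .product-43:hover { background-image: url("https://tracker.example/t.gif?item=43"); }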


A browser that disables Javascript could easily circumvent this too, either pre-loading URLs in :hover rules or ignoring those rules.


By reasonably formatted, I would actually exclude onhover. To clarify, I meant a document that would be readable without any interaction. You've got me wondering if privacy-focused browsers like Brave have the ability to disable potential tracking mechanisms like onhover.


Disabling CSS :hover and :active states might break sites. Tracking pixels could be as simple as a background image set for either of those CSS states.


Browsers could disable downloading of external resources for CSS :hover and :active. When the user interacts with the interface, the CSS will still get applied but the tracking URL won't be accessed.


Alternatively, such URLs can always be pre-fetched on page load.


Agreed, but it would likely break things that don't matter much to me. A well defined semantic document should say what it has to say, without the use or necessity of :hover or :active states.


Turning off JavaScript is more about usability than tracking prevention.


Completely agree. I default to JS-disabled, and will sometimes enable it. More often, though, I just move on. Of course, if you want to use FB or other surveillance sites, this won't work well for you.

What methods are there to exert more control over the JS engine in Firefox? Screw performance, I'd have a lot of fun writing my own hooks/wrappers to overload certain method calls.


For Facebook, turn off JS and then go to m.facebook.com.

It is quite usable, and I probably wouldn't use FB without it.


And if you want messages access just use mbasic.facebook.com


Nothing is stopping you from doing this on your next or current project. I've been 100% committed to getting the web back to as lightweight as possible for the last couple of years' worth of development.


Almost all front-end development jobs are looking for React or Vue skills.


An argument for non-interactive semantic HTML is an argument against the front-end/back-end dichotomy. That is to say, your application should directly generate the page you want the user to see, and you shouldn't need to spend all your time designing custom JavaScript for it.


I agree, but no one is hiring along those lines. People are being paid to write JavaScript where it isn't needed per se, but where it will still get the job done.


There's nothing wrong with non-interactive html web pages just like there's nothing wrong with JS rich web applications. The web is big enough for everyone.


There absolutely is a place for the interactive web. But this is rare, and it’s done well even less often.


You can server-side render React (not sure about Vue, but wouldn't be surprised if you could).


Yes, you’re right about Vue having server side rendering. It’s called Nuxt.


Nuxt is a tool that sets up your entire project including the option for server-side rendering. There are other SSR tools for Vue, and I think in Vue 3 it might be built-in. There's a lot of focus on server-side rendering lately.


I wish there were simply a way for the ad-supported sites I visit to collect their revenue without making me submit to advertiser surveillance. I don't mind seeing ads. I hate being tracked.


I pay for the LA Times, and I also block ads. A couple weeks ago they updated the site so that I got a giant “you are blocking ads!!” banner even when I was logged in. I complained to them (directly), saying that I pay for a subscription so that I can give them money without viewing ads. And... they agreed that paying readers should not see those warnings! And they updated the site within the same day.

I thought that was cool.


I just block the “you’re not seeing my ads!” div itself. Some of us have issues that prevent us from focusing on the content when there are highly contrasting alerts screaming something.


I think the issue is verification that an ad has actually been shown to a human. Even print and TV advertisers can have companies do audits to verify ads get run as expected.

Thing is, nobody ever said ad supported sites have to be viable.


>verification that an ad has actually been shown to a human

Is it useful? https://en.wikipedia.org/wiki/Banner_blindness


> Thing is, nobody ever said ad supported sites have to be viable.

That's true, but if a site is not ad-supported, and "paywall" is almost an epithet (and circumvented to boot), how is any site supposed to remain viable?


Think of the poor buggy whip makers!

Not every desirable activity in life is profitable. If your business plan is "make website -> get money" then perhaps you're in the wrong business.


I was thinking more about investigative reporters and news outlets than buggy-whip makers.


If you're going to treat investigative reporting and news gathering as a profit making venture then you can only charge what the market is willing to pay, and for the vast majority of people, that's nothing.


Investigative reporting is in a tough spot in a capitalist democracy. In a democracy, good reporting is a vital part of the system, but in capitalism, people pay for what they want to see and hear, and that's not always the truth.

Advertisers directly or indirectly impacting the kind of news that gets reported doesn't help.


>> but if a site is not ad-supported, and "paywall" is almost an epithet (and circumvented to boot), how is any site supposed to remain viable?

They're not. Think about news sites that rewrite stories from paywalled sites and collect ad revenue. They are parasites that make money from other people's work. Same thing for all those YouTube channels (though they are not other sites) that repackage other people's content and make money doing so.

What we need is not so much a way to advertise on the net, but a way to make small payments and subscriptions simple. If sites needed their users to pay them, the quality of the content might go up dramatically. Places like HN could still exist just fine too.

Having said that, I think there are ways to verify ads without cookies and such. It's just that it takes more effort on the server side.


> If sites needed their users to pay them, the quality of the content might go up dramatically

What do you mean by "if"? There are already many sites that require users to pay them (e.g. The New York Times) but everyone simply circumvents their paywalls.

Every method that high-quality sites use to generate the revenue they need to operate is defeated either by ad blockers or someone reproducing their content outside their paywalls. You may be right that news and reporting are not viable, but it is a shame.


Some sites handle their own advertising, and a few do it in a really cool way. For example, the Penny Arcade web comic only advertises games they like[0], and they hand-draw their own ads for it in the same style as the comics. Sounds like incredibly valuable advertising for those games, and it fits the style of the site.

[0] Or used to. I haven't checked them in years.


The reason it's worth it for advertisers to pay sites for placing ads is because of the data they collect.

You only buy something because of an ad occasionally. But data is collected each and every time.


Perhaps create a new and separate Web, that's designed from the ground up to take into account the lessons of the past 25 years, and provide a more mellow experience?

It would be an eternal niche, but it might be a nice, cozy little niche.


Most site interactivity could be handled with forms (though some form inputs need improvement) or some sort of conditional view state (for menus and HN's collapse comment feature). For anything more complex I think the "click to enable" model that flash and applets eventually followed is compatible with anything that actually deserves custom code.


I've given this a lot of thought over the years, and I think HTML needs slightly more functionality to completely obviate JS. The hardest impact of no JS is on dynamic input forms, and I remember when Ajax first became a thing and forms that didn't lose their content on an input error became commonplace. There was never a good reason for them to lose their input, of course, as that's just lazy coding.

But when writing JS-free front ends for my web apps, the ugliest UX is trying to fill out a form that requires either filling out n identical sections (eg a fieldset for each student you want to add) or selecting an item from a drop-down w/ the option of adding to it if the desired option doesn’t exist.

Error validation is now clean and easy with modern MVC frameworks that take as input the same structure of data they used to generate the page in the first place, so it is easy to rebuild the page exactly as it was but also include an error message. There is no jarring user experience when the page is submitted and reloaded with the error shown inline. But having to navigate to a separate page to add an entity then refresh the form to fill out the data selecting the newly added entity sucks.

HTML needs some sort of functionality that would enable a) dynamic, non-Turing-complete remote resource backing of form input values, and b) allowing a form input to return, post-submission, to a DOM element rather than a document (e.g. imagine an input to "add location" to a drop-down; submitting that field would return the new content of the select's option elements to replace the existing ones). A JS-powered example is intercooler.js.

More generally, a content response type that says “patch the existing DOM with the following” would preserve pretty much all the good parts of Web 2.0 without the crap that came with it; keeping in mind that submissions could only happen when a user explicitly requested them.
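
For illustration, here is a rough sketch of the kind of markup I have in mind; the "target-fragment" attribute is hypothetical, loosely in the spirit of intercooler.js:

    <!-- hypothetical attribute: the browser would POST the form and swap only
         the element named by target-fragment, instead of loading a new document -->
    <form action="/locations" method="post" target-fragment="#location-select">
      <input name="name" placeholder="New location">
      <button>Add location</button>
    </form>

    <select id="location-select" name="location">
      <option>Head office</option>
    </select>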


Hear me out - we could all start using Gopher?


> "I know not all tracking is based on JS, but the browser provides so many heuristics this way: screen size, cursor location, installed plugins."

Installed plugins? Why would javascript need to know my installed plugins? I'd like my browser to more actively restrict what javascript has access to. I get a popup when it wants access to my location (which I generally deny). Why not do the same with these other features?

"This site wants to view your installed plugins. Allow/deny?" "This site wants to set a non-login cookie" Deny, deny, deny.


I don't think they can see the plugins themselves but any code that the plugins run in the document context will throw exceptions that can be caught by any fingerprinting code. I see a variety of uncaught exceptions with obvious plugin names on my Sentry feeds.


"I wouldn't mind going back to a JavaScript-less experience."

By not using a graphical, JavaScript-enabled browser, I have been experiencing the web this way for the last 15 years. It works just fine for the purposes I use it for, mainly information retrieval. For me, there is no such thing as "page load time". This shifts all awareness to "server response time". This is more or less the same from one website to another, and therefore differences are not noticeable unless a server is misconfigured or has some problem.


How do you collapse comments on HackerNews?


When I started visiting HN around 2008-2009, if I am not mistaken, the website used no JavaScript. Anyway, in the browser I use, HN is "flat" threads, all posts visible and left justified, no indentation. It still looks just the same as it did in 2008-2009.


I keep JavaScript on, but I rarely collapse comments. I could probably browse Hacker News without JavaScript if I wanted to, because they've taken care to make most of it work.


How do you deal with pages that use JavaScript to fetch page content after the initial markup is loaded?


I'm not 3xblah, obviously, but I deal with those sites by not using them.


I wish that was a choice for me. Often I have to interact with websites for services such as phone accounts, banks or taxes where I don’t have the reasonable option of choosing to not use the site.


Oh, that sucks. I'm lucky -- I can do all those things without having to bring the internet into the equation.


I could make a phone call, but I don't see how that is making my life better.


"How do you deal with pages that use Javascript to fetch page content after the initial markup is loaded?"

Provide an example page and I will demonstrate how I would solve the problem.

Not every user visits the same websites and web pages, so without giving specific examples, discussions about how to deal with these pages never go anywhere on HN.

To be honest, out of all the websites I have visited over my entire lifetime of using the www, the number where I have had to make any extra effort because of JavaScript in order to retrieve some text/html, image or video is a very small proportion. Not one that is large enough to justify using a JavaScript-enabled browser as the default. For me, these are exceptional cases, not the norm.

The extra effort is usually a one-off script, not something I need to save.

Occasionally it is something I save for future use. One example of a saved script would be for non-commercial YouTube channels. The goal was a two-column CSV of all videos from a channel, in the form title,url. The goal was not "perfection", just a quick solution.

yy025 and yy032 are custom utilities for generating HTTP and decoding HTML, respectively.

Using a short script called "ytc", the process would be something like the following. openssl s_client is used as an example of a TLS client. "XYZ" is the name of the channel.

   echo https://www.youtube.com/channel/XYZ/videos|ytc|sed wXYZ

   Connection=keep-alive yy025 < XYZ|openssl s_client -connect www.youtube.com:443 -servername whatever -ign_eof > 1.html

   ytc title < 1.html > XYZ.1

   ytc url < 1.html > XYZ.2

   paste -d, XYZ.[12] > XYZ.csv

Here is the "ytc" script:

   case $1 in 
   "")exec 2>/dev/null;
   export Connection=close;
   yy025|openssl s_client -connect www.youtube.com:443 -servername whatever -ign_eof |sed 's/%25/%/g'|yy032 > 1.tmp;
   while true;do
   x=$(sed 's/%25/%/g;s/\\//g' 1.tmp|yy032|grep -o "[^\"]*browse_ajax[^\"\\]*" |sed 's/u0026amp;/\&/g;s/&direct_render=1//;s,^,https://www.youtube.com,')
   echo > 1.tmp;
   test ${#x} -gt 100||break;
   echo "$x";
   echo "$x"|yy025|openssl s_client -connect www.youtube.com:443 -ign_eof > 1.tmp;
   done;rm 1.tmp

   ;;-h|-?|-help|--help)echo usage: echo https://www.youtube.com/user/XYZ/videos \|$0;echo "usage: $0 {title|url} < html-file"
   ;;1|title) sed 's/\\//g;s/u0026amp;//g;s/u0026quot;//g;s/u0026#39;//g'|grep -o "ltr\" title=\"[^\"]*"|sed 's/ltr..title=.//'  
   ;;2|url) sed 's/\\//g;s/u0026amp;//g;s/u0026quot;//g'|grep -o "[^\"]*watch?v=[^\"]*" |sed 's,^,https://www.youtube.com,'|uniq
   esac


I moved my blog to static pages a long time ago, and removed Google Analytics a long time ago too. Now I have the slickest web experience, and reports of searches and server usage are increasing.


Alternatively, since no-JS breaks a bunch of things, also try uMatrix: it really locks things down and shows you what kind of nonsense is going on in a nice (blockable) grid.
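
For a flavour of what its rules look like - going from memory, so treat the syntax as approximate - a "block third-party by default, allow first-party scripts" policy is roughly:

    * * * block
    * * css allow
    * * image allow
    * 1st-party script allow
    * 1st-party frame allow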


This just made me think of something.

How do the GDPR popups work if you don't have JavaScript enabled? Are the sites still GDPR-compliant if they track you using cookies because you disabled the JS that should have disabled the cookies?


The JavaScript ones simply don't appear - which is nice. And yes, I believe they're in breach of the GDPR if they use cookies and tracking pixels to track me without giving me the opportunity to deny consent. Please note I'm not a lawyer, but I don't think there is a legal obligation for me to use their sites with JavaScript enabled.


uMatrix basically does that for you now.


Blocking trackers helps a lot for performance, and I second uMatrix. I even showed my 13-year-old kid how to use it. However, uMatrix on mobile is not particularly easy to use (I use it, but would not recommend it).


I didn't even realise it worked on mobile. I use uBlock on Android but uMatrix otherwise (actually, I think both, maybe because of synchronisation).

The zapper's fairly easy to use to get rid of annoyances on mobile, until you make a mistake. Editing the rules file isn't the nicest UX even on desktop FF. (It'd be good, for example, to be able to preview what's being blocked, and selectively re-allow them.)

For an example of how it can go wrong: you block 'overlay', then that reveals 'grey-blur-modal-focus' and you block that too, allowing you to click whatever you wanted - except it turns out that it uses 'overlay' and you shouldn't have blocked that one.


The relevant code comment from the bug-linked changes is:

> This loops through all cookies saved in the database and checks if they are a tracking cookie, if it is it checks that they have an interaction permission which is still valid. If the Permission is not valid we delete all data associated with the site that owns that cookie.
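
Very roughly, and purely as pseudocode (this is not the actual Firefox source; the helper names are placeholders), that comment describes a loop like:

    // pseudocode sketch of the comment above; helpers are placeholders
    for (const cookie of await getAllCookies()) {
      if (!isKnownTracker(cookie.host)) continue;                     // only tracking cookies are considered
      if (await hasValidInteractionPermission(cookie.host)) continue; // the user interacted recently enough
      await purgeAllDataForSite(cookie.host);                         // cookies, localStorage, etc. for that site
    }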


And in v80, in an alternate universe, Firefox will purge the sites themselves from the Internet, after the data.

Somebody please point me to the portal...


That's called Pi-hole, and having it network-wide is a game changer. 30% of all DNS requests are blocked.


Pi-hole has been on my TODO list for quite some time. Yeah, it's definitely a portal I should get to fast (summon, not get to, hence the TODO entry). Currently using uMatrix and friends, but it's not network-wide and that sucks a lot.


NextDNS works well without setting up anything new, too. I use it instead of pi-hole.


I fail to understand the widespread appeal of a Raspberry Pi (slow?) that only blocks requests on your home network.


It is a DNS server that makes every ad and tracking domain route to nothing. It speeds everything up and uses an almost undetectable amount of resources. It could be done in the router with different software, I'm guessing, but I use Tomato firmware, which isn't great for huge DNS block lists.


I mean, yes, I know what Pi-hole does. I just fail to see why it's better than a hosts list on your computer, for example: that works wherever you are and you don't need fancy software for it. Plus it's "infinitely worse" than a normal ad blocker when browsing the internet.
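
For reference, the hosts-list approach is just null-routing tracker hostnames, e.g. (example entries):

    # /etc/hosts
    0.0.0.0 doubleclick.net
    0.0.0.0 www.google-analytics.com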


- It blocks tracking / ads on devices where you can’t edit a hosts list (your phone)

- It works for your entire network (everyone in your household)


> "It works for your entire network (everyone in your household)"

I guess whether this is the killer feature or makes it suck depends on whether you've got a lot of devices and family members in your household that you can now easily protect with a single solution, or you're a single person with a laptop that spends half the time (or more) outside your home.

It's fantastic for the first group, useless for the second.


Also, you don't have to choose one or the other.

I use both.

Ad blockers and Pi Hole each have pros/cons, but using them together gives you the best of both worlds (with the only downside being the overhead of running an adblocking extension or overhead of managing/updating a hosts file block list)


> It blocks tracking / ads on devices where you can’t edit a hosts list (your phone)

Right, but you can use adblockers on them.

> It works for your entire network (everyone in your household)

Fair, but as I mentioned it stops working the moment you step out, which I assume you do with your phone?


>> It blocks tracking / ads on devices where you can’t edit a hosts list (your phone)

> Right, but you can use adblockers on them.

The iPhone doesn't support ad blocking.


It very much does.


I didn't know this and I now see I can block ads on Safari on iOS which is great.

Unfortunately, it doesn't work for Firefox on iOS.


Elaborate? I'm not aware of a way to block in-app ads, or ads served when using Chrome / Safari / Firefox on iOS without using a DNS based ad blocker.


You can block ads in Safari or apps using SFSafariViewController by installing a content blocker. You can use a VPN-based blocker to perform DNS blocking on-device.
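
The content blocker rules themselves are declarative JSON that WebKit evaluates, rather than extension code running on each page; a minimal sketch (the url-filter here is just an example):

    [
      {
        "trigger": { "url-filter": "tracker\\.example", "load-type": ["third-party"] },
        "action": { "type": "block" }
      }
    ]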


Devices such as Amazon Fire, Apple TV, your television, DVR, etc, etc do not support installing ad blockers.


None of those really have a browser on them, though…


Half the point of pihole is to block trackers (not just ads).

That means blocking Google Analytics, Crashlytics, and whatever other unnecessary analytics / reporting that apps on Apple TV, your smart TV, etc use.


Not much of what you said is actually true. Having it on your local network allows any wireless device to have a huge amount of ads and trackers blocked automatically. Some sites were unusable on phones before, and now they load fast and are readable.

Then there are things like 'smart' TVs sending data back, apps sending all sorts of data out, etc. It is quite a game changer.

Also, you can do both, but DNS blocking for the whole network is always on for everything once you set it up.


Try Baitblock (https://baitblock.app). It has tracking resistance that also deletes first-party cookies and other tracking mechanisms when it detects that you're not logged into a website.


It's a pity it's Chrome-only.


It's rather neat to be able to read a short diff, including a test, to see exactly what that means.

From my limited understanding: it purges cookies and localStorage, if the storage access API permission was not granted?


So the next race is going to be pinging all these cookies to keep them alive.


Nah. The adtech people are already talking about persistent identification mechanisms to allow the same identification in the absence of third party cookies.

If you're privacy-minded, it's worth keeping an eye on these efforts, as some of them involve getting publishers to require a login and an email address or phone number from their users, then using that as the persistent identifier.

If that idea takes root, then we'll probably want to cancel accounts and avoid making new ones.


Or a Sign in with Apple-like system, so tracking is limited to each site.


The best decision I have made so far is setting up a catch-all policy for my domain. any_random_address@mydomain.com is saved in a "dummy" inbox; I check it from time to time, and I give different emails to different services to identify who sold my address.

Instagram is insta@mydomain.com, Netflix is nflx@mydomain.com, etc.

If someone needs pointers: I use WebFaction for MX, set up mailboxes with a catch-all policy, set up a rule to forward these emails to my Gmail, and have a filter on Gmail to skip the inbox and save them in a "dummy" category.

P.S. If someone has a better alternative to WebFaction for email-only stuff, please let me know; I'm not sure I can do it with another provider that is cheaper.


I use Fastmail, and there is no additional setup needed. Just point the catch-all at an account and use the filters to sort as appropriate. Mail from my legacy Gmail, Yahoo, and Hotmail accounts gets forwarded to respective aliases at mydomain.com and sorted into appropriate folders as well.

The best online hygiene decision I have ever made. Ranks up there with installing uBlock Origin on desktop/mobile.


That's grand until someone runs a dictionary spamming attack on your domain. One of the interesting complexities here is that if you forward to Gmail from a domain and there's too much spam, Gmail will blacklist your mail forwarder as a spammer.

(n.b., I've done something like this for ~nearly 20~ 23 years, and I've scaled back to prefix+tag with some aggressive blocking of email addresses that have been leaked/sold)


I've used migadu.com for well over a year now and have been very happy with them. They let you set up multiple regex-based catch-alls, so I can create any address prefixed with "shop" and have it forwarded to one address, while ones prefixed with "game" get forwarded to another. If you aren't planning on sending any emails from the domain, their free single-domain plan might work very well for you.


Occasionally I have to send emails, especially when I have to respond to a customer care reply. But I will take a look; I am willing to pay for this (I am already paying for WebFaction).


I use hcoop.net and the wildcard address is one line in a configuration file. I do a little bit of filtering with a sieve filter.
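
For anyone curious, the Sieve side is a few lines of the following sort (the alias prefix and folder name are just examples):

    require ["fileinto"];

    # anything sent to a shop-* alias goes into its own folder
    if address :matches "to" "shop-*@mydomain.com" {
        fileinto "shopping";
    }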


So, another popup notification to add a site's cookies to the whitelist.


I have been using "Cookie AutoDelete" for years. Except for a short list of 20 sites, nothing can store data in my Firefox.

These tracking sites would not be able to show me ads anyway, because I have "uBlock Origin".

And finally “I don’t care about cookies” to automatically dismiss these stupid GDPR “we’re going to use cookies” prompts.

Without these the Internet feels broken.


>I have been using "Cookie AutoDelete" for years. Except for a short list of 20 sites, nothing can store data in my Firefox.

Not really. It doesn't delete IndexedDB, for instance.


    delete indexedDB;
in userjs is all it takes in Chrome to disable indexedDB permanently


It is still possible for websites to access indexedDB by using Web Workers:

  new Worker("data:application/javascript,console.log('indexedDB: ', indexedDB)")
This works since userscripts only run in top-level websites and frames, but the above code runs JS in a separate thread with no attached DOM.


    if ('serviceWorker' in window.navigator) navigator.serviceWorker.register = () => new Promise( function(resolve, reject) {} );
prevents service workers from ever registering.


And cache, local storage, basically anything that changes state.



