I call it the privacy doom principle. Any information which separates you into a subset of a larger group can eventually be compounded to fully break your anonymity.
I did a lot of work on privacy coins, and the power of statistics is staggering. Doesn't matter if you shield yourself by grouping with 100,000 people per transaction, if your anonymity set isn't _everyone_, eventually you can be singly identified.
Same goes for browsers, tracking, and "anonymized data".
There's an old post about the anime Death Note and "bits of entropy" in relation to anonymity. It boils down to this: enough true/false questions about a person gives you enough information to uniquely identify them.
> The researchers generalized their Netflix work to find isomorphisms between arbitrary graphs (such as social networks stripped of any and all data except for the graph structure), for example Flickr and Twitter, and give many examples of public datasets that could be de-anonymized—such as your Amazon purchases (Calandrino et al 2011; blog).
Especially if you can belong to multiple 100,000-person groups. It doesn't take very many until you can find an individual by looking at the intersections.
An example everyone knows is “twenty questions” - applied to people it’s not hard to narrow you down to “you” having never asked a question that doesn’t roughly divide the population 50/50.
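The arithmetic behind "twenty questions" is just powers of two. A quick sketch (the world-population figure is a rough assumption):

```python
import math

population = 8_000_000_000  # rough world population (assumed figure)

# Each question that splits the remaining group roughly 50/50 contributes
# one bit of information, halving the candidate set.
questions_needed = math.ceil(math.log2(population))
print(questions_needed)  # 33

# Conversely, twenty questions narrow 8 billion people down to roughly:
remaining = population / 2 ** 20
print(round(remaining))  # 7629
```

So ~33 well-chosen yes/no answers about you is enough to single you out of everyone alive.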
Privacy coins can work if you can maintain the property that _every_ transaction could plausibly be spending _any_ historic output. For the most part, that's just Zcash-like coins.
The purpose of the blockchain is to prevent double spends. For bitcoin that's done by maintaining the set of unspent transaction outputs. AFAIR Zcash maintains the set of spent serial numbers instead, so it scales worse in that regard.
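The two bookkeeping strategies can be sketched like this (illustrative toy code, not real consensus logic): a Bitcoin-style node can delete an output once it's spent, while a Zcash-style nullifier set only ever grows, which is the scaling downside mentioned above.

```python
# Bitcoin-style: nodes track the set of unspent outputs (UTXOs);
# spending removes an entry, so the set can shrink over time.
utxo_set = {"txA:0", "txA:1", "txB:0"}

def spend_utxo(outpoint: str) -> None:
    if outpoint not in utxo_set:
        raise ValueError("double spend or unknown output")
    utxo_set.remove(outpoint)

# Zcash-style shielded pool: outputs stay hidden, so spending instead
# reveals a one-time serial number ("nullifier") that must never repeat.
# This set only ever grows.
nullifier_set = set()

def spend_shielded(nullifier: str) -> None:
    if nullifier in nullifier_set:
        raise ValueError("double spend")
    nullifier_set.add(nullifier)

spend_utxo("txA:0")    # UTXO set shrinks to two entries
spend_shielded("n1")   # nullifier set grows, and keeps growing forever
```

The trade-off: hiding which output was spent means you can never prune the spent-serial set, because nothing links a nullifier back to a prunable output.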
Monero fails the privacy doom principle, eventually analytics will be enough to fully break it. In the long term, it's not really any safer than bitcoin.
Can you elaborate? It's my understanding that Monero uses ring signatures and that it's impossible to tell which key actually signed the transaction. It's also impossible to correlate multiple transactions without external metadata such as exchange records. This implies there's no way to compute intersections of the set of keys in all related transactions until only one element remains.
What is the point of a transaction you can't eventually confirm whether it happened or not?
Monero may utilize a heaping helping of obfuscatory steps, but those steps need to be consistent and reversible. Otherwise, you have no guarantee that someone well placed hasn't modified the record in all the right places. A nation state's sole reason for existence is to implement solutions to these kinds of hard-to-reverse problems, by sponsoring tooling and regulation to ensure that the knowledge graph can be traversed: either through mandatory reporting of the requisite metadata by exchanges that handle those transactions en masse (and who make money off it) as a pre-condition of doing business, or by banning such systems and instituting penalties for being discovered using an irreversible one.
This is the part that blockchain people don't get. If you have a public ledger, it's a matter of time until LE and regulators figure out how to legally constrain you. You may have a chunk of time when you can get away with untraceable transactions, but measures will eventually be implemented that will practically break anonymity for a lawfully operated business if you keep any sort of bifurcated record.
Then there's the final measure of making it impossible to legally reenter the fiat market from these types of assets as well. Successful money laundering is already fairly difficult. Without a way to feasibly re-enter, it becomes that much harder.
While in theory you could have two monetary instruments that fuel disjoint economic systems, there still has to be a cross-over point, and you can bet that that will be the point of maximum scrutiny.
Not saying it's impossible to make work, but don't expect everyone else to make it easy for you. The glacier may not move quickly, but it does, in fact, move.
Did you publish any work on your privacy coin findings? If not might you or someone else have some links to share regarding their strengths and weaknesses?
Your comment reminded me of this minutephysics video about the census.
https://youtu.be/pT19VwBAqKA
It's a great walkthrough of the "privacy doom principle."
Switch browsers while you can. Firefox might not be perfect (or may even be getting slightly worse), but at least it's an alternative, and I can easily say it's better than Chromium in most respects. At least ad blocking works as it should.
I ignored this advice for several years but ~6 months ago switched to Firefox cold turkey and don't miss Chrome one bit. Even when doing web development (I thought I'd miss Chrome's CSS/HTML/JS inspector and devtools in general, but Firefox's are the same if not better).
I find Firefox's dev tools even better. The ability to see XHR request content/headers/responses directly in the JavaScript console has been invaluable to me. It annoys me that Chrome's console does not have this. You can only see when a request is made; you have to hop over to the network tab to see the actual content.
I've been a Tree Style Tab user for many years, but I have to confess I have a complicated relationship with this add-on. I've reached >600 tabs more than once. That's not only a feature.
I used to use Tab Mix Plus on Firefox. Having three rows of tabs and the ability to scroll them vertically for more tabs was the absolute killer feature for me. I loved Firefox for this.
Once Firefox moved to the per-process approach and removed the ability to hack the UI, I saw no more reason to stay on what was a terribly slow browser back then, compared to Chrome. Startup times of 10+ seconds and such shenanigans.
I have noticed a recent decline in the debugging features of FF -- downright buggy. View source shows a form I used 3 pages ago, not the form rendered. I now switch to chrome just for debugging.
You don't deserve the downvotes. We are using mainly Firefox at work and sometimes, not often but sometimes, Firefox refuses to load the current file in the debugger. The only solution is to restart Firefox. I get why some are annoyed by this when they are in a debugging session. Although the last time it happened to me was one or two Firefox releases ago. Maybe it got fixed.
I have a similar issue where very occasionally the debugger tab will just be empty, just absolutely no files in there at all. The fix is simple enough - just open the site in a new tab, although it's a little annoying.
I still do all my development in Firefox regardless, I'm sure if I switched to Chrome I'd quickly discover a set of equally annoying bugs and quirks there too. Better the devil you know.
That's interesting. I have somewhat recently encountered the same thing with Chrome. I don't know what causes it but when it happens, the debugger hits and doesn't show me the context at all. But if I re-trigger the debugger again, it shows me everything just fine. :shrug:
When struggling with a persistent issue like this in FF devtools it can still be worth filing an issue on the bugzilla tracker. Worst case, it gets closed as not reproducible. In practice many of these issues will eventually get caught if enough people complain about them and someone manages to dig through all the reports and come up with theories about the issue.
You may get a helpful reply from someone on the team with suggestions on how to troubleshoot it, like enabling specific logging flags or pulling some info out of the console.
I've filed lots of bug reports against Firefox in the past and just because you don't have an isolated reproduction case for a devtools issue, that doesn't mean it can't be fixed.
Yeah strong agree, apart from the mess up with Firefox on Android.
Both at home and work, Firefox desktop (with uBlock Origin) has been a pretty frictionless tool in terms of my browsing experience these past years, across Linux, Windows, and Mac machines.
From a quick glance, this seems to be even further down the "messy" side of the firefox-on-android mess. I.e. its capabilities are even more restricted.
Which is not to say it's not useful, and TIL - I didn't know they had released this, so thanks :) But I don't think it particularly applies to this thread.
What is the 'messy' problem with Firefox on Android? I have moved those I help with tech to Firefox. My mother, for example. They don't know the difference between "the internet" and "Firefox", but so far they run Firefox with uBO with no problems (well, nothing new that wasn't there with Chrome too, but that's an old-people-vs-tech problem, not something unique to Firefox).
The messy problem is that you used to be able to run uBO with no problems on Firefox on Android. And most other extensions, with some obvious limitations (e.g. desktop-only UI extensions didn't work, some UIs weren't mobile-friendly, etc).
Then they released a preview of a re-design which also broke all extensions. That's arguably fine for a preview, though a bit concerning. Many were raising alarms at this point.
Then they released the re-design to the stable release, with still-broken extensions. This pretty unambiguously is "a mess", if not earlier.
Then they released built-in support for a couple dozen Mozilla-selected extensions (uBO included, I believe). This is still a mess, and rightfully raises a few eyebrows.
... and we're still there now, after over a year of "this will be fixed soon". I believe you can install nightly + manually tweak config and still install other extensions, but Firefox for Android does not support extensions right now. That's A Problem™, and not a good sign for extension-longevity that it was ever allowed out of preview. It broadly implies extensions are very low on their priority list, which is concerning, as extensions have been the clear leaders on preserving privacy and user control in general. Browsers overwhelmingly follow popular extension behaviors, not the other way around - cripple extensions and you also cripple advancement and experimentation.
To add to this they also blocked access to about:config on Android.
No access to extensions and blocking about:config was, to me, just heretical. Those two are the raison d'être of Firefox.
The Nightly build thankfully reverted this babysitting of users, but indeed we are still stuck with the 'vetted' add-ons downgrade in the official Firefox release for Android.
Even on Nightly, the procedure to install non-whitelisted-by-default add-ons is somewhat arcane though, and AFAIK even after following it you can only install the current version of add-ons that have been officially published on AMO, i.e. no installing a different add-on version or manually installing an XPI (not even a signed one).
Please educate me: I am a Chrome user and I do rely on browser syncing my tabs and some passwords.
I know that Firefox also has a syncing feature ("Sign into Firefox", "Continue to Firefox Sync").
My problem is that I don't trust Mozilla's ability to keep this data secure. I believe that sooner or later they are going to get hacked, and that data will leak. The same might happen to Google, but I also believe that no other company has the degree of expertise of Google to protect that data.
Am I wrong in this assumption? Does Firefox Sync end-to-end encrypt the data, without knowing the key, like Google's Sync Passphrase feature?
What are your experiences with Firefox Sync? Does it work just as well as Chrome's, or even better?
I've had a pretty good experience with Firefox Sync, although I don't use it for passwords. Firefox Sync has E2E encryption to ensure that Mozilla doesn't have the ability to view any of your data.
TL;DR: Extremely satisfied, and using it on three devices.
The sync server is open source and free, so you can host an instance yourself if you'd like.
Firefox accounts have 2FA support, and passwords are end to end synced anyway, so even if your Firefox account is compromised, nobody can recover data without the key.
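Conceptually, the client derives two independent keys from the password, so the server only ever sees an authentication token and never the encryption key. A simplified sketch of that idea (the real Firefox Accounts "onepw" protocol differs in its primitives and parameters; everything here is illustrative):

```python
import hashlib
import hmac

password = b"correct horse battery staple"
email = b"user@example.com"

# Stretch the password client-side (iteration count is illustrative).
stretched = hashlib.pbkdf2_hmac("sha256", password, b"identity:" + email, 100_000)

# Derive two independent keys from the stretched secret.
auth_token = hmac.new(stretched, b"auth", hashlib.sha256).digest()    # sent to server
unwrap_key = hmac.new(stretched, b"unwrap", hashlib.sha256).digest()  # never leaves the device

# The server stores only auth_token. Without unwrap_key it cannot decrypt
# the synced payloads, so a server breach leaks only ciphertext.
```

The key point: because the split happens client-side, compromising the server (or the account password database) doesn't yield the data-encryption key.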
In about a decade of operation, there have been zero breaches at Mozilla, AFAIK.
I use Firefox Sync on my Windows laptop and in the Firefox on Android beta. There is also an app called Lockwise that works as a standalone app and provides password fill for other apps as well.
The only thing that keeps me using Chrome from time to time is the in-place translation feature.
If anything comparable was added to Firefox (which I mainly use), I would be more than happy to get rid of Chrome once and for all.
And yes, I'm aware of the extensions that offer similar functionality, but unfortunately they still have some way to go before they can reach parity with Chrome translator.
In-place translation is the feature that's really missing from other browsers. Edge also works well with Bing Translate. I wish DeepL provided a translation extension.
What do I, as an end user, have to do to be protected? Is it sufficient to use Firefox with its default settings?
Honestly I don't know and I think I should. I have uBlock Origin, Privacy Badger, ClearURLs installed on Firefox, I'm running pi-hole at home, it's just so much.
Don't sweat it. All you really need is Firefox + uBlock Origin. And even without uBo, Firefox blocks some trackers by default.
Privacy badger is largely useless ever since they got rid of heuristics. ClearURLs is useful, but you'd probably be fine without it. And pi-hole doesn't block anything that uBo doesn't in Firefox (but is still useful for applications outside of the browser).
On the other hand, maybe you're like me and want to squeeze as much privacy out of your browser as you can, even if it means breaking some websites. If that's the case, check this website out. Just remember that the tweaks listed here are nice, but not entirely necessary.
With uBO, I would also disable things like Third-Party Cookies. I also have No-Script, but that's mainly for making sites easier to load (Though it does block ad-tracking js-files, like uBO).
> I'm a huge proponent of Firefox on desktop, but the new Firefox on mobile is just awful awful awful.
I strongly disagree. There are certainly issues, many of which are a result of the recent redesign, but I still find it a much better experience than Chrome on mobile and I think calling it "awful" is hyperbolic. Here are some examples of why I think FF > Chrome on mobile:
Firefox mobile supports extensions which I consider necessary at this point, such as uBlock Origin.
I can put the address bar at the bottom, where my fingers are.
The reader feature makes many websites much easier to read, particularly on mobile.
Chrome defaults to opening things in tab groups now, which I find to be much more finicky to use than normal tabs. Bookmarks are for saving pages long-term, not tabs.
I'm with you. I preferred the previous version of Firefox on Android. Since switching to the new version:
- I've noticed it crashes much more.
- It still doesn't support all the extensions I used to have, like uMatrix.
- All my bookmarks disappeared when it updated to the new version. I know syncing bookmarks would've let me recover them, but I didn't realize it'd happen in the first place. And it seems like an easy problem to avoid even if a user didn't sync.
I've not noticed a crash in the several months I've been using the new version. Have you tried the usual things like clearing cache/data, reinstalling, etc? It took me a while to get used to it (especially the move of the address bar to the bottom) but I'm quite happy with it now. It also supports uBO, which blocks pretty much all the ads. I agree it's disappointing what they have done with extensions though. Syncing to Firefox on my laptop is quite good and very useful for looking up history, e.g. if I remember finding a good website I don't have to worry about recalling whether I was using my mobile or my laptop when I found it. All my history across all devices is there, so I'll easily find whatever it is I was looking for.
Google's FLoC has an unfixable problem. As soon as other advertisers create their own FLoCs, anonymity goes away. No matter how careful Google is to make sure these IDs aren't unique, as soon as users have several FLoC identifiers, maybe even just two, they're uniquely identifiable.
Behavioral tracking needs to die. It was a mistake created from lack of web security in the early days, nothing more. It's a bug, not a feature.
Google is finally showing us what Chrome was meant to be. A browser monopoly to defend Google's user tracking interests.
Privacy invasion and tracking is built into everything Google does. It's part of their DNA. No real need to look for details, if their name is on it, you know it's in there somewhere.
Making money off of you is their DNA; how they do it can change, and if they could make the same money or more (long-term) without actually storing advertising profiles, you bet they would.
Umm... IP address + FLoC is enough to track people behind NAT, and enough to keep tracking someone after an IPv6 address change (same subnet + same FLoC = same person).
Even if the FLoC ID changes, you just link the new FLoC to the old IP: if you stop seeing the old FLoC and start seeing a new one, you have a transition and can continue tracking (not everyone will change at the same time, I assume).
This thing replaces cookies... they would be obsolete... It would allow an ad network to track you much better.
The browser provides FLOC IDs. How do you think other advertisers convince browser vendors (and particularly Google) to include support for their FLOC's?
Antitrust, most likely. FLoC + anything else is probably identifying too. If there are a couple thousand FLoC IDs, you only need one more identifier with that level of specificity to form a unique identifier. IP alone might be enough.
Isn't Google doing the same or less based on only browser data? How does each solution differ in entropy? I didn't get that from Apple's policy page. I am genuinely trying to understand and this isn't a snarky comment.
So Apple can do segmentation, aka user tracking at a group level, because it doesn't expose the group IDs. Yet we as consumers don't have transparency into the segments we inhabit. I am not sold that Apple's segment solution truly respects our privacy compared to solutions that don't rely on user identity or store segments.
I'm not sure I understand it. Sure, if a website knows the floc of a user on multiple weeks they can presumably use a third party service for identification.
But how does the website initially join the different floc ids, unless they have already identified the user?
Thanks for the Arch link, but seems outdated (lists Midori under WebKit still). And the warning sounds a bit ominous too: what is an up to date and secure WebKit browser?
Google must have seen this coming. It was never going to be the privacy savior Google billed it as, so why push forward with the concept? We have to look deeper to understand what value FLOC provides to Google. They can exclusively gather tracking info through the browser they control, and they can weaken competitor’s privacy arguments by claiming that they do not track individuals.
The worst part is actually, it will never look like "just be Google", because that would be too obviously evil and be subject to decisive legislative action.
Google's Widevine (streaming media DRM) is a great correlate to this. If you wanted to try and create a great, novel 4th web engine/browser; good luck. Many of the major streaming sites use Widevine. You can't build a browser to stream that content without asking for access to Widevine encryption. Google will not give it to you; they may, eventually, if you build up enough of a userbase, but what browser would be able to build up that userbase without access to streaming media?
It's less about building a bulwark around Google's technology (a clear monopoly) and more about building one around the Boys' Club of Established Big Tech. Then Google can go to Congress and say "we have competition, look, Facebook serves ads".
Also we have to ask the question: does DRM like Widevine even work? One could just take a video recording of their Netflix stream using OBS or something similar, and Widevine can't even do anything to counter it.
Yes, you just don't understand what "working" means here. Everyone realizes you can do screen recording, HDMI recording, or just invite a friend over to watch on your screen. What it does do is make content owners comfortable enough that there is a reasonable level of protection to allow their content to be streamed online.
They were similarly pushy about Manifest V3, AMP, etc. I suppose anything they can do that creates more of a gap between their tracking abilities and other people's is a really core way to boost revenue. Shareholders really want to hang onto the history of strong double-digit percentage YoY gains.
>It was never going to be the privacy savior Google billed it as, so why push forward with the concept?
Because these users are still anonymous to companies using Google services. Uniquely identifying users, and the liability for doing so, falls to intermediary services. I expect it will be the domain of data brokers like LiveRamp, Epsilon, and others.
"Use Google and be compliant" is a good sales tool and good value for companies that use Google services. Companies that don't want to sell data to brokers will stick with Google.
The number of companies that want to sell to brokers is rapidly increasing though - basically all retail wants to, or spins off a BI division that wants to. They hired all those data scientists, gotta find something for them to do…
Data scientists would rather buy data to work with big data sets than sell their own data for money. It's the marketers and people with P&L obligations that usually want to sell.
I think it’s potentially even worse than this. We seem to have to re-learn this lesson periodically: seemingly anonymous data about groups of people does confer the ability to identify individuals:
Privacy is dead. Once they have ubiquitous cameras everywhere, and connect the databases, the AI can correlate everything you do, and infer who is meeting whom and for what etc.
Similarly online. You are going to get deanonymized unless you go to great lengths to change everything about what you do, including not doing anything in real time.
Is it possible to be anonymous online and still engage in the "online world" as most non-tech folks see it? Increasingly I think the answer is no, without substantial tradeoffs.
It comes down to what you want in terms of anonymity. You can’t anonymously order food from an app and have it sent to your house while posting your wedding photos on Facebook.
But, if you want to anonymously browse the web and talk to people on HN then that’s still possible.
People have evaded state-level actors for years. Perhaps most famously Osama bin Laden. Which means true anonymity is still possible.
Sure, not fucking up for years, using the same identity, and communicating with people while being hunted ups the difficulty, but still doesn’t make it impossible.
> In 2006, the internet company AOL released a large amount of user search requests to the public. AOL did not identify users in the report, but personally identifiable information was present in many of the queries. This allowed some users to be identified by their search queries, prominently a woman named Thelma Arnold.
Cohorts are sized at "a few thousand" (what does that even mean?).
There is a lot of heuristic information retrievable using JS. This is separate from the information cohorts use to group you.
Put both together and you have something quite close to a unique id.
There is absolutely no way to fix this problem while having cohort IDs and not having very, very large cohorts, which I can't see Google using.
Just as an example, I have a unusual setup so `coveryourtracks.eff.org` reports that my fingerprint is unique in the 292,340 tested in the last 45 days from heuristics alone.
Things are not that bad for the average Windows or Mac user (me: Linux, Firefox, 1440p screen, etc.; I'm not surprised, tbh). Still, combine that "not so bad" with a FLoC ID and you are back to being basically uniquely identifiable.
EDIT: Btw, there IS a fix: instead of letting advertisers decide on the ad based on your FLoC ID, you let your browser decide based on "available ad topic channels" (combined with a fixed set of labels and a few other things; it's not trivial).
Google's proposed solution to this is an "entropy budget". If you have already asked about other JS things that can be used for identification, you won't get a floc id.
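A rough model of how such a per-site entropy budget might work (the cap and the per-surface bit costs here are made-up numbers, not anything Google has specified):

```python
class EntropyBudget:
    """Track identifying bits a site has consumed; refuse FLoC past the cap."""

    BUDGET_BITS = 12.0  # hypothetical per-site cap

    # Hypothetical bit costs per fingerprinting surface.
    COSTS = {"user_agent": 5.0, "canvas": 8.0, "floc": 8.0}

    def __init__(self) -> None:
        self.spent = 0.0

    def request(self, surface: str) -> bool:
        cost = self.COSTS[surface]
        if self.spent + cost > self.BUDGET_BITS:
            return False  # over budget: the API refuses to answer
        self.spent += cost
        return True

site = EntropyBudget()
print(site.request("user_agent"))  # True  (5 of 12 bits spent)
print(site.request("floc"))        # False (5 + 8 would exceed 12)
```

The obvious structural weakness of any such scheme is that a budget can only refuse future answers; it can't claw back bits the site has already collected.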
You could always ask first for a FLoC ID and then ask for the JS things; it's a dynamic language, and there are always cross-domain redirects with tracking IDs in URLs.
Also, some of these are very fundamental parts which are not at all things "you need to ask for". Chromium is a severe offender when it comes to accidentally providing unnecessary identifiable information. E.g. my Chromium user agent string is more identifiable than the Firefox one, and the canvas fingerprints are way worse. There are additional attack vectors like the list of plugins, some with names containing way too much information.
Also, just combining Language + TimeZone + User Agent is probably enough to narrow down a FLoC group from multiple thousand to just a few hundred or even fewer users if you are not in the US/China/India.
(Or if you limit yourself to HTTP headers: user agent header + accept lang header + accept header + accept encoding header).
Also don't forget you have a fixed IP address as long as you don't do VPN perma-bouncing.
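Back-of-the-envelope arithmetic for stacking these signals (every per-signal bit value below is an illustrative guess, not a measurement):

```python
import math

population = 8_000_000_000  # rough, assumed

# Illustrative entropy guesses for each signal, in bits.
signals = {
    "floc_cohort": math.log2(population / 2_000),  # a cohort of ~2,000 people
    "accept_language": 5.0,
    "time_zone": 3.0,
    "user_agent": 10.0,
}

total_bits = sum(signals.values())
anonymity_set = population / 2 ** total_bits
print(f"{total_bits:.1f} bits -> anonymity set of ~{anonymity_set:.2f} people")
```

With these guesses the combined anonymity set is already below one person, i.e. effectively unique, before IP address even enters the picture.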
IMHO all the "fixes" are trying to fix a massive hole with a few thin sheets of paper, i.e. at best it will look fixed, for a short moment. But it's not fixing anything.
Lastly, this doesn't change the point that this basically makes sure Google stays in its pseudo-monopoly position. Tbh, independent of privacy, this should be shut down by courts handling problematic monopolies.
The entropy budget is also supposed to cover other things, like user agents or canvas finger prints, such that if you've already asked for too much stuff, you're not getting those either.
I'm also not sure if it can really work, but if we can't remove all identifiable information from browsers, it seems like a good idea.
I don't know how well ip tracking works. Can you track most people on ip alone?
They should look at the way the wind is blowing and enable this by default for all domains.
edit: in a previous version of this comment I said that Cloudflare should use this mechanism to "kill FLoC in the crib", which is quoted in southerntofu's reply.
I find it worrying that a huge company is pushing an opt-out privacy-hostile feature (you have to send a header so that hopefully they will disable it, if they are in good faith) and the best we can do to fight it is to ask another huge corporation to "kill it in the crib".
Maybe it's finally time we stopped using these corporations and their products once and for all, and started empowering our own communities instead?
Respectfully, I don’t think your category of “we” is as universal as you think. Privacy-focused people can and largely do use browsers which simply refuse to send this kind of potentially sensitive information; for the rest of us, this new feature is substantially less privacy-hostile than what it’s replacing.
This is definitely worse than the fingerprinting being replaced, because whereas the old methods were inadvertently using browser traits unrelated to user behavior for tracking, this is an intentional feature for user tracking related intentionally to user interests.
You don’t have to, what happens between a user and their browser is theoretically none of your business. But if you care about your users’ privacy, I see no reason not to send this header as there’s no defined value for you as a business (unless you plan to somehow try to retarget users who’ve visited your site based on guessing which cohort that potentially refers to).
The user opts in to being placed into a cohort. The site opts out of providing information to Google to let them generate cohorts based on the site. There’s no overlap.
I recently implemented all the do-not-track headers that exist in my company's applications. I hope more devs consider doing the same. You can still get valuable analytics without tying identifying information to every request.
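For FLoC specifically, the opt-out is the `Permissions-Policy: interest-cohort=()` response header. A minimal stdlib-only WSGI middleware sketch (the app and function names are illustrative):

```python
def hello_app(environ, start_response):
    """A stand-in WSGI application (hypothetical)."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

def add_floc_opt_out(app):
    """Wrap a WSGI app so every response opts the page out of FLoC cohort computation."""
    def wrapper(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            headers.append(("Permissions-Policy", "interest-cohort=()"))
            return start_response(status, headers, exc_info)
        return app(environ, patched_start)
    return wrapper

app = add_floc_opt_out(hello_app)
```

The same header can of course be set once at the reverse proxy instead of in application code.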
There is a whole field (now relatively mainstream) of differential privacy, concerned with answering questions such as "can I be correlated and de-anonymized across queries" (query might be "what's your current cohort id?").
Is FLoC not built on sound principles of differential privacy? That would be a big shame on Google.
EDIT: Huge shame on Google! From their FLoC whitepaper: "We want to emphasize that, even though differential privacy is now the de facto privacy notion in industry and academia, we decided against using it as our privacy measure for building audiences."
Differential privacy is useful for training or updating a public model where individuals' features should be kept private.
In FLoC's case the model is public but isn't being trained on individuals' features in realtime; it's only used for inference, as far as the proposal says. E.g. the proof-of-concept stage will develop a fixed model that all browser instances (of a given vendor) share. Individuals' features are kept private to the extent that the model output can't be effectively reverse-engineered.
Differential privacy probably also won't be useful in the POC stage because the training will require accurate labels which defeats privacy.
Differential privacy is good for answering population questions like "how many people in my dataset have property x?". It's a lot less clear how to apply it to something as granular as serving personalized ads. And as the example demonstrates, this compounds if you're doing it repeatedly with data that keeps getting updated. To the best of my knowledge, "differentially private personalized ads" is a hard problem, and maybe just a contradiction in terms.
I think it’s Google’s responsibility to make it clear, though, either by putting in the theoretical work to apply differential privacy or proposing a refinement of the concept that allows them to. It’s like those people who propose grand new theories of physics without using any math; if you can’t connect your ideas to what’s come before, people will be rightfully suspicious whether they’re built on quicksand.
This FLoC initiative just needs to be shot down hard. It's only meant to allow Google to continue business as usual in the face of privacy regulations. Everything else, including privacy, is secondary.
I think FLoC will be useful because I'll hardcode a very inaccurate cohort in my browser to get amusingly meaningless ads that are as unobtrusive as possible.
From what I've seen, the most unobtrusive ads are the most expensive mesothelioma and personal-injury ads, since they're generally a short message on a solid color background.
Yesterday a video conferencing web application (that I had to use) refused to work with Firefox, saying that it did not meet minimum requirements, and that I needed to use Chrome or Safari instead. I'm curious whether there is an actual technical justification.
There could actually be technical reasons behind it. For example, Jitsi had this bug open for quite a while: https://github.com/jitsi/jitsi-meet/issues/4758, referring to a few Firefox bugs, apparently mostly fixed by updates to Firefox; perhaps the vendors of the web app you used didn't get the news yet.
One thing I haven't been able to understand: if each cohort group is so small (relatively speaking; I think in the thousands), shouldn't combining it with a UA be close to 100% unique?
Even if a cohort is in the millions, UA + IP or geo should be enough to ID someone, or you could add a couple more bits of window.property entropy while staying under the "budget" limit.
Wouldn’t it be much simpler and less invasive to have a system where the browser user chooses a few interests from a fixed set, constituting only a few bits of entropy for ads (I wear men’s clothing and I like ice hockey and cooking), and that’s it?
The browser can tell any site this data and it’s a small enough number of bits that I’m not uniquely identifiable even when geography is added.
As I understand it, the attack here is that the user in question has an account on site A, and site A is able to share the user's cohort IDs with other websites, which allows the creation of a unique tracking profile across all websites over time.
How so? That is really non-obvious to me. If site A associates user X with IDs 1,2, and 3 over three weeks, how does that help site B that only sees the IDs? When B sees ID 3, without any further unique identifier, they won't know that the same device came with ID 1 and 2 beforehand.
If site B also had an account and both sites would work together, they could simply compare the email address.
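A toy illustration of the linking attack (all data made up): site A knows identities and logs each user's weekly cohort IDs; site B only ever sees cohort IDs, but the *sequence* it observes from one device over several weeks can match exactly one row in site A's shared data.

```python
# Site A: identity -> sequence of weekly cohort IDs (shared or sold).
site_a = {
    "alice@example.com": (1543, 2210, 877),
    "bob@example.com":   (1543, 990, 877),
    "carol@example.com": (402, 2210, 877),
}

# Site B: cohort IDs observed from one anonymous device over three weeks.
observed = (1543, 2210, 877)

# A single cohort ID matches thousands of people; the sequence of IDs
# over time often matches exactly one.
matches = [who for who, seq in site_a.items() if seq == observed]
print(matches)  # ['alice@example.com']
```

So even without any shared account, a few weeks of cohort history acts like a fingerprint once one party can tie sequences to identities.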
This is really disappointing. They failed to address the very basic privacy requirement, given that this is billed as privacy tech. Apple tackles this head-on by making the GUID per-app, precisely to ensure users cannot be tracked across apps.
This tells you where Google's priorities are; not that it was in question before, but it just makes it clearer.
I've pretty much given up on privacy. Not to say that it shouldn't be pursued, but I think more effort should be put into security and into stakeholders who are honest and won't abuse your data to begin with. At the end of the day, I do not believe a trustless environment is sustainable.
Kinda sucks for Google that within 2 months of them beginning their trial it's already got a million holes in it.
Do you think they keep going because they don't actually care about the privacy implications, or do you think they'll try to "legislate" their way out of it by adding something to the EULA of the FLoC program saying you can't share IDs? So they can say "see, we don't allow it" and pretend no one is going to do it behind their back.
I thought FLoC was supposed to become yet another DOM API that any Javascript of any web page you visit can access (if Google got their way). Where would there even be an EULA to sign?
That’s how you find holes, by running trials. Remember that this isn’t a privacy regression; they’re trying to find an ad-friendly replacement for third party cookies, which can do cross-site tracking without any need for holes.
It’s a regression because they’re taking control away from users: you can decide to not allow third-party cookies, but you can’t opt out of FLoC very easily.