Hacker News

It would be amazing if you could build and send fake profiles of this information to create fake browser fingerprints and help track the trackers. Similarly, creating a lot of random noise here may help hide the true signal, or at least make their job a lot harder.





Unfortunately fingerprinting prevention/resistance tactics become a readily identifiable signal unto themselves. I.e., the 'random noise' becomes fingerprintable if not widely utilized.

Everyone would need to be generating the same 'random noise' for any such tactics to be truly effective.


A sufficient number of people would need to, not everyone. And if I were the only one, then tracking companies wouldn't adjust for just me. Basically, if this were to catch on, ad trackers wouldn't adjust until there was enough traffic for it to work. Also, that doesn't negate the ability to use this to create fake credentials that aid in tracking ads back to their source.

They don't need to adjust.

Here's a real-life example: You show up alone at the airport with a full-face mask and gray coveralls. You are perfectly hidden. But you are the only such hidden person, and there is still old cam footage of you in the airport parking lot, putting on the clothes. The surveillance team can let you act anonymous all you want. They still know who you are, because your disguise IS the unique fingerprint.

Now the scenario you're shooting for here is:

10 people are now walking around the airport in full-face masks and gray coveralls. You think, "well now they DO NOT know if it's ME, or some terrorist, or some random other guy from HN!"

But really, they still have this super-specific fingerprint (still fewer than 1 person in a million wears this disguise) and all they need is ONE identifying characteristic (you're taller than the other masked people, maybe) to know who's who.

They didn't need to adjust their system one bit.
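The arithmetic behind the disguise analogy can be sketched out Panopticlick-style. This is a minimal illustration with made-up frequencies (the 1-in-a-million disguise rate comes from the comment; the height bracket and population figures are assumptions for the example): each rare attribute contributes bits of identifying information, and they add up fast.

```python
import math

def surprisal_bits(p: float) -> float:
    """Identifying information (in bits) carried by an attribute
    observed with probability p in the population."""
    return -math.log2(p)

# Illustrative numbers only, not real measurements.
population = 8_000_000_000
disguise_rate = 1 / 1_000_000   # "fewer than 1 person in a million" wears it
height_rate = 1 / 10            # one extra trait, e.g. a height bracket

bits = surprisal_bits(disguise_rate) + surprisal_bits(height_rate)
anonymity_set = population * disguise_rate * height_rate

print(f"{bits:.1f} bits of identifying information")
print(f"expected anonymity set: ~{anonymity_set:.0f} people worldwide")
```

The rare disguise alone is worth ~20 bits; one mundane extra trait shrinks the worldwide crowd sharing your exact combination to a few hundred people, which is why the surveillance team never had to change anything.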


It's kind of like how people used to make fun of the CIA types and "undercover" operatives.

Look for the guy wearing a conspicuously plain leather jacket and baseball cap. "Why hello there, average-looking stranger I've never met. Psst, 'tis a fair day, but it'll be lovelier this evening." "Oh ... it's Murphy the spy you want."

Also, while searching for a response I found out the CIA declassified a bunch of jokes several years back. [1] Most are already dead links on CIA.gov, yet a few remain. Another one has people commenting on the CIA. [2] "These types are swin- Ask in Langley if they work for the CIA. Every- Ask in Langley. They will tells one knows them." 'You, it's the big building behind.'

[1] https://nationalpost.com/news/the-cia-has-declassified-a-bun...

[2] https://www.cia.gov/readingroom/document/cia-rdp75-00149r000...


The garbage in the last sentence of this comment is due to the second link including incorrectly OCR'd text from an image of a newspaper using a two column layout. Both links are very amusing.

I think this is a slightly different case, no? If the ad network is using a very high precision variable to soft-link anonymized accounts, then randomizing the values between apps should break that.

Your analogy applies more to things like trying to anonymize your traffic with Tor, where using such an anonymizer flags your IP as doing something weird vs other users. I’m not convinced simply fuzzing the values would be detectable, assuming you pick values that other real users could pick.
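The "pick values other real users could pick" idea can be sketched like this. It's a toy example, not a real anti-fingerprinting implementation: the value pools are hypothetical stand-ins for whatever a survey of real browsers would show, and the attribute names merely echo common web APIs.

```python
import random

# Hypothetical pools of values that real browsers commonly report.
# Sampling from these, rather than emitting uniform noise, keeps the
# fuzzed fingerprint inside the crowd instead of flagging it as synthetic.
COMMON_RESOLUTIONS = ["1920x1080", "1366x768", "2560x1440", "1536x864"]
COMMON_TIMEZONES = ["America/New_York", "Europe/London", "Asia/Tokyo"]
COMMON_CORES = [4, 8, 12, 16]

def plausible_fingerprint(rng: random.Random) -> dict:
    """Draw one fake-but-plausible fingerprint, e.g. per app or session."""
    return {
        "screen": rng.choice(COMMON_RESOLUTIONS),
        "timezone": rng.choice(COMMON_TIMEZONES),
        "hardwareConcurrency": rng.choice(COMMON_CORES),
    }

# A fresh draw per app breaks high-precision soft-linking across apps,
# while every individual value remains one a real user could report.
print(plausible_fingerprint(random.Random(1)))
print(plausible_fingerprint(random.Random(2)))
```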


I'm sure the ad networks do a lot more than use high precision variables for soft-linking.

These are professional networks with a ton of capital thrown behind them. They have pretty decent algorithms, heuristics, etc., and you don't make money (compared to the other data correlation teams) if you do simple dumb stuff. I'm certain they take into account those trying to be privacy-conscious, if only to increase their match rates to be competitive.


Swapping fingerprint details is different from your example since it happens immediately and out of view. You could change fingerprints very often, or create a new set for every browser tab. Additionally, as I pointed out before, they won't adjust until there is enough usage, and once there is enough usage the random settings are hard to distinguish because it's no longer 1 in a million. I get that they will keep trying to track down things that make browsing specific, but that is what updates are for. We need to at least make it hard.
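The adoption argument reduces to a simple ratio (a back-of-the-envelope sketch with made-up numbers): if N users each randomize among M plausible configurations, each configuration is shared by roughly N/M users, which is the crowd you hide in.

```python
def expected_crowd(adopters: int, configurations: int) -> float:
    """Average number of users sharing any one randomized configuration."""
    return adopters / configurations

# Ten disguised people at the airport: each configuration is effectively
# unique, so the disguise itself identifies you.
print(expected_crowd(10, 1_000))          # 0.01

# Wide adoption: thousands of users share each configuration, so the
# trackers' super-specific fingerprint stops being specific.
print(expected_crowd(10_000_000, 1_000))  # 10000.0
```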

That's why it should be the browsers and OSes that enforce such privacy measures... it shouldn't be an option that my Grandma needs to enable...

Unfortunately the fox is building the hen-house. They 'should' build products that improve my experience but they have very little incentive to do that when they get paid so much for the data they can extract. What would actually do it is regulations similar to financial regulations. OS/browser companies shouldn't be allowed to do business with data brokers. Then they would have one primary customer, the consumer, and competition would focus on the correct outcome. But 'regulation' is an evil word so we aren't likely to see anything like that actually happen.


