At some point companies like Fast Company have to accept that they are a significant part of the problem, and just writing articles about it is not enough. This page contains a ridiculous number of trackers. The site pulls in content from the following 3rd-party domains:
images.fastcompany.net is probably just used to serve static content from a cookieless domain and some of the others are CDNs for libraries, but the rest are all there for tracking purposes.
If you don't like what the ad industry is doing with people's personal data, please don't keep giving them ways to get to it.
This point was made recently about the NY Times running an editorial about how invasive advertising / tracking is, while also having advertising / tracking on their own pages.
The justification / defence given was that you want separation between editorial and business decisions. If the two are aligned, the freedom of the editorial team to write what they want will diminish, and the business will ultimately dictate what gets written.
If it feels a bit pessimistic, I agree, but pessimistically, I also feel like it's just realistic.
Their long-form journalism with moving graphics and images is popular and gets plenty of clicks, so think how awe-inspiring and what a major effect it would have to shut all of that off on the very articles that criticize it, just to give people a dose of what it could/would be like. It would be respected, get other articles written about it, etc.
There is a difference between a news platform publishing a news article or op ed story that denounces things the host company itself might be doing and a corporate PR office issuing a statement.
Is Google hypocritical for having flat-earth stuff on YouTube while running a huge satellite imaging operation?
(Edit: I should probably have replied to the "hypocritical" comment below :))
They also have the distinction of being the first page I've ever run across where temporarily enabling javascript on their domain causes the page not to load, whereas leaving it off does make it load. Makes "forbid fastcompany.com" an easy choice.
This is surprisingly common. They keep the content on the page in order to make it available for crawlers, and then use JS to hide the content until it's determined you don't have an ad-blocker, or you've accepted some draconian terms, or whatnot.
Better to leave JS off if the site works without it. :)
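For anyone curious what that looks like mechanically, here's a minimal sketch of the pattern (purely illustrative; the bait class names and element ID are assumptions, not any particular site's code):

```typescript
// Hypothetical "hide content until the ad-blocker check passes" pattern.
// The bait uses class names that common filter lists hide; everything here is illustrative.
function detectAdBlocker(): Promise<boolean> {
  return new Promise((resolve) => {
    const bait = document.createElement("div");
    bait.className = "ad-banner adsbox"; // names most blockers act on
    bait.style.height = "10px";
    document.body.appendChild(bait);
    // Give the blocker a moment to act, then check whether the bait was collapsed or removed.
    setTimeout(() => {
      const blocked = bait.offsetHeight === 0 || !bait.parentElement;
      bait.remove();
      resolve(blocked);
    }, 100);
  });
}

// The article is in the DOM (so crawlers can index it) but hidden by CSS until the check passes.
detectAdBlocker().then((blocked) => {
  const article = document.getElementById("article-body"); // assumed element ID
  if (!article) return;
  if (blocked) {
    article.innerHTML = "<p>Please disable your ad blocker to continue.</p>";
  } else {
    article.classList.remove("hidden"); // reveal the pre-rendered content
  }
});
```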
The problem with this line of thinking is that it assumes every company is just one guy in a basement who's responsible for all facets of all operations. In the real world, an editorial department doesn't get to dictate what technology gets adopted by the SEO department, or how accessibility is implemented by the software development team.
It's easy to say "well company X should really simply do [Y] to fix [morally good cause]", but in reality that ends up being non-actionable because it doesn't take into account that due to bureaucracies, some things are easier said than done.
Another thing that I think is somewhat counter-productive is criticizing a news outlet for doing things that are at odds with what they write. News outlets are not about leading world-changing initiatives by example; they are about spreading information so others become conscious of issues they might otherwise not think about. I feel that saying "well, I see you don't put your money where your mouth is" somewhat weakens the importance of the point being made (i.e. if it isn't being done, then it must not be that important).
What are they doing with all this tracking data? It sure isn't giving me relevant ads. In addition to the often-noted situation where you buy something online and then get ads for that thing you already bought, I routinely see totally irrelevant ads and recommendations.
I've been on Facebook since around 2003. I've had an Amazon account for even longer. Their ads and recommendations for me are terrible.
Two recent examples from Facebook: they recommended I join a group for progressive Christian Asian-Americans. I am not Christian or of Asian descent. More recently they gave me an ad for maximizing tax write-offs for nannies. I have no children and make way too little to even consider hiring a nanny.
Facebook has my picture. They know my relationship status. They have the text of all my posts where I have never mentioned children. They even know my occupation. Whatever they are doing with this data isn't working.
Facebook's real sin is a garbage UI meant to confuse advertisers into spending money on audiences that don't align with the advertiser. Google is much, much worse with this.
The bid system means that the Christian Asians didn't do a good targeting job, and nobody else found you valuable enough to bid on, so the Christian Asians won the bid. You got served a penny ad because nobody else wanted to show you one.
The user was lumped into a poorly-constructed lookalike audience because of some stretched interest similarities.
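For what it's worth, here's a toy sketch of the auction dynamic being described, assuming a simplified second-price model with made-up numbers (real exchanges are more complicated, but the "only bidder wins for pennies" effect is the same):

```typescript
// Simplified second-price auction: the winner pays roughly the runner-up's bid, or the floor
// price if nobody else bid. All advertisers and prices here are made up.
interface Bid { advertiser: string; cpmCents: number; }

function runAuction(bids: Bid[], floorCents = 1): { winner: Bid; priceCents: number } | null {
  if (bids.length === 0) return null;
  const sorted = [...bids].sort((a, b) => b.cpmCents - a.cpmCents);
  const winner = sorted[0];
  const runnerUp = sorted[1]?.cpmCents ?? floorCents; // no competition -> pay the floor
  return { winner, priceCents: Math.max(runnerUp, floorCents) };
}

// A single, badly targeted bidder wins you for a penny because nobody else wanted the impression.
console.log(runAuction([{ advertiser: "poorly-targeted-group", cpmCents: 50 }]));
// -> { winner: { advertiser: "poorly-targeted-group", cpmCents: 50 }, priceCents: 1 }
```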
What you're experiencing is the confluence of two distinct issues:
1. The CTRs on digital advertising are dropping YoY. People DO NOT click on ads, even if they are saliently targeted. There are enough trash ads out there, there's so much malware and malfeasance in the ecosystem, that it's worthwhile for users to at best _ignore_ and at worst (for advertisers) _to block ads totally_.
2. The desire to target people that match specific criteria is rising because of misguided notions throughout the advertising and marketing industry. This has driven the entire industry to push machine learned modeling to find each and every set of eyeballs out there to try and justify spend, instead of offering worthwhile and valuable offers that compel consumers to interact. You have entire systems constructed on the notion that the more people you get an ad in front of, the more valuable it is, and agencies and trading desk operators that are looking at this as the sole KPI to focus on, to the detriment of everyone involved.
> There are enough trash ads out there, there's so much malware and malfeasance in the ecosystem, that it's worthwhile for users to at best _ignore_ and at worst (for advertisers) _to block ads totally_.
Having spent as much time on the Internet as I have, I miss those optimistic early years when this wasn't yet the case, or at least, it wasn't as bad as it is now. Oh, there were junk BS ads in the early public Internet (e.g. "Shock the monkey and win a free Peter Gabriel CD!"). But, back in the late 90s and early 00s, at least some of the ads led to something interesting, entertaining, or useful, and thus were worth the click. Either that, or I was a lot more naive and trusting than I am now. Perhaps both.
It sounds like what you're saying is that online ads are becoming less and less effective due to customers getting trained to ignore/block ads. However, companies are responding by attempting to improve their ad targeting instead of facing the reality that online ads aren't effective anymore.
What will happen in five years when the numbers force advertisers to face the reality that online ads don't work anymore?
The other day I came across a headline where Facebook announced that they removed 3 billion fake accounts. It made me wonder whether advertisers were paying to show ads to those accounts.
My view is that the bubble is bursting from the side of the advertisers realizing the in/effectiveness of ads; and from the side of the "advertisees", people generally becoming aware that ads are a security/privacy issue, and ignoring or blocking them altogether.
Relative to the amount of data tracking (theft of privacy) that goes on, most ads are shockingly irrelevant. It's about time the bubble popped.
I work in Ad Tech. This is on target. [..] People DO NOT click on ads
Was there ever a serious belief that people clicked on ads willingly and intentionally?
I mean a real serious "people want to click ads", and not a "of course people click on ads which are relevant and helpful - as all adverts always are, nudge nudge wink wink"?
A friend of mine worked for Acxiom, and when they looked themselves up they found 6-7 different profiles, all fantastically wrong, some clearly mixed with information from other people.
Admittedly it's a very private person that I'm talking about who deliberately obscures themselves by leaving incorrect or misspelled information when forced to give any information, and who moves, switches jobs often, has an odd name, and changed that odd name through marriage. But I think that a lot of us are falling for the marketing of these intelligence companies, rather than judging them by predictive effectiveness because we don't have any access to that.
There's no real theory of mind behind all of this stuff; it seems to be that it will only ultimately be effective for creating black/whitelists where you don't care about a lot of false positives.
> A friend of mine worked for Acxiom, and when they looked themselves up they found 6-7 different profiles, all fantastically wrong, some clearly mixed with information from other people.
With my tin foil hat on tight, maybe they create shitty profiles on all their employees so they don't get freaked out and run for the hills!
I assume in terms of "how did they know this about me?" a la Tom Cruise being advertised a particular brand of alcohol based on an automatic eye-scan whilst walking through a public space ["Minority Report"].
I used to be "creeped out" about this about 10 years ago. Nowadays I am not.
To be honest, a part of me wants this type of thing. I'm all for exploring new products and "discovery" and so forth. However, if the "Check out this product you just bought" problem could be solved, I'd be all in for "Hi User, check out this neighbouring product" should that actually be relevant; Amazon is almost OK for example, but that's an "almost OK".
Extreme Example: If I don't drive and have less than zero interest in sports cars, why would or should I get ads for a Ferrari? Today, I have no doubt I'd receive such an ad for somebody's giggles on the penny. Tomorrow, I dream not to.
I wonder how much of this is a scam on the advertisers too. Sure, the platform might have great detail for targeting ads, but if someone comes along and outbids on generic terms, it isn't going to do you much good as an advertiser either.
I used to lead the product team for a publisher that makes the majority of its income from ads. It was a constant fight between the commercial teams and us.
Commercial wanted to stuff every page with as many ads and trackers as possible. We wanted to ensure the user didn’t have a miserable experience. As the company liked getting money, we, and the user, generally lost.
I kept an instance of Chrome blocker-free to see the site in all its ad-stuffed glory, but after learning a bit about how things work, I block all the things with extreme prejudice for personal browsing. And ensure friends & family do too.
I think tracking page clicks is completely different to invasive data profiling. Anything that could theoretically be tracked server-side (like a link click) is fair game. The only reason it's done client side is easy implementation.
When a site is littered with ads, the content probably sucks, too.
Could ads make the internet more usable? For example, could they help you evaluate content before reading it, leading you to better-quality content sooner?
> I imagine you have some role in building the internet since you're on HN.
I do. I sit in a tiny corner where we just build websites without all that JS and tracking bullshit. (That's not to say that I never use JS, I just use it very sparingly.)
It's just a different philosophy. I don't need to know how many people exactly listen to my podcast. It's much more rewarding to me when I meet random people and they say "Oh you're that guy? I loved your last episode." That's 100 times more awesome than seeing a download counter increment.
Watched some YouTube yesterday; it kept showing me the same freaking ad like 20 times in a row. Sometimes I get the same 2-3 ads for weeks at a time. Not sure what the idea is - if you're going to show ads, at least make it a bearable experience. It's almost as if they're trying to trigger some breaking point.
Same experience, it's not clear if it's just one advertiser blowing a lot of cash at once, or Google deciding I really need whatever product, but man oh man when I hit one of those ad loops it puts me off of that product forever.
The most recent was Grammarly, for about a month straight 80% of the video ads Youtube showed me were the same two Grammarly ads going back and forth. I'm not sure why Google decided that I really need to fix my grammar... but I know sure as shit I'll never have anything to do with Grammarly, ever.
No, but if you spend a bit of time you can finesse the system. I haven't been using Facebook for a while, but a few years ago I noticed they had "likes" and "see more of this", so I liked as many Cthulhu-related ads as I could find and my ads pretty quickly looked like they were being served from Carcosa.
I suspect that we will not ever see all this information actually give us better ads. The people in charge of this sort of stuff typically just want to get more clients so they are happy to inflate. The problem I see is that there is a whole lot of personal data being sent around all over the place to fly by night companies with no incentive to protect it.
It amazes me that this story doesn't get more press.
The press thinks that Outbrain and Taboola are such geniuses, but they show the same ads they've shown for five years, which everyone I know has learned not to click on. (e.g. if you do click on one promising some kind of image, the one image you won't see is the one that was promised.)
Easy, it just means advertising doesn't cover everything. The fact that they know so much about you is valuable in itself, and they can sell access to that data, for example... to Cambridge Analytica. Advertising is just one of the ways in which corporations exploit information to manipulate people's behavior.
Please elaborate on how I've been misinformed in this respect. I've gotten downvoted a bit without much feedback. Also, since the distinction matters, I would like to point out that I claimed selling "access to" that data, not selling the data directly.
Facebook makes the data available to advertisers who are then responsible for targeting you well based on that data. If the advertiser makes terrible targeting decisions but offers a high bid for a click, then despite your low CTR and low conversion rate, you'll see the ad. Facebook is not choosing which ads you see. (The recommendations, sure.)
I think the technical term for these ads that stalk you after you have bought the product is 'remarketing'.
Amazon should know what they are doing, but they still show you ads for what you have just bought, even though they have good information. No idea what their excuse is for fluffing it.
However, small companies, e.g. a clothes shop, don't necessarily do the website themselves, getting an agency to do it. When they get to a certain size they bring this work in house, but the principles of cock-up are the same, in the agency or in house.
The agency doing the website have extra add on services for SEO, remarketing and other marketing snake oil. The people in these jobs are not the sort that can do database joins. They use products that have been marketed at them and generally work on second hand hearsay to determine what works, even though they do use graphs and data.
They are not necessarily going to know the products and even though they buy adverts they do not see themselves as in the advertising game, more in the marketing field.
The internet has raised everyone's boats over the last few years so these people can have some legit upward pointing graphs, what is less clear is whether it is demand, price or product that has got their clients online sales up. However, they can claim it is all their work and that the campaigns they have put on have changed something assumed to be static.
On this snake oil they can attract new clients to bring them marketing spend. They try their hardest but they are not data power users. I know that is tarring a whole industry with the same brush, but the people in this industry will be 'power Excel users' rather than think to become 'novice SQL users'.
To be honest logging in to some of these Facebook account screens is quite hard work, going through the right channels to get the login details from the client can take all day. Uploading data probably needs help from a developer on a different team, plus you have to go home at 5.30 on the dot.
People that are good at selling product believe in customer service, word of mouth, product and a multitude of things that are a million miles away from remarketing. So the industry of online ad sales invariably has hit and miss results because the people in the business are unlikely to be that smart.
It seems like a fundamentally difficult problem to know when you've already bought something.
Obviously, it's solvable in simple cases, like when you browse and buy from the same vendor. But in other cases, it's not, like if you browse around multiple places and buy from one, or if you look around online and buy in person.
Basically, the information that you're interested in buying is going to spread around to certain places, and the information that you've bought is also going to spread around, but there is nothing to guarantee that it will percolate to all the same places.
You also have to consider the trade-off an advertiser faces. They'd probably rather show an irrelevant (already bought) ad than miss a chance to show a relevant one to someone who is still actively shopping. So they'd probably choose to err on that side.
Amazon knows. Google knows too, if your Amazon emails went to your Gmail account. Your credit card company has a good idea of who you bought from, but not what.
I am curious given that I have an Ad-blocker for most websites and am not on FB. How do you even see ads? Have FB et al found a workaround for ad-blockers?
Thanks, this looks great. I put Pi-hole on a VPS I use, but once I started tinkering with it, it broke.
It's a nice project, but seems fragile. I'm going to try AdGuard Home. I like that it has the ability to control access, so I feel safer running it in the cloud.
For Android there's a FOSS alternative (no root needed) - DNS66, alternatively if one has root there's always AdAway.
The only thing that stops working are monetization schemes in games - "watch this ad to double coins!", etc.
Browser-wise - Firefox on Android has working desktop extensions and it's glorious.
When it comes to native adverts (like timeline stuff in Instagram or Facebook) I just stopped using these platforms on my phone. Less time wasted on mindless scrolling.
I have yet to find a good way to block ads in podcasts. Skipping takes effort and I don't have my hands available for fiddling with the headphone remote.
Facebook spends a lot of time making it hard to block ads on their site. I have turned off adblocking for the site because it is often ineffective for me and sometimes breaks things on the site.
Facebook puts a lot of time and effort into obfuscating their source code. They generate random classes and IDs and interweave containers (divs, spans, custom HTML elements) on each page load so that software can't tell what is content and what is an ad.
What an absolute shame that they waste the time of all their highly-skilled developers on garbage like that. The user has specifically indicated, by running an ad-blocker, that they aren't interested in being marketed to. At least for me, ads that bypass my blockers really irritate me, to the point that whatever company was unfortunate enough to win that bid actually hurts themselves, as I will be more likely to select a competitor's product.
It is definitely impressive. But it's something that I think could be defeated pretty easily with a pretty simple machine learning model. Could be a fun project!
Just curious, did you go to Harvard with Zuck? That's the only way you could have been on Facebook that long, as he only created the earliest version in fall 2003.
I have yet to see a real whistleblower report by someone deep into the ad industry revealing what's really going on there. There are almost no details about what's really happening on a technical level (beyond the little things we already know), and whether large data sets are abused in the sense that they get routinely de-anonymised.
I worked some in marketing and targeting modeling within retail finance.
From what I saw, the data privacy rules and client contracts are followed scrupulously, and infractions are noted and remediated. Encryption slowed us down at least 10X, and bureaucracy another 10X.
While frustrating at times, I appreciated that most if not all actually wanted to stay well within the legal guardrails.
Of course, this is one person's experience, and obviously can't apply to every company. Genie and cork, toothpaste, and all that.
What many companies do is they sprinkle some magic "anonymous" pixie dust on their data and then tons of laws no longer apply, even if the data is trivially identifiable stuff. And location data is often of that nature.
The will is there sometimes to attempt this, which is why following risk procedures is tied to performance ratings at better companies. Companies also delete data after specified time, as required by numerous legal entities.
And engage with third party matching services so that personally identifying information is salted and hashed until the print shop
Do you know whether there are actual audits happening by third parties?
I've never had anything to do with ad tech, but the (few) certification processes I've been involved with were largely documentation and "yes, we do have backups. no, unauthorized persons cannot access our servers" promises without anybody actually auditing/testing.
Do they have independent auditors regularly look at the tech and operation to verify that they actually do conform with the laws and don't just claim to?
Many. It represents a massive cost. Hence the emphasis on employee incentive to do the right thing in the first place.
Further, any time there is a broad news story (e.g. Wells Fargo's bogus accounts) or legal item in pipeline with broad impact, you can be assured every company looks to make sure they aren't in bad shape.
What the author is referring to is usually branded, so it's difficult for people on the sales side to give details without revealing who it is unless you're in the weeds (technically), and I'm sure you can appreciate why they might not want to, so here's the gist:
- Companies collect a number of "records" keyed to an email address. How they do this varies from running microsites/forms that collect data directly from people, to scraping LinkedIn and resumes.
- Data vendors will share this data with each other by hashing the email address. Almost always with md5, and rarely salted.
This means you can enrich data either anonymously (relying on the fact that the hash is unsalted and deterministic) or in an identifying way (because one of the parties has the real email address). In both cases, it's just a LEFT JOIN.
They'll buy the md5-email-to-cookies from one provider (e.g. Lotame, Liveramp, etc) then use that to onboard email+contact data they've purchased from companies that have email address-to-personal data (e.g. MVF, ZoomInfo, etc).
If that was done purely as a lead qualification step, it's a way to turn legitimate content syndication done by a third party into direct marketing, but there's no technical reason the records need to be leads or have any level of qualification -- and only a weak market force (poor conversion rate) prevents it from being more widespread.
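To make the mechanics concrete, here's a rough sketch of that kind of enrichment join (hypothetical vendors, field names, and records; the point is just that an unsalted md5 of a normalized email works as a shared key between parties that never exchange the raw address):

```typescript
import { createHash } from "crypto";

// Unsalted md5 of a normalized email: any two vendors hashing the same address get the same key.
const md5Email = (email: string) =>
  createHash("md5").update(email.trim().toLowerCase()).digest("hex");

// Vendor A (e.g. an onboarding provider): cookie IDs keyed by hashed email.
const cookiesByHash = new Map<string, string[]>([
  [md5Email("jane@example.com"), ["cookie-123", "cookie-456"]],
]);

// Vendor B (e.g. a contact-data seller): records that still carry the raw email.
const contacts = [{ email: "jane@example.com", title: "CTO", company: "ExampleCo" }];

// The "LEFT JOIN": enrich each contact with whatever cookie history matches its hash.
const enriched = contacts.map((c) => ({
  ...c,
  cookies: cookiesByHash.get(md5Email(c.email)) ?? [],
}));

console.log(enriched);
```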
This comes from discussions and lectures by people working on engineering at some ad aggregators - at least one said that in the end they have a table with 300 million IDs, one for each person in the country, and they key all the data they can link to that person to this ID. In principle this data is anonymized. But does that make a difference? At least in health care people worry about HIPAA and do audits to minimize re-identification risk, but I'm not sure if adtech companies do anything like that. So yes, a good data scientist can find any person they want in the data, but even otherwise I think these companies can work with a fairly meaningless definition of anonymization to get away with all this crap.
It's basically impossible to anonymize data. There are numerous papers about how little data is enough to uniquely identify people. Things like the zipcode where you start your commute and the zipcode where your commute ends are enough to identify the vast majority of people.
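A tiny made-up illustration of why: a couple of quasi-identifiers like home and work zipcode can collapse an "anonymous" dataset to a single candidate, no name required:

```typescript
// Hypothetical "anonymized" records: no names, just quasi-identifiers.
interface Row { homeZip: string; workZip: string; age: number; }

const dataset: Row[] = [
  { homeZip: "94110", workZip: "94043", age: 34 },
  { homeZip: "94110", workZip: "94105", age: 29 },
  { homeZip: "10001", workZip: "10017", age: 41 },
];

// Anyone who knows just your commute can filter the "anonymous" data down to you.
const candidates = dataset.filter((r) => r.homeZip === "94110" && r.workZip === "94043");
console.log(candidates.length); // 1 -> re-identified, even though no name appears in the data
```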
In our case, the postal code (Canadian) and almost any other piece of data is uniquely identifiable. Through a quirk in the layout of our street, my wife and I have the only house in our postal code. Add age, gender, birth month, hair colour, t-shirt size... pretty much anything, and you’ve reduced from 2 possibilities to 1.
I still want to try dropping a letter in a mailbox from a different city with just our postal code written on it and see if it arrives.
It's impossible to anonymize some data. If you're including demographics and locations then yeah, it's going to be hard or impossible to anonymize. If you're using surveys on emotional state or perhaps newsgroup comments? That's not so hard.
> I have yet to see a real whistleblower report by someone deep into the ad industry revealing whats really going on there.
It's because nobody cares until Congress starts hauling in software developers and "data scientists" from tech startups; everyone is content watching Zuckerberg be the face of society's discontent.
As long as they keep asking the wrong people the wrong questions there is no need for everyone else to acknowledge their role in the problem.
Advertising is applied sociology. As such, advertisers want to aggregate large data sets into large segments that are easy to manipulate statistically. (Where the central limit theorem starts working.)
There is no demand for personal data or de-anonymization because that stuff doesn't sell.
The personal data collection is done by Google, Facebook et al not for advertising purposes. They're collecting it because they view it as a resource and a currency in the future de-anonymized world. (Think China's "social credit" system, except on a larger scale.)
Source: I've worked in the ad industry for over 15 years.
> There is no demand for personal data or de-anonymization because that stuff doesn't sell.
Say what?? I’ve also worked in the ad industry and deanonymized personal data is shared and sold routinely. You speak of statistics and large segments but every advertiser I’ve interacted with is either doing individual-level targeting or striving towards it.
To wit, a few weeks ago there was a discussion here about a method by which you could figure out how fast a browser/machine could compute a SHA-512 hash, and that this was being used to fingerprint even users who had cookies, images, and JavaScript disabled.
Hence why technical solutions have been and always will be the wrong approach. If you are worried about privacy, then work to make tracking illegal. That's what this article is doing.
I don't recall the details, but it had something to do with a would-be security feature in the browser that computes the hash of something before following a link.
> every advertiser I’ve interacted with is either doing individual-level targeting or striving towards it.
Only if they're clueless.
For example: Nike really wants a dataset of "people who buy expensive sneakers for fashion purposes".
This dataset is probably hundreds of millions of anonymous people, and not personal data. If there was a way to get this dataset directly, Nike would do that in a heartbeat.
Unfortunately, as of 2019 the only way to get something like this today is by, e.g., crossreferencing credit card purchase info with Twitter browsing logs, which leaks a shitload of sensitive private data.
For ad purposes personal data collection is a bug, not a feature.
There are many sites that have quite accurate PII for large fractions of the American population. Think job boards for instance. One approach that I have seen used successfully is simply buying whatever data such companies are willing to sell.
I’m not necessarily talking about demographics, but rather clickstream data, and anything categorical that you can get your hands on. You join that to your CRM and build a model to predict buyers. A really good predictive dataset for marketing purposes is simply a list of time stamps and names of visited domains. With the right feature engineering, that becomes an excellent proxy for demographic data, current buying appetite, and a whole lot more.
At the end of the day, you don't even necessarily need to know what the data means as long as it's predictive. And there are plenty of brokers out there who will let you test their data for free with an agreement to pay if you end up using it at scale. All of this revolves around using PII for matching.
I’m sure what you’re saying is true for some marketers, but there are billions being made on PII keyed data.
Let me repeat again. PII is a crutch used for matching, because current matching/segmenting technologies are crude.
Advertisers don't want PII. What they want is target audiences with predictive power, which means data sets where the central limit theorem holds sway. (I.e., thousands and millions of people lumped together.)
If advertisers could get at these segments directly without PII, they'd do it in a second.
I think in some circles the meaning of the term "whistle-blowing" has drifted enough that people use it interchangeably with "reveal."
Given your 15 years of experience, what resources do you recommend to HN so that we can learn more? Can you give us a "life of an advertising bit," eg: A person visits a website on their phone, that information is accompanied by x data on their phone, goes to the initial ad server, this information is compiled against data from sources a,b,c, etc ...
My question to someone from the ad-industry would be if there are known "intersections" between these anonymised data sets ad-tech is using to sell as much as possible, and companies who buy these data-sets to connect them to real identities.
Especially because the web is full of Privacy notices people agree to, and I guess in some of those people actually agree to have their anonymous browsing data connected to their real identities.
Another very frustrating article about advertising. I don't doubt any of his conclusions, or his moral claims. But I do want more details. What are the various tracking and correlation mechanisms, and what do they look like in practice? Does anyone actually use canvas the way we all fear, or are smartphone apps and Android itself doing all the heavy lifting? Would a user's privacy settings on their phone mitigate any of this? Why or why not?
I appreciate that we know in general what the answers to some of these questions might be, but I'd really like if more of these articles got into specifics.
That the ad industry tries to justify its data hoarding with questionable consent is certainly a factor.
Ads mostly suck, that is why people have blockers and why those are so successful.
If I do not want to get tracked or targeted by ads, I have to jump through many hoops that are not manageable by average users. This is why I plainly hate this industry.
I don't really care about your models; while they might provide interesting data, I do not want to be part of them.
Oh, and the ad industry is currently in the process of sanitizing the larger platforms, because you do not want to be associated with losers. It is the great ad industry we are talking about. I don't like them because of that too.
These are the larger issues that prevent useful discussion about specific tracking mechanisms.
Tracking still mainly relies on unique identifiers on cookies and mobile apps.
The main thing I see fingerprinting used for is tracking users between mobile apps and websites. That mainly relies on IP addresses.
The best way to track users is just to ask for information. Ask for a users email address to log in and you have a simple means to track a user across devices.
My understanding is that websites use Canvas to generate a unique or semi-unique fingerprint of your browser, whether or not a cookie has been set. It's absolutely true that this is technically possible. Most people don't have your exact resolution/cpu speed/fonts installed/etc, so you are somewhat unique even if you have the same browser and OS as other people. If you block Canvas, you can actually see some websites request it as you log in. Amazon produces a popup asking permission to do canvas-y things during the login process on my computer, for instance.
I think the tone of most of the conversations around Canvas I've seen are a bit more catastrophic. Their argument usually goes something like: "Most websites are making UUIDs of your Canvas profile and tracking you everywhere, and therefore your cookie blocking and VPN are useless!" Bear in mind, I don't mean to strawman here, that's just a version of the argument I see most often.
What I personally suspect is that Canvas fingerprinting is used to supplement other tracking or verification. For example, I have a valid Amazon account, an Amazon cookie is properly set, and Amazon also checks my Canvas information to make sure nothing looks too out of place. i.e., my cookie was probably not replayed, since my Canvas, IP address, and credentials check out. Presumably, my Canvas information cannot generate a true UUID, but it is something like 1 in 10,000. Enough to use it for additional verification.
Now, is any of this correct? Are folks' most paranoid fears accurate? Is my belief that Canvas is supplementary fraud detection accurate? Whatever answer is correct, it's unlikely to be broadly uniform across all websites. But, the point is, I'd really like to hear from an engineer about how Canvas is used.
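For reference, here's roughly what the canvas step looks like (a minimal sketch of the general technique; real fingerprinting scripts mix this with fonts, WebGL, audio, and plenty of other signals):

```typescript
// Draw fixed text and shapes; small rendering differences across machines (GPU, fonts,
// anti-aliasing) make the pixel output slightly different, so a hash of it becomes one
// fingerprint signal among many.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "no-canvas";
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(10, 10, 100, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint-test-123", 2, 2);
  // Hash the rendered pixels; combined with UA, resolution, installed fonts, etc. it narrows you down.
  const bytes = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

canvasFingerprint().then((fp) => console.log(fp));
```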
This is fear mongering. I work in the ad industry as well - and I can promise you - 99% of the companies in the ad industry are NOT leading the way when it comes to tracking you and your data. They are incompetent and can barely do their jobs. Just take a look at any display advertising for belly fat ads and you know for sure they aren't targeting you. The 1% however (Google, Microsoft, Amazon, Apple, Facebook) ARE tracking you and are damn good at it. Google and Apple being the main offenders here, due to their cell phones. And when you browse around the web you WILL see those ads following you. But as much as it sounds terrifying - there just isn't much of a market for geo tracking individuals. Targeted ads aren't the enemy here anyway - it's governments using this data in ways that infringe on our freedoms that is the issue. (Like freedom of the press, for example....) And THIS is what we should be worried about.
To be clear: It is not the ad industry we have an issue with here. It is the data collectors - Google - Apple - Facebook. They are the irresponsible parties at fault.
This sounds a little too far in the other direction to me. From working with a handful of advertising platforms interested in the web property that I've worked with for the past 8 years, I agree with your point re: incompetence. However, I disagree that it's _only_ the big boys who are gathering large portfolios of data on individuals that span multiple web properties, apps and spheres of data. It seems to me that your data is part of the marketplace and anyone can play.
Regulations would be nice. A technological solution that fundamentally obfuscates an individual so that they can buy their groceries without being big-brothered would be nicer.
>> 99% of the companies in the ad industry are NOT leading the way when it comes to tracking you... The 1% however (Google, Microsoft, Amazon, Apple, Facebook)
This representation by percentages is a bit disingenuous. Yes, numbers-wise you may be correct, but if you consider the companies' enormous resources and amount of influence and impact, Google et al make up the 90+%.
My point was that most ad tech companies are not the issue; it is the big boys that are. And ad tech itself isn't the issue either: ads using targeting data are a symptom of the problem, not the problem itself. The collecting of the data (by the 1%) is the problem.
I would ask you to provide links for several of the claims you make, because they just seem untrue. You have a very biased perspective; of course you're going to blame the giant platforms if you work in the ad industry.
> It is the data collectors - Google - Apple - Facebook. They are the irresponsible parties at fault.
Like you say, targeted ads (revenue generative purposes) are not the problem. They are in fact neutral-to-beneficial. It's government surveillance that is the problem.
To say that FAG are the problem is wrong. FAG are motivated by revenue, and operate in first-world (corrupt, but only economically corrupt [for our purpose]) regimes. They will absolutely protect their data from misuse. When they misstep, they get backlash and take corrective action. When they make mistakes, they actually learn from them. Not because of the slow but heavy hand of government, but because of highly reactive market pressure.
So we don't have to worry about them, they are not irresponsible. Google and Apple, e.g., do not sell your data to the bad guy, not directly. They sell access to demographics that only they know, and they have strong internal controls. Google is f'ing crazy about the way they protect data, and I mean from internal abuse. Even in the face of GDPR, does your startup do anything to protect PII from internal abuse? No. Yes, FB has sold direct access to "you", but they have learned and will continue to learn and be subject to public/market pressure.
Who we have to worry about are the people that are not on our radar as first-world elite HN readers: governments. Snowden was revelatory that even our own [US] government is dangerous, to a degree that dwarfs anything FAG does.
The thing is, in theory all of this data is pseudonymous, or even anonymous, as the industry creates all of these profiles and does not attach real names to them.
Many many companies in this industry attach real names and email addresses to this data, and even the ones that aim for pseudonymity do it typically in a very weak way (such as an unsalted md5 hash).
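To illustrate how weak that is: with an unsalted md5, anyone holding a list of candidate emails (a CRM export, a breach dump) can reverse the "pseudonymous" key by simple lookup. A sketch with made-up addresses:

```typescript
import { createHash } from "crypto";

// Unsalted md5 is deterministic, so the "anonymous" key is trivially reversible by dictionary lookup.
const md5 = (s: string) => createHash("md5").update(s).digest("hex");

const candidateEmails = ["jane@example.com", "john@example.com"]; // e.g. from a CRM or breach dump
const lookup = new Map(candidateEmails.map((e) => [md5(e), e]));

const pseudonymousKey = md5("jane@example.com"); // what one vendor shares with another
console.log(lookup.get(pseudonymousKey)); // -> "jane@example.com"
```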
Do you have proof? At my previous ad tech employers, data was always stripped of personally identifiable information. The legal and policy teams went to great lengths to ensure data was anonymous.
Even government departments trying their very best to anonymise data tend to mess it up; what makes you think an advertising company has any interest in doing the same?
Their entire existence is built upon deanonymising and monetising that data.
Nothing makes me think this. But since officially everything is pseudonymous, how would it be possible to have a website dedicated to showing the data someone has about me?
Privacy International did a piece on this kind of ad-tech data surveillance, and they were saying "Here are the shocking details of what an ad company knew about me when I asked them to share everything with me in line with GDPR".
Then I emailed Privacy International, asking: how did they even know your real name?
To which they replied "Oh, yes, we forgot to add that to the article, I actually gave them my name and my cookies, so they could tie all their cookie IDs to my name"
They also promised to write an article about the implications of pseudonymous data and the possible link to real identities, which they also never did.
I helped implement the GDPR data request flow at a company with segment data.
Some of the big challenges of the system were a) convincing requesters that we couldn't look them up by name and b) teaching them how to find the things we could look them up by (cookie ID or Apple/Android advertiser ID).
It was one of the ugly ironies of the system that we had to build a name collection capability when we didn’t want it.
Depends who it is; there was definitely a "my real name" interplay between eBay, Facebook and Amazon, leaked by Facebook. I don't even use a "smart phone."
I mean, even without originating an actual ID, how many bits do you think it takes to uniquely identify you in your geo? It wouldn't take much in the way of resources to track everyone in the US who uses the internet; a few trackers on popular websites, localization services: I'd be shocked if there weren't things like this in existence already for bail bondsmen/private investigator types.
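Back-of-the-envelope: singling out one person from the whole US online population only takes about 28 bits of information, and every quasi-identifier a tracker observes contributes a few of them (assuming roughly 280 million US internet users; the figure is an estimate):

```typescript
// Bits of information needed to uniquely identify one person out of a population.
const bitsToIdentify = (population: number) => Math.log2(population);

console.log(bitsToIdentify(280_000_000).toFixed(1)); // ~28.1 bits for the US online population
console.log(bitsToIdentify(8_000_000).toFixed(1));   // ~23.0 bits for a large metro area
```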
Had a similar experience - I worked at a big ad agency that hired a new web developer for one of our clients. As they are one of the biggest website providers, they could identify most of the people that visited a client's site - it even automatically saved the identified user's profile pic from Facebook into their database.
As an engineer, I'm amazed. As a person who doesn't want the person on the other end of every website I visit to know who exactly I am, I feel violated. At this point though, all I feel I can do as a hapless consumer is to desensitize myself to said violation.
I use a VPN, Pi-hole, Ghostery and Firefox. All of these are relatively recent additions though, so if a website can get my email and link it to an already existing database of all my collected data up to that point, I'm buggered anyway.
Me and a few other people have been pushing for ethical advertising. It doesn't have to be this way.
We meet monthly to give each other advice and help each other out. It's hard, it can take more time, but it's been a great experience, and I hope that this starts to become the norm over time.
I've been trying to do a monthly report on the RH blog, but I'm behind by a couple months. I've actually been thinking more about how to get more info about this. If you have any specific questions, I can try and answer them for you or bring them back to the others.
I'm building a reading game and I want to run simple ethical ads for fantasy novels, live-play RPG podcasts, and the like on the homepage of my app at a later point. Right now I'm at the stage of initially building my audience and need to advertise to the world that my game exists.
Where else can I buy an ethical ad to find my potential players? My best idea so far has been to purchase an advertisement on some of those podcasts I mentioned, since they are just reading a sentence or three during their normal recording so there is no additional tracking really going on that I'm aware of.
You four would be a great option for reaching other developers. Imagine if you could search for ethical advertisers by category with links to reach out. Does this exist already?
Good, as software engineers, machine learning researchers, etc. we have the real power here. Just stop working for these companies. There are jobs and good incomes to be had in other industries. Tell recruiters you are not interested in ad industry.
Somebody else will work for them though. While I'm not saying "so it might as well be you" (unless you're going undercover), "not doing it" might not be enough. Actively undermining their efforts may be necessary.
Actively undermining is good! Specifically fighting for new regulations and laws around this issue.
I think people in tech have an obligation around this. We understand it better than most people. It's our responsibility to explain it in ordinary terms and champion reining it in.
Any company that offers "lead enrichment" data has already crossed the privacy line. Google that term to see the companies involved, if you care to know.
Open source doesn't fix much. Most ad tracking today happens via well understood open technology, and then adtech companies build profiles server side. Here's something very simple and fully opensourceable that would still provide lots of tracking capability:
* Get lots of sites to put an img tag that references your site, perhaps by paying them a tiny amount per visitor
* When you get a request, assign a cookie if there isn't one. Log the cookie and the referrer.
* Sort your logs by cookie. Each cookie represents someone's browsing history, and the more complete your distribution of pixels is the more complete your view is.
You can opt out of this client side by blocking the requests (adblocker) or by using a browser that blocks 3rd party cookies (ex: Safari) but open source doesn't do much here.
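For concreteness, a minimal sketch of that pixel-logging flow in Node (hypothetical; a real system would also record the full page URL, a timestamp, IP, and user-agent, and write to real storage instead of stdout):

```typescript
import { createServer } from "http";
import { randomUUID } from "crypto";

// 1x1 transparent GIF returned for every pixel request.
const GIF = Buffer.from("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", "base64");

createServer((req, res) => {
  // Assign a cookie if there isn't one.
  const existing = /uid=([\w-]+)/.exec(req.headers.cookie ?? "")?.[1];
  const uid = existing ?? randomUUID();
  if (!existing) {
    res.setHeader("Set-Cookie", `uid=${uid}; Max-Age=31536000; SameSite=None; Secure`);
  }

  // Log the cookie and the referrer; grouping this log by uid later yields a browsing history.
  console.log(JSON.stringify({ uid, referrer: req.headers.referer ?? null, ts: Date.now() }));

  res.writeHead(200, { "Content-Type": "image/gif", "Cache-Control": "no-store" });
  res.end(GIF);
}).listen(8080);
```

Sites embed it with something like `<img src="https://tracker.example/p.gif" width="1" height="1">`, which is all the first bullet above requires.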
In the parent post's defense, open source would actually allow for looking at exactly what data is sent and how, and hold the companies accountable. This would be dead easy, without the need for whistle-blowing or reverse engineering.
At the basic level, you have no idea what the system software is gathering from sensors, like the accelerometer, barometer, GPS, wifi signals, etc, how it is processed, and how often it is sent, and to whom. After you find out how that works, then the next logical step is to ask just what the heck the other side is doing with it.
Seems so. I also find it interesting that people have sudden moral realisations working in industries designed not to be morally respectful.
The ad industry is about convincing people to buy things they don't want using any trick available; the result here is entirely predictable and inevitable.
This became obvious a few years ago. I went to a startup pitch event (actually to get pointers on how to write my own) and was genuinely horrified at what all of the ad-tech guys were up to.
There is no reason to assume any moderation or morals here. The naked greed of surveillance capitalism is such that all of the precautions that might have looked like the province of tinfoil hat wearers are now justified.
As a citizen of the EU, I really want to find out what the advertising industry has on me. I once filed a GDPR Art. 15 request with Quantcast but forgot about it - they give you a link to S3 with a promise that there will be a ZIP containing data within 30 days.
What other hidden players are out there? I know next to nothing about the ad industry
IAB Europe is the industry group representing ad tech in Europe. Their member list will give you tons of organizations that make up the internet ad ecosystem. Note these are not "hidden"; they are the above-board members. Shady operators likely have no interest in joining industry groups, but if they want to be able to credibly sell to big advertisers they will be.
Hedge funds and intelligence agencies are fingerprinting all groups within our society by monitoring the responses of various target audiences to engineered stimuli. This comes in the form of observing the response to selective dissemination of content with a known effect on observers (you). A few years ago we heard about some outrage nobody did anything about when it became clear Facebook was manipulating reader emotional depression by showing some groups engineered posts.

This has become a higher dimensional problem since then. It is no longer just "Facebook is making some groups of people sad." Now these orgs are reinforcing complex concepts in our minds subconsciously. One that comes to mind is a motif you will see in advertisements once you are more aware of it: strong healthy black man rescues white woman from a bumbling and pathetic white male. The whole point is to exploit fracture points in society and shift the Overton window.

Being able to establish behavioral patterns triggered by engineered information dissemination helps them control groups of people for their own benefit. It's a psychological intelligence operation on an unprecedented scale. Hedge funds stay ahead of new trends to invest in to maximize ROI. Intelligence agencies similarly identify threats to their power structures and steer the minds of the consumers of social media to further their agenda and steer the beliefs of any group in a controlled direction.
This is paranoid. I remember the story about Facebook doing emotional manipulation through the feed, but you go from there to hyper rich hedge funds and shadowy intelligence agencies at the reins of public life shaping the course of history through social media black magic. I don't doubt there are bad actors trying to do bad things via advertising and social media, but this is quite the claim. The rich and powerful are bad enough without adding internet/advertising mediated mind control to the mix.
It's weird that the story you picked was a black man saving a white woman to describe the shift of the Overton Window. Where is the window moving to, and from where?
I don’t know about you, but the Snowden Revelations dramatically shifted my baseline of “paranoid schizo”. Before, I thought the prospect utterly bananas that “the government” was tapping every phone call, strong-arming ISPs, and intercepting mail en route to its destination.
Then I found myself in a rabbit hole of Tuskegee experiments, Operation Northwoodses, MKULTRAs, human-animal hybrids, and so on.
Now, the question I ask is, “is this technologically possible?”
I have similar feelings, don't get me wrong. I too have experienced that shift. I guess I just want to know, is what technologically possible?
I see two possibilities here: one is a handful of unimaginably wealthy and powerful hedge funds and intelligence organizations (heretofore unnamed) which are somehow coordinated in their efforts to shift public opinion to... something. The example given is something about black men, white women, and white men. I'm not sure what that means.
The other, IMO far more believable scenario, is that there are many interested parties using social media and advertisement in general to change people's minds about a myriad of topics and issues, which is not remarkable except to the extent that new technology and new techniques are being used, which we may not fully understand or be aware of.
The first requires a belief in a conspiracy among the hyper rich and powerful to create some ill-defined new world, the other does not. Both are technologically possible, but I find one more convincing than the other.
Edited to add that last sentence and to correct some awkward wording.
If I had to guess, I'd say that most of these people are aligned in factions, most of which probably have roughly similar interests and so are working fully or partially independently towards roughly the same things, probably with no little amount of internecine jostling for position.
And in the process they’re probably producing technological horrors simply because it’s convenient and effective to do so. No hard feelings.
Conspiracies happen all the time, and I would imagine that few ever come to light.
I'm going to keep getting downvoted here, but it seems like people collect a bulleted list of spooky and bad things a government has done, and imply a meaningful connection via the act of simply listing the events together.
So, in our same list we have:
- A Cuban false flag operation with military intent.
- A CIA Mind Control experiment which is implied to be widespread, but documents don't actually support this. Torturing a few hapless individuals with LSD is certainly terrible, but it's different from "widespread government control."
- A terrible, scientific & racist medical experiment carried out against a vulnerable group of people.
- And then, "human-animal hybrids" ... I'm not sure how to respond to this? A few folks in China did some questionable things, maybe? I'm not sure what you're getting at here.
These are all disparate groups, with disparate intent, and unrelated outcomes. Please, how are these related? Some people in power did some things that are morally wrong? Why include conspiracy theories in this list if that's the only similarity? If you simply wanted to discuss government transgression, you could avoid something as trivial as MKULTRA and instead simply mention U.S. slavery, or the Soviet famine, or any of the many government-run genocides in world history.
The idea is that all these things are related in their goal of trying to control people, either by using force or coercion. They were all done using the mechanisms of the US government. Those responsible also weren't prosecuted for their crimes.
The Tuskegee experiment wasn't about control; it was about studying syphilis. It was as terrible as it was racist, but it had nothing to do with government control.
A false flag operation is only about control in the loosest sense. You can't successfully false-flag your way into a military conflict with an allied country: it's only useful when tensions are high enough that there is already popular support for a military operation, but an excuse is needed. In that sense, other "excuses" for military conflicts work the same way. The spurious claim that Iraq had weapons of mass destruction was popular only because of 9/11 and sentiment about the Middle East at the time (a similar argument, for example, could not have been made about other countries with WMDs, such as France, Israel, and South Africa). I'm not suggesting any of these actions are acceptable, simply that the salient point here must be deception, not control.

Broadly speaking, any government-driven policy or program in some vague sense of the word requires control. There must be enough popular support for any initiative that it can be successful. Anti-smoking campaigns require coercion as well, and sometimes even deception. Smoking used to be wildly popular, but the government, as well as some powerful groups, were able to use emotional appeals and scare tactics to vastly change the public perception of smoking. Should we lump this effort in with MKULTRA?
I'm not sure that "human-animal hybrids" are real in any non-pedantic way. Even so, not sure what the control aspect is meant to be here.
MKULTRA, however, clearly was about control. The correct conclusion about MKULTRA should be that the CIA needed more oversight, since they were apparently willing to torture people and perform highly unethical experiments to learn how effectively an individual could be controlled. Often glossed over is the fact that these experiments largely didn't work at all. No progress on "mass control" was ever made, and instead, all we're left with is government abuse and torture in pursuit of something that probably isn't possible.
Are you familiar with Hannah Arendt's book, Eichmann in Jerusalem? The narrative is that the Nazi regime was comprised of completely normal people. She then examines what the implications are in a world where perfectly normal people perpetrate the Holocaust.
One addendum: it's worth keeping in mind the base rate for these awful gov't incidents you mention. If you only see the bad things, you have subconscious blinders (Kahneman's "What You See Is All There Is"). Try thinking about MKULTRA or Cuba or Tuskegee in relation to the much larger quantity and impact of positive things the government does, like international aid, or emergency services provided by the Coast Guard. Responses to earthquakes in poor countries, our military is there. Vaccinations, food drops, all kinds of things that the gov't does that are good.
You have to consider these things holistically to get a good picture, otherwise I agree it's very easy to rabbit hole down a conspiracy path. And it's attractive -- it means "you get it" while other people don't -- but it's not an accurate representation of the whole, which makes it a bad model to allow to fester in your mind.
I think it's pretty clear that no group of humans is running the show, where it matters. Humanity is too large, with too much complexity, too many interconnections and Nth-order side effects, for any group of people to be running it.
We are, for better or worse, cruising on a giant ship with limited steering, and all we can do is try to control how much fuel we shovel into the engine.
But the direction of the boat is controlled more like an Ouija board; our direction is an emergent behavior.
When I was a developer for a publisher, I could choose the race, gender, interests, location, and career of people to target my ads with. As long as my ads didn't explicitly break their TOSes, I was good to go. You don't have to be rich and powerful to gain influence over people. A modest budget will give you this access, because the rich and powerful will sell the access to you. They aren't even necessarily interested in using it directly, because there's more money to be made and less regulatory pressure if they're middlemen.
Is it weird that he comments on that, and not on the fact that it has become the most advertised interracial coupling even though IRL it's the 2nd least common interracial coupling?
It's weird that he (and you) comment on that because if someone is extremely concerned about the media promoting miscegenation, that's usually a good proxy for them being a racist cretin.
I simply pointed out a fact: there is a significant discrepancy between the rate of black male + white female couplings IRL and in the media. I am passing no judgement on it.
Considering how you frothed at the mouth over this fact being mentioned, you are no better than those 'racist cretins' you try to insinuate I am one of.
Do you actually have any valid arguments to dispute the claim, or only shaming and ad hominem attempts to suppress it?
To be frank, I don't think you've actually conducted an empirical comparison of "black male + white female couplings IRL and in the media", nor do I think such a discrepancy would be worthy of comment even if it exists, and perhaps most importantly, I do not believe in the innocent motives of anyone who brings up the subject and then acts like they are only attempting to make an innocuous anthropological observation.
I'm not interested in a 'debate' about race-mixing with you. I just want to point out that when you tell on yourself like this, both online and IRL, people notice, and we'll treat you accordingly.
In defense of the guy that brought it up: race-mixing is a huge scissor statement. It's already working here - you're getting your socks in a tizzy over a mention of it and can't get past the object-level. The meta-level is it doesn't matter what concept they use - race, gender, sex, identity - posters can and will use it to make you fight people you would otherwise get along with.
Now that the drama is over and I won't be accused of lying, I'd like to point out that I'm European and a child of a mixed marriage myself. So the reaction of this wannabe Jihadi John SJW is even more troubling.
As for the scissor statements, here's another good one: 'Women have smaller brains than men'. It is true (on average) but it will draw out mouth-frothing SJWs without fail.
Yet you keep commenting while avoiding an answer. You keep going in circles, trying to insinuate things about me based on the topic, while working hard to pretend the topic is irrelevant. Do you even see the contradiction in your actions?
> I'm not interested in a 'debate' about race-mixing with you. I just want to point out that when you tell on yourself like this, both online and IRL, people notice, and we'll treat you accordingly.
Ah yes, an upgrade from ad hominem to attempts at silencing through threats. Keep going. You think people don't notice _that_?
There is a significant discrepancy between coupling rates of black male + white female IRL (2nd least common pairing) and in media.
I misread your question, so my reply was unnecessarily snarky.
Black male + white female is the 2nd least common coupling in the USA; this, at least, is a fact unrelated to my opinion.
The comparison to media rates is based on Hollywood blockbusters and the Netflix lineup. While the numbers for Hollywood are lower than for Netflix, black male + white female is the 2nd most common coupling in Netflix originals.
P.S. I am European and a child of a mixed marriage myself, so this is no 'keep America white!' effort. It's just that the idea of 1984-style 'forbidden truths' rubs me the wrong way.
Where's the ad hom? I'm not implying anything about the commenter, and I clearly didn't state anything about them. I just wanted more information about why that specifically is an example of an attempt to move the Overton window. Given the premise of the comment--that immensely wealthy and powerful (but apparently nameless) organizations are shaping public opinion of the masses via social media and advertising--it's strange that they would choose to promote positive feelings about miscegenation or black men in general over, I don't know, foreign policy, consumerism, or expansive federal power. It's weird. I called it weird. I stand by it.
For crying out loud, the commenter didn't even establish with evidence that it really is a pattern, or what "their" goal could even be.
EDIT Just noticed your username. Congrats, you got people riled up.
Your post is also an attempt to “reinforce complex concepts in our minds subconsciously” isn’t it? The entire field of marketing is an attempt to influence people to engage in a particular behavior. Why single out hedge funds and intelligence agencies?
I mean, how large is the divide between large corporations and government agencies and the military really? PBSUCCESS was quite some time ago, but is there reason to believe that this has fundamentally changed?
They certainly won't touch somebody for posting a negative yelp review, but if significant business interests align, does anybody doubt that people get vanished?
Maybe not the black male part. But the "competent woman" and "bumbling helpless male" is an extremely common TV commercial trope once you're paying attention.
It's not really a commercial trope so much as a TV sitcom trope, going back at least as far as The Honeymooners.
It's also not as common as it used to be, as far as I can tell.
Edit: I've been thinking about it, and I don't think I can recall ever actually seeing "black male rescuing a white woman from a white male" being used in an advertisement, anywhere. Typically such a motif is used to present the black male as a threat, not as somehow heroically superior to his white counterpart.
Even then I can only recall its use in political advertisements (such as the infamous "Willie Horton" ad) and maybe very old movies.
> One that comes to mind is a motif you will see in advertisements once you are more aware of it: strong healthy black man rescues white woman from a bumbling and pathetic white male.
Ads are generally tailored to the user's preferences.
No comment on the race aspect of this, but advertisers, marketers, and salespeople have always benefited from making people feel unhappy. It's one reason so many people loathe them.
It's a very simple and common tactic you will see everywhere: make the customer feel inadequate, then present the product as a solution to their inadequacy. You don't need to directly attack them as inadequate. You can simply present a scene or situation and let the customer's own insecurities and lizard-brain desires fill in the gaps, associating the product with a solution (e.g. a liquor commercial that shows a couple having a great time partying with friends).
I'm not buying a lot of this "motif" usage stuff, but if you want to speak entirely speculatively, I'll bite.
Marxism extensively argues that it is easiest to oppress people when you categorize them differently. An example would be how poor whites weakened the rights of freed slaves in post-Civil War America. Marx would argue the rich propagated racist beliefs to poor whites, who then limited the rights of other poor people. The end result is that the poor limit their own democratic representation. Scientific Socialism is an essay by Engels (co-author of the Communist Manifesto) where he argues all oppression is based on sexism (infantilization, dehumanization, objectification) and shows how the concepts of oppression are modified for race and class, so each group appears different but is actually treated the same. How this would apply to advertisers might be to preserve the status quo, or to make people feel bad so they buy compulsively.
Not saying I agree with every aspect of those arguments, but it is a well documented attempt at answering your question.
I would start taking these kinds of arguments more seriously if Amazon stopped sending me washing machine ads AFTER I already bought one on Amazon. No, I am not becoming a washing machine aficionado.
You are choosing to believe the advertisements of advertising companies about their own power and efficacy. That these claims feel dystopian to you is not relevant to whether they're true.
I see the line "they showed me an ad for X after I bought one on their site!" thrown around a lot as a sign of incompetence, but the fact that you made the purchase recently actually makes you a good target demographic.
You wanted some good and paid money for it. There is now a real chance that the good you bought did not meet your needs: it broke, it sucks, etc. If so, you're a prime target for these ads. If it broke, you're probably looking for a new brand, or maybe it just isn't that great and you're considering returning the item and making a new purchase. While you may not need one, you're part of a demographic that responds well to these ads relative to the rest of the population.
lol yep! What's sad about this is that I totally believe that fb is doing pretty manipulative things - they have the biggest incentive in the world to get people to buy things - and that it is probably adjacent to the level of manipulation this person is talking about. But then, instead of realizing that for-profit companies are bad, some alienated people start thinking "maybe race mixing is the problem", project that onto ads, and that's how you get white nationalism.
Location data tells me how often you go to the gun shop and the shooting range, when your house is empty, which political protests you go to, how often you visit the pharmacy/doctor, what prostitutes you visit, what gay bars you visit, how often you break the speed limit, how much sleep you get, when you visit your lawyer or your STD clinic, whether you've kept up going to Alcoholics Anonymous, what restaurants you go to and how often, how often you're late for work, and how often you visit your grandma.
I'm not an Iranian gay gun owner, but I still have a problem with an advertising company assembling a database of Iranian gay gun owners.
I believe the ad tech industry would literally show alcohol ads to recovering alcoholics if A/B testing or 'machine learning' said they converted at higher rates.
I believe the ad tech industry does not protect medical and political data with the appropriate level of privacy protection. So even if they wouldn't sell their database of gay Iranians to Iran, I think they'd get hacked by them.
And I believe some companies probably would sell their gun ownership database to a government that wanted to arrest gun owners, and their gay bar customer database to a government that wanted to arrest gay people.
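To show how little machinery the location inferences described a few paragraphs up actually require, here is a rough sketch of my own, not any data broker's real pipeline: raw pings matched against a handful of points of interest. The coordinates, radius, and POI names are all fabricated for the example.

    import math
    from collections import Counter

    # Made-up points of interest; a real data broker would license a POI database.
    POIS = {
        "gun_shop":   (40.7310, -73.9980),
        "pharmacy":   (40.7350, -73.9910),
        "gay_bar":    (40.7400, -74.0010),
        "aa_meeting": (40.7290, -73.9950),
    }

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two coordinates, in metres."""
        r = 6_371_000
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def infer_visits(pings, radius_m=50):
        """Count pings that land within radius_m of each point of interest."""
        visits = Counter()
        for lat, lon in pings:
            for name, (plat, plon) in POIS.items():
                if haversine_m(lat, lon, plat, plon) <= radius_m:
                    visits[name] += 1
        return visits

    # One person's pings for a week (fabricated):
    pings = [(40.7311, -73.9979), (40.7351, -73.9912), (40.7311, -73.9981)]
    print(infer_visits(pings))  # Counter({'gun_shop': 2, 'pharmacy': 1})

A distance check and a counter are the whole trick; everything sensitive comes from the data itself, which is exactly why collecting it at all is the problem.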
Or rather, would you rather have useful advertisements or spammy advertisements?
I run an adblocker, so you can guess my answer to that!
Ah, the old "I have nothing to hide". Just one of many obvious rebuttals: We need to be a large anonymous crowd to protect those among us most in need of anonymity: whistleblowers, activists, stalking victims, to name a few.
There are many reasonable proposals in the article but they are only wishful thinking: there is no way the USA passes a law similar to GDPR, no matter what people want and how many similar articles appear.
As for the GDPR: at least in Germany, it's problematic. Our system typically relies on competitors to enforce compliance with the law (via so-called "Abmahnungen", cease-and-desist warnings based on the "Gesetz gegen den unlauteren Wettbewerb", UWG for short, the law against unfair competition). One court recently ruled that GDPR violations don't fall under those laws (https://www.datenschutzbeauftragter-info.de/landgericht-stut...).
That leaves us with:
- reporting violations to the authorities. They are chronically understaffed, have little technical expertise, and take months to years to act. They are very hesitant to hand out fines, though theoretically they can.
- individual citizens suing a company to force it to abide by the law. This is rare because the citizen has to cough up the money to go to court, and even if they win, the company will only be forced to abide by the law with respect to that one citizen, not in general.
- publicly shaming companies into compliance.
A higher court might have different opinions, and I very much hope they will, because GDPR quickly becomes meaningless without enforcement.
Edit: I have literally no idea why this is downvoted. Unless it's just because you personally don't like me, please leave a comment explaining what is incorrect.
Unless the site is only available to German users, you should be able to file a complaint with any of the EU member state regulators, and not all are so timid as that.
NOYB, the non-profit org founded by Max Schrems, has already been filing complaints with the French, Austrian, Belgian and German authorities: https://noyb.eu/
> Unless the site is only available to German users, you should be able to file a complaint with any of the EU member state regulators, and not all are so timid as that.
You can, but they will forward that to the applicable authority, which will be the local German one for German sites.
For very large companies that do business in many countries regulators have various concerns (power of the company relative to the country, jurisdiction shopping via choice of headquarters location, etc) that don't apply for the typical case of a German company with a German audience.
This is based on filing complaints and being informed that they were forwarded to the local authority that has jurisdiction.
https://gdpr.eu/article-55-supervisory-authority-competence/ also states: "Each supervisory authority shall be competent for the performance of the tasks assigned to and the exercise of the powers conferred on it in accordance with this Regulation on the territory of its own Member State", and Article 56 adds to that.
Well, we'll see what the CCPA looks like in the end. If another state passes a different law, a federal law will follow so as to harmonize the requirements. If no other state passes a different law, the CCPA will become a de facto federal law, because you ignore California to your own detriment.
s.skimresources.com, sb.scorecardresearch.com, secure-cdn.mplxtms.com, z.moatads.com, ml314.com, secure-us.imrworldwide.com, www.googletagservices.com, www.google-analytics.com, www.dianomi.com, d1z2jf7jlzjs58.cloudfront.net, static.chartbeat.com, assets.adobetm.com, platform.twitter.com, www.queryly.com, cdn.polyfill.io, www.lightboxcdn.com, content.jwplatform.com, platform.instagram.com, www.inc.com, images.fastcompany.net, connect.facebook.com, cdn.conversant.mgr.consensu.org, cdnjs.cloudflare.com.
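For what it's worth, if you'd rather not hand these domains your data at all, one crude option is to null-route them at the DNS level. The snippet below is just a sketch that emits /etc/hosts entries for the obvious trackers from the list above; it deliberately leaves out images.fastcompany.net and the library CDNs (cdnjs, polyfill, jwplatform, cloudfront), since blocking those is likely to break the page itself.

    # Sketch: print /etc/hosts entries that null-route the tracking domains
    # from the list above. First-party assets and library CDNs are omitted.
    TRACKER_DOMAINS = [
        "s.skimresources.com", "sb.scorecardresearch.com", "secure-cdn.mplxtms.com",
        "z.moatads.com", "ml314.com", "secure-us.imrworldwide.com",
        "www.googletagservices.com", "www.google-analytics.com", "www.dianomi.com",
        "static.chartbeat.com", "assets.adobetm.com", "platform.twitter.com",
        "platform.instagram.com", "connect.facebook.com",
        "cdn.conversant.mgr.consensu.org",
    ]

    for domain in TRACKER_DOMAINS:
        print(f"0.0.0.0 {domain}")  # append the output to /etc/hosts by hand

A browser-level blocker is more flexible (it can block by URL pattern, not just hostname), but a hosts file catches every app on the machine, not just the browser.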