Below are the google_gid values for different publishers. There is no proof of overlap; they assign a different google_gid to the same person for each publisher, which is exactly what Google describes. [1]
This log [0], right? Did you miss in the article that it's the `google_push` identifier that's being used for syncing between adtech companies? If you search for it (AHNF13KKSmBxGD6oDK9GEw5O0kvgmFa3qM30zpNaKl72Og), you can see it being included in requests to lots of different adtech firms' domains.
Another benefit of google_user_id over Google's Ad Manager cookie is that it expires after 14 days. After 14 days you get a new google_user_id for the same user, so syncing between adtech companies does not have much value.
You are implying that this mechanism is already used by adtech providers, but we have no proof of that. Those players are often competitors that are not working together, so they won’t share their user data (and this is not a user identifier, only a “page load” identifier).
If they want to sync their user IDs (because one is buying inventory from the other), they can launch a cookie sync between themselves (same process as with google_gid: a persistent user identifier, much more efficient),
which is the "workaround" to the gdpr the article badly describes (probably because brave upcoming ad network will do the same but more workaroundily)
Now those are used to match a third-party ID. You just need a gdpr_workaround schema in your database with two columns, user-id and google-random-id, with N-to-1 records indexed both ways.
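A minimal sketch of the two-way mapping table that comment describes (table and column names are hypothetical, not from any real system): many short-lived random IDs resolve to one durable internal user ID.

```python
import sqlite3

# Hypothetical "gdpr_workaround" mapping: several Google-side random IDs
# can point at one stable internal user ID (the N-to-1 direction).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE id_map (
        user_id TEXT NOT NULL,
        google_random_id TEXT NOT NULL UNIQUE  -- UNIQUE gives an index this way
    )
""")
# Index the other direction too, so lookups are cheap both ways.
conn.execute("CREATE INDEX ix_user ON id_map (user_id)")

rows = [("u-123", "rnd-aaa"), ("u-123", "rnd-bbb"), ("u-456", "rnd-ccc")]
conn.executemany("INSERT INTO id_map VALUES (?, ?)", rows)

# Random ID -> stable user: the "de-randomising" lookup.
(user,) = conn.execute(
    "SELECT user_id FROM id_map WHERE google_random_id = ?", ("rnd-bbb",)
).fetchone()
print(user)  # u-123
```

The point of the sketch is that rotating the random ID does nothing once any party holds this table: every fresh random ID is just one more row under the same stable user.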
The GDPR has restrictions on pinpointing a single person. This is effectively doing that, while claiming it is not because the IDs are random. Apple is just a little better with how its device advertiser ID works.
Well, the “right” workaround is an opt-in system. But that would drastically reduce the number of qualified ad prospects, reducing their wholesale value, killing the online ad business, drying up the websites themselves who exist for this revenue (some/many of which are trash, but not nearly all).
I don’t think we can have it both ways, or at least it is very difficult and we don’t have a great compromise solution.
I know that I'm not interested in "compromising" with the ad-tech industry. They've been spending too much time and money attacking my defenses against their terrible practices for me to treat them as anything but an attacker.
Not that I support the ad-tech industry, but those consumers probably are interested in having their favorite websites being kept alive. Which implies that they might indeed be interested in "compromising" with ad-tech industry.
It isn't about giving ad-tech any chance of anything; it is about websites that are liked and used by people (most of whom have either no means or no desire to support those websites with money directly) being able to sustain themselves in order to exist.
If only there were other models for ad sales, say ones that were successfully used for decades prior to the advent of the internet and ubiquitous surveillance, that could be used instead of said ubiquitous surveillance...
But no. The internet enabled vast, invasive user tracking, therefore vast, invasive user tracking is the only conceivable way to sell advertising.
That's pretty much a false dichotomy: a site must either support itself via ads, or cease to exist.
There are other ways to get money to support your work, and if those ways are too painful right now, that's just an opportunity for disruption. Even better, it's an opportunity to prove that disruption doesn't have to be exploitative.
There are entire markets that cannot be accessed by publishers unless they subsidize content with ads. That is not a false dichotomy, that is a market requirement.
Not every website is the WSJ or Bloomberg, which cater to markets that are willing to pay for content.
Not saying that it has to be ads only. If there comes a disruptive alternative revenue model that allows all those websites to self-support themselves, I will be one of the first people to jump the ship and advocate for the ban of ads in favor of that new model.
To be honest, that doesn't matter to me. I think that websites who inflict the ad-slingers on their readers are showing great disrespect to and disregard for their readers.
> but those consumers probably are interested in having their favorite websites being kept alive.
I'm one of "those consumers" and I'm actively looking for sustainable ways to pay content producers.
Here's what I do currently:
- subscribe to two newspapers in addition to the mandatory payments to the national news broadcaster.
- donate to the Guardian
- buy on Blendle
If there was a way to pay for single pay-walled stories I would probably use it a few times a week in addition to my current subscriptions.
I'm not interested in any more subscriptions (unless they are all inclusive like Spotify so I can cancel my current subscriptions, and even then I'm not sure since I actually want to support those two papers and think I do so better through direct payments than through revenue sharing through a huge international tech company. )
More generally, if enough people (including the author) think the content has merit, they will choose to support it (by which I mean “collectively supply all the resources it needs to continue”).
The cost of running a basic website to publish text is modest. Tools like Dat and Scuttlebutt make it completely free (once you have a computer and any internet connection) to distribute content to people who actually want it.
On the other hand, if you want to make a living out of producing content (rather than wanting to publish the content purely for its merit), that is harder — the content has to be that much more valuable to enough people.
As long as individuals can publish stuff, and others can see it and choose whether to support it financially (all without 3rd parties mediating/filtering), then I'm content. Our distributed tools make that possible; we just need to make them easier and more ubiquitous.
> probably because brave upcoming ad network will do the same but more workaroundily
AFAICT Brave's plan is to send a block of potential ads to the client and use a client-side machine learning algorithm to choose specific ads. So the claim is that neither client events, inferences from the algo, nor ad choices travel from the client to the ad networks. (But ad networks retain their crazy microtargeting, which I guess is the selling point.)
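A rough sketch of that model as I understand it (all names and the scoring rule are my own stand-ins, not Brave's actual code): the network ships a block of candidate ads, and selection happens entirely on the device, so the profile never has to leave it.

```python
# Hypothetical client-side selection: the candidate block arrives from the
# network; scoring happens locally; only the chosen ad is rendered, and
# nothing about the user's interests is sent back.
candidate_ads = [
    {"id": "ad-1", "topics": {"cars", "travel"}},
    {"id": "ad-2", "topics": {"cooking", "travel"}},
    {"id": "ad-3", "topics": {"finance"}},
]

local_interests = {"travel", "cooking"}  # stays on the device

def score(ad):
    # Stand-in for the client-side ML model: simple topic overlap.
    return len(ad["topics"] & local_interests)

chosen = max(candidate_ads, key=score)
print(chosen["id"])  # ad-2
```

The privacy claim rests entirely on the last step: the selection result and the inputs to `score` are never transmitted, only displayed.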
Even if Brave were to choose the blocks of ads based on geolocation and other install-time/runtime data which they then sell to third parties, it's still significantly less data leaking from the client's browser compared to, say, a default Chrome install. But them storing/selling that would be a clear GDPR violation as well as going directly against all their explicit public claims so far.
What is your understanding of Brave's upcoming ad network that leads you to believe it requires a surreptitious GDPR violation?
I'm an engineer who has worked on ad systems like this and I'm really struggling to make sense of this article - what hope does a layman have?
Here's my understanding: Google runs real-time bidding ad auctions by sending anonymized profiles to marketers, who bid on those impressions. The anonymous id used in each auction was the same for each bidder, which is in violation of GDPR. If Google were to send different ids for each bidder, it would be ok? Is this correct?
Why would it matter that the bidders are able to match up the IDs with each other, aren't they all receiving the same profile anyway? Wouldn't privacy advocates consider the sending of the profiles at all an issue?
This is a problem because companies can use this ID to correlate private user data, without anyone's knowledge or consent.
There are companies that specialise in sharing user information. Some of them work by only sharing data with companies that first share data with them (an exchange).
If you got this Google ID, and you had a few other pieces of information about the user, you could share that data with an exchange, indicating that the Google ID is a unique identifier. Then, the exchange would check if it has a matching profile, add the information you provided to that profile, and then return all of the information they have for that profile to you.
So, let's say you're an online retailer, and you have Google IDs for your customers. You probably have some useful and sensitive customer information, like names, emails, addresses, and purchase histories. In order to better target your ads, you could participate in one of these exchanges, so that you can use the information you receive to suggest products that are as relevant as possible to each customer.
To participate, you send all this sensitive information, along with a Google ID, and receive similar information from other retailers, online services, video games, banks, credit card providers, insurers, mortgage brokers, service providers, and more! And now you know what sort of vehicles your customers drive, how much they make, whether they're married, how many kids they have, which websites they browse, etc. So useful! And not only do you get all these juicy private details, but you've also shared your customers' sensitive purchase history with anyone else who is connected to the exchange.
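The exchange flow described above can be sketched roughly like this (entirely hypothetical names and API, just to make the mechanics concrete): every participant contributes a fragment keyed by the shared Google ID and gets back everything accumulated so far.

```python
# Hypothetical data exchange: profiles are merged on the shared ID, and
# each contributor receives the combined profile in return.
exchange = {}  # google_id -> merged profile

def contribute(google_id, fragment):
    profile = exchange.setdefault(google_id, {})
    profile.update(fragment)   # add what this party knows
    return dict(profile)       # hand back everything known so far

# A retailer and a bank each hold the same broadcast ID for one person.
contribute("gid-42", {"email": "a@example.com", "purchases": ["shoes"]})
merged = contribute("gid-42", {"income_band": "high"})
print(sorted(merged))  # ['email', 'income_band', 'purchases']
```

Note how the shared ID is the only thing doing the work here: without a common key, neither party could join its fragment to the other's.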
I have no doubt that if you had a record of my browsing habits for 2-3 days you could readily identify who I am the next time you have my browsing habits for that period of time.
I wouldn't be surprised at all if 2-3 hours of active browsing was enough for this.
It seems likely that the ad network could detect the change in ID if the expiration happens in the middle of a browsing session. And considering user habits, people are probably online at the same time every day, or have habits that cycle weekly.
Also, considering we largely do the same things every week and every day, I suspect a single day would give you at least 50% of a user's identifying data, and a week at least 80%. That leaves a whole week of pretty accurate tracking.
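A toy illustration of that linking idea (my own construction, not a described system): if the browsing histories seen under an expired ID and a fresh ID overlap heavily, treat them as the same user and the 14-day rotation buys nothing.

```python
# Hypothetical histories observed under an old (expired) ID and a new one.
history_old_id = {"news.example", "forum.example", "shop.example", "mail.example"}
history_new_id = {"news.example", "forum.example", "shop.example", "video.example"}

def jaccard(a, b):
    # Set similarity: shared sites / all sites seen under either ID.
    return len(a & b) / len(a | b)

# High overlap suggests the rotated ID belongs to the same habitual user.
same_user = jaccard(history_old_id, history_new_id) > 0.5
print(same_user)  # True
```

Real re-identification systems would use far richer signals (timing, IP, fingerprints), but even this crude overlap test shows why habitual behaviour defeats periodic ID rotation.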
I think you've made a pretty wild claim that 14 days isn't enough time to build a useful profile. Regardless, even if the usefulness of the data over two weeks is questionable, it's still illegal to share the data in this way. You wouldn't be too happy if someone broke into your house and "only" stole a single fork.
Considering how much time many people spend online, and how efficient these profiling systems have become, I wouldn't be surprised if 14 days was plenty of time.
The time of validity and how hard it might be to build a profile are not factors in whether this is legal under the GDPR. Here's the actual text from the GDPR on pseudonyms and synthetic keys of this type:[1]
> The principles of data protection should apply to any information concerning an identified or identifiable natural person. Personal data which have undergone pseudonymisation, which could be attributed to a natural person by the use of additional information should be considered to be information on an identifiable natural person
So PII that has been pseudonymised (mapped to a gid in this case) is protected in exactly the same way as if it had not been, so long as the pseudonymised data could be mapped to a natural person by the use of additional data. The pseudonym (the gid) is itself also considered PII under the GDPR.
[1] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CEL...
> The pseudonym (gid) is itself considered PII under GDPR.
I know of multiple systems that use a UID but throw away a user’s information, including the UID mapping, when the user leaves. This allows historic metrics to be retained without ever identifying a user who isn’t still using the system.
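A sketch of that pattern (all names hypothetical): aggregate counters are kept forever, while the identifying UID mapping is destroyed the moment the user leaves, so historic metrics can no longer be tied to anyone.

```python
# Hypothetical metrics store: aggregates survive; the identifying mapping
# is destroyed when the user departs.
uid_to_user = {"uid-7": "alice@example.com"}
daily_active_count = {"2019-01-01": 1}  # aggregate only, contains no UIDs

def forget_user(uid):
    # Drop the only link between the UID and a person.
    # Historic aggregates are left untouched.
    uid_to_user.pop(uid, None)

forget_user("uid-7")
print("uid-7" in uid_to_user, daily_active_count["2019-01-01"])  # False 1
```

Under the GDPR framing quoted above, this works because once the "additional information" (the mapping) is gone, the retained data can no longer be attributed to a natural person.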
Thank you: that explanation is the first that makes sense to me.
I get the impression that this structure would require an exchange: retailers would not trust each other otherwise.
Wouldn’t commercial pamphlets, interviews with salespeople, etc., from the exchange be obvious proof of illegal behaviour there? Google’s implementation is imperfect, but for the loophole to work it would need coordination between several competitors and a third party whose business model is explicitly and almost exclusively about circumventing the GDPR.
If I can risk a comparison: Google would be like a chemical company selling fertilizer, and the exchange would be like someone selling bombs made from the raw material others bought.
Am I missing the point? Shouldn’t this article be about those exchanges and their clients, not Google?
> Why would it matter that the bidders are able to match up the IDs with each other, aren't they all receiving the same profile anyway?
I would guess that yes, they're all receiving – _from Google_ – the "same profile" but they also are collecting additional info that they can then share with each other and, because they can match profiles exactly, they can access each other's info about specific people.
> Wouldn't privacy advocates consider the sending of the profiles at all an issue?
I'd imagine that the profile Google has and shares is by itself fairly anodyne, but I could be (very) wrong about that. The problem seems to be more (if not entirely) that different advertisers can share info using a common profile ID.
I'd imagine that even a single advertiser would be able to perform a similar 'attack' by, e.g. running multiple different campaigns, but I may be misunderstanding exactly what info is being shared. It's possible advertisers are able to match the Google profiles to specific unique identities and thus are sharing much more than just the info they're collecting directly from their ads.
I'd imagine they are responsible too, not just alone, and that Google is a much more attractive target for GDPR enforcement both because they're larger, have more money, are more visible, but also because they're directly facilitating the "different advertisers" sharing that info.
If Google ceases to provide them the means of readily sharing info then all of those entities will no longer be violating the GDPR, in the scenario anyways.
Are they maybe only receiving a partial profile, with info relevant to that ad buy? And by compiling that data with the unique identifier, they can match it with other partial data from other ad buys?
I'm glad this story was reported, and I'm thankful to the author for putting in the work required to report this story. But after the first five paragraphs, the author's shameless, repetitive self-promotion and insistence on referring to himself in the third person almost made this unreadable.
The headline was enough to pique my curiosity to explore Brave's product offering. Unfortunately, actually reading the article had the exact opposite effect.
I thought the exact same thing after reading the first few paragraphs but didn't even notice that the author IS Johnny Ryan, the person mentioned in the story, until you pointed it out.
I didn't make it to the end, closed the tab and went over to HN comments for a summary.
The only thing Google did in regard to the GDPR was limit the number of parties in RTB it includes by default for syncing to a "trusted set" of parties.
I think the silent/invisible nature of cookie syncing is what upsets people when they discover it.
The diagrams in your link show a single hop for the 302; in my experience that can be many hops going between different advertisers. The same thing happens on non-Google platforms, like TradeDesk and others.
The sync scenario can make it next to impossible to delete cookies when those cookies can be rebuilt using data from others.
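Roughly what such a multi-hop sync chain looks like (hypothetical hosts and parameters): each partner 302-redirects the browser to the next one with the IDs collected so far appended, so every hop learns every earlier pairing.

```python
from urllib.parse import urlencode

# Hypothetical sync chain: each partner redirects the browser onward,
# carrying every previously collected user ID in the query string.
partners = ["a.example", "b.example", "c.example"]
ids = {"a.example": "A111", "b.example": "B222", "c.example": "C333"}

hops = []
carried = {}  # IDs accumulated across the redirect chain
for host in partners:
    carried[f"{host.split('.')[0]}_uid"] = ids[host]
    hops.append(f"https://{host}/sync?" + urlencode(carried))

print(len(hops))           # 3
print("A111" in hops[-1])  # True: the last hop still sees the first ID
```

This is also why deleting cookies helps so little: any one partner in the chain can re-seed the others from its own copy of the pairings.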
I think the HN community, and most consumers, tend to look at things from only one angle. Imagine you start work at some small shop that manufactures widgets for consumers. What would you do when you have to advertise your product? You'd have to turn to Google or a similar company.
Are there any real alternatives? (I am asking because I really want to know)
I say this because I am in this position now. I have to figure out how to advertise my company's products and am torn on how to go about it.
The alternative is to spend hundreds of hours finding widget-related websites, trying to contact the owner(s), negotiating what ad spots are available, what ads are acceptable to run, and what pricing/terms will work for both parties, then managing that relationship over time to ensure ads are actually being displayed, being paid on time, contracts renewed, etc.
It's definitely possible, but you're just doing everything manually that ad networks do for you. Whether that is worth your time (or worth it to hire someone to do this kind of thing for you...) is up to you.
At which point you'll very likely learn that a lot of widget-related websites use ad networks because it saves the 2-3 administrators involved a lot of time and energy.
It's definitely possible for small websites to do ads directly, but it's a lot of work. Often more than is justified for a few thousand dollars a year in ads.
Yeah and there's a reason the tech moved on from that. It was a LOT of work on both ends to negotiate and monitor the relationship. Instead now we have a central broker who both parties work with that has set up a computerized way to manage these relationships.
Personally, I think the solution that lets us keep ad-supported content and easy ad placement would be for Google to force companies to provide bots that Google runs internally, so the profiles never leave Google's datacenters, and to strictly monitor the output so the buyer bots don't leak information back to the companies. I think that would do a lot to alleviate the privacy concerns and breaches, and it is honestly how I thought ads were being sold for the longest time, instead of profiles being sent to companies buying placement.
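One way to sketch that proposal (my interpretation; the API is entirely hypothetical): the buyer supplies a bidding function that runs inside the exchange, sees the profile there, and is only allowed to return a price.

```python
# Hypothetical sandboxed auction: each buyer's bot sees the profile only
# inside this function call, and the exchange lets nothing but a number out.
def run_auction(profile, bidder_bots):
    bids = {}
    for name, bot in bidder_bots.items():
        bid = bot(profile)       # profile stays in the datacenter
        bids[name] = float(bid)  # strictly a price may leave the sandbox
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

bots = {
    "acme":   lambda p: 1.20 if "cars" in p["interests"] else 0.10,
    "globex": lambda p: 0.80,
}
winner, price = run_auction({"interests": {"cars"}}, bots)
print(winner, price)  # acme 1.2
```

The hard part in practice is the "strictly monitor the output" clause: even a single float per auction is a covert channel, which is why the monitoring would have to be far stricter than this sketch suggests.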
> Yeah and there's a reason the tech moved on from that. It was a LOT of work on both ends to negotiate and monitor the relationship. Instead now we have a central broker who both parties work with that has set up a computerized way to manage these relationships.
I'm not disputing the necessity of a central broker. Contextual ads based on search keywords or website content used to work fine without surveillance, and can perfectly well be automated by a central broker.
Years ago, I didn't have much issue with online ads (with the exception of popups and spam emails). Nowadays, I'm forced to block them altogether to avoid the extensive surveillance by adtech. It doesn't have to be this way if adtech respected user privacy.
> I think the solution that lets us keep ad supported content and easy ad placement would be for Google to force companies to provide bots they could run internally so the profiles never leave Google's datacenters
Honestly, that wouldn't do much to alleviate my privacy concerns, as it does nothing to protect my privacy from the likes of Google (or other ad-slingers).
Native ads like those can certainly be automated to a great degree, and at the very least use a self-serve interface. "Advertise with us!" links in the sidebar or footer or wherever.
AdSense used to do this for you before it tracked individuals...
You can still target adverts at content on a site, and have an aggregate system to make that easy and granular.
There could even be services which collected lists of websites and categorised them, and the rough demographics of their audience, and retailed slots.
Individual tracking is unnecessary financially; you can make as much revenue without it. See the example of The New York Times dropping tracking for EU readers for GDPR reasons and continuing to grow ad revenue: https://digiday.com/media/gumgumtest-new-york-times-gdpr-cut...
You advertise on a website for widget fans. That's how it successfully worked for a long time.
The whole point of targeted advertising is to allow adtech companies to identify users of the "widget fans" site, and then advertise to the same people on another, cheaper site.
Targeting existed long before computers. The difference today is the massive amount of very personal data being collected on people without their knowledge or consent and the risks it puts on them.
Imagine if you went back to say the 1920s, and you told a marketing director that you could install a one-way mirror room and back door into every household in the country, and staff them all with analysts who will secretly observe people in their homes 24 hours a day, 365 days a year. People would think you were crazy. In fact, I bet most people today would be completely against that idea, yet wouldn't realize that what companies like Google and Amazon are doing today is effectively the same thing, except _much more_ invasive.
For actual, physical widgets the traditional advertising markets still work: trade magazines and trade shows. Contacting vendors who specialize in your market of interest and running promotions with them also works.
However, it's all a whole lot more expensive and effortful than running some Google ads.
I think the point that's missing from the discussion is that adtech companies should be targeting solely on placement (i.e. what ads should show up on a particular website based on the content of the website), and not select individuals (i.e. based on user interests regardless of the place on the web an ad is shown), when faced with this kind of legislation.
This is exactly how marketing worked before and how people can go about it now, as you describe, through traditional advertising markets.
I'd personally prefer it that way in general, but legislation is necessary for them to optimize on those restrictions.
And there lies the problem. The cost in both time and money is much higher. I'd love to run TV and magazine ads on all the shows and in all the magazines my customers are reading, it just costs a whole lot more. Way more than we have just starting out.
I've started a few successful businesses, and all I can say is what's worked for me.
I've never needed to turn to Google or other ad-slingers. Instead, I've done things the old-fashioned way, by going to where my potential customers tend to congregate and engaging with them (this kickstarts word-of-mouth, which is still the best advertising you can get), hosting my own online forums for customers, going to trade shows as appropriate, and supplementing everything with a few direct-placement ads in carefully selected media.
Yes, it's more work -- what Google et al. are actually selling you is convenience, after all. But the rewards in terms of ROI, as well as fostering a real community complete with evangelists, are more than worth it to me.
Trade "print" media with an online presence. Online media that doesn't sell all their pixels to Google? Radio, depending on your audience and required reach? Forums specific to your audience that don't sell all their pixels to Google. Podcasts. Submarine articles in the trades? Open source ad networks that don't embed insanity or real-time bidding?
Mostly, I would try to target your initial audience as precisely as possible where they live, rather than with a wide net. Perhaps a Google search returns results for top websites dealing with your product - if they are not vendors, then perhaps you advertise on that site?
Disclaimer: I'm not a growth hacker, but I've thought about these things and run a couple of poor Facebook campaigns for a brick and mortar business.
Look for genuine, verified success marketing stories for people in similar positions and follow similar strategies.
I've personally never heard of a success story for what you describe that involves paying google. But maybe they exist and they're just keeping it quiet?
> The evidence further reveals that Google allowed [...]
> Google has no control over what happens to these data once broadcast [...]
Is it possible that Google does have "control" over the data after broadcast, albeit legal control via contracts with advertisers (as opposed to technical control)?
Perhaps Google's GDPR compliance strategy relies on the participating advertisers to comply with their contract with Google. If that assumption is accurate, perhaps Google's advertisers are in breach of their contract with Google which makes it appear as though Google itself is in breach?
I could be off-base, the details in the article aren't incredibly clear to me.
(For the record, I don't like Google's business model and I don't like Google's pervasive tracking -- I'm playing devil's advocate to better understand the issue)
The real-time bidding on ad placements seems like something a user could never meaningfully consent to, as it literally feeds your info to a massive, ever-churning list of companies that get to bid on it.
Aka: you land on a site, it sends your IP and whatever identifiers it has to 10,000+ companies, who all then figure out whether they want to bid on showing you an ad.
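A stripped-down sketch of that fan-out (field names loosely modelled on OpenRTB-style bid requests, heavily simplified; hosts are hypothetical): one page load becomes a single payload broadcast to every connected bidder, shared identifier included.

```python
import json

# Hypothetical bid request: one page visit, broadcast to many bidders.
bid_request = {
    "id": "auction-001",
    "user": {"id": "google-gid-abc"},              # the shared ID at issue
    "device": {"ip": "203.0.113.7"},
    "site": {"page": "https://news.example/article"},
}

bidders = [f"https://bidder{i}.example/rtb" for i in range(3)]

# Every bidder receives the identical payload, identifier and all;
# whether they bid or not, they have now seen it.
payload = json.dumps(bid_request)
sent = {url: payload for url in bidders}
print(len(sent), all("google-gid-abc" in body for body in sent.values()))
```

The consent problem follows directly: the data is disclosed to every recipient before any of them decides whether to participate in the auction at all.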
Do you have to give consent for each individual third party your data gets shared with? I’d thought that if you give consent for some purpose, the company can use whatever processors it wants as long as it ensures they protect your privacy.
IANAL, but I have spent a lot of time reading the GDPR and associated guidance as the DPO for my small company.
As I understand it, you're correct. The Data Controller (Google) is responsible for getting consent, and the Data Processors (the third parties in this case) don't have to get consent themselves.
However, assuming Google's legal basis for processing your personal data is based on consent (rather than fulfillment of a contract or one of the other legal bases), then Google is required to get your unambiguous, opt-in, and non-coerced consent for each specific way your personal data will be used.
It seems likely that Google is covering themselves by acting as a Data Processor, not Data Controller, and the web site using Google is the actual Data Controller. In that case, the web site, not Google, is the one responsible for getting consent.
I seem to recall (correct me if I'm wrong) that European courts ruled that “agreeing” to a very-long EULA for desktop software didn't constitute informed consent, because it's trivial to demonstrate that the users didn't actually read the entire agreement — even if they scrolled to the end, it's unreasonable to believe that most people read 10,000 words in 15 seconds.
So I assume that eventually these performances of consent-gathering will be legally judged meaningless.
> Is it possible that Google does have "control" over the data after broadcast, albeit legal control via contracts with advertisers (as opposed to technical control)?
That distinction is important. I’m happy that privacy advocates are realising that platforms having access to enforceable contracts is key–too many are quick to paint platforms as the source of the problem and not an agent who, with the right tools, could organise a market.
Facebook has suggested having an independent court decide what is acceptable content on its platform. Google could think about delegating control of how its IDs are used to an independent entity with the power to audit its partners’ data practices properly.
The article doesn’t appear to claim that the author has been tracked in violation of the GDPR, only that the described mechanism makes it technologically feasible to do so.
Indeed. I fully admit I’d need to sit down and diagram some of Brave’s claims, but the large identifier screams “cryptographic entropy” to me.
The GDPR has separate rules that effectively deal with whether you are the business with which the customer works or one of their contractors. I could imagine a world where google is the second party and needs a secure feature like this so the first party can perform (consented) tracking across multiple domains they own. This is just a devils advocate argument since I can’t guess intent.
That is the GDPR-default though. You're still allowed to give data to third parties, you just need to have contracts with them regarding the handling, deletion etc of that data. Of course, mostly that's for data processors, which I don't think ad networks working with Google would fall under.
No, a data processor is any entity that collected personal data gets passed on to and where it is processed as part of the business arrangement. An ad network that receives personal data is definitely a data processor.
> No, a data processor is any entity that collected personal data gets passed on to and where it is processed as part of the business arrangement.
I'm relatively sure that there's another part: it's data processing for the client (here: Google) and the data cannot be used for other purposes. In this case, they don't process data for Google, they process it in cooperation with Google for the ad-buyers. Google also doesn't name them as data processors (which it would have to if that were their relationship).
If some contract was all it took, what would stop hospitals from selling patient info to insurance companies, saying "hey, they are processing the data, and we have a business agreement, this is all fine".
The 'client' in this case is the individual web sites that use Google's ad solutions -- not Google itself. It's the same thing with businesses' Facebook pages -- Facebook simply acts as a data processor in that case.
This is why nothing will be done about this. Sure, we might see a few smaller businesses fined, but the vast majority of sites using Google for ads will simply slip under the radar, while Google simply puts the blame on the site owners.
You are misreading the GDPR badly. A data controller can only pass on PII to a data processor. That is, any entity receiving PII from a data controller is automatically assigned this role by law. There are no alternative roles that could be assumed instead. This means a data processor must obey the rules the GDPR lays out for it, or it is in violation.
Oh, okay, I believe I understand your point and understand the misunderstanding. My point is that you can't just make everybody a data processor by signing a contract, share PII with them and be compliant (i.e. hospital sharing data with insurance companies). You're saying that by sharing PII with them, you're making them a data processor, but that says nothing about whether the DC or the DP are compliant.
I discussed the GDPR with you before, and based on your answer here and maxidorius's answer to you, I will not accept anything you say about the GDPR unless it is obvious or has references I can verify.
In the US, HIPAA would apply to individually identifiable health information. HIPAA Providers share information with other HIPAA-covered entities all the time under contracts where the associate entities (non-providers) agree to comply with HIPAA privacy rules.
Those are generally with patient's previous consent though, right? Things get a lot easier if you have somebody sign some documents before you start working on them.
Probably, I'm somewhat of a fundamentalist pragmatist ("this cannot be legal!" - "everybody does it, judges say it's okay" - "oh, I guess it's legal then :("), but in this case I'm not so sure. I still believe that Google does not consider them data processors (possibly because they don't consider a google_push id PII), because if they did, they'd have to name them in their privacy terms as entities they share data with. They don't. Of course, this might be because they don't care, but since it's a delicate issue and the stakes are somewhat high already, that doesn't sound plausible to me.
Pretty much all examples for data processing I've read are similar in this regard: the data controller (DC) passes data to the data processor (DP) so the DP can perform a specific task for them (handle invoicing, do analytics, run a web server, mail packages etc). The DP must not use the data for anything else, must not share the data with anyone (except for sub-processing, which has strict rules, too). "Exchanging/Syncing PII of users so we can create better profiles, more efficiently track them and show ads to them that are more personalized" doesn't fit the bill at all from what I understand. Similarly, landlords cannot get together and share all the data on their tenants to figure out who was a pleasant renter and who sued because the heater broke in winter.
So, in my understanding, even if you and I used the same invoicing provider, they wouldn't be allowed to tell me if they've invoiced a certain person for you previously, because we're different entities using them as a data processor and our data is to be kept separate. If we wanted to do data sharing (or even share aggregate probabilities like credit check agencies), we'd need a different construct, explicit consent and a bunch of additional compliance requirements.
Do they have to prove that the RTB ID can be used to retrieve PII? Or only that the RTB ID is correlated with personally protected information?
Is it enough that a RTB ID is pseudo-anonymous? (it always identifies the same person, but cannot be used to find that person's real information) - OR - is a RTB ID not even pseudo-anonymous?
A person is identified, if the ID references only one user in the whole dataset[1]. This also makes any information linked to the ID PII.
The ID would be pseudo-anonymous if one needed some extra data, to which they don't have access, to link the ID to one specific user in the whole dataset[2].
So to answer your question, RTB ID is not pseudo-anonymous as it only references a single user out of all of them.
[1] It's also important to understand the definition of PII in the GDPR context: any data that relates to an identified or identifiable person. Identifiable is the same as distinguishable. Knowing this helps to understand where the line is. https://www.lexico.com/en/definition/identifiable
(5) ‘pseudonymisation’ means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person;
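A toy sketch of what that Article 4(5) definition means in practice: data keyed by a random ID only counts as pseudonymised if the table linking IDs back to people is kept separately, under its own safeguards. All names and values below are invented for illustration.

```python
# Profile data held by one party, keyed only by a random ID.
records = {"id_7f3a": {"pages_visited": 42, "interests": ["sneakers"]}}

# The "additional information" from Article 4(5): held by a separate
# party, subject to separate technical and organisational measures.
key_table = {"id_7f3a": "alice@example.com"}

def reidentify(record_id, keys):
    """Attribute a record to a person, if (and only if) you hold the keys.

    Without access to `keys`, the holder of `records` cannot attribute
    the data to a specific person, so the ID is pseudonymous. With it,
    the ID becomes directly identifying.
    """
    return keys.get(record_id)
```

The point made above about RTB IDs is that there is no such separation: the ID alone singles out one user in the whole dataset, so it identifies rather than pseudonymises.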
This is some great work on tracking down all of these measures to track users. I really hope we get to the point where dumb ads rule the web once more. Hopefully this results in more than a slap on the wrist, but I doubt it.
Why should ads rule the web at all? Surely the cleverest engineers to walk the planet can come up with a new way of making money that doesn’t involve psychological manipulation.
> Surely the cleverest engineers to walk the planet can come up with a new way of making money that doesn’t involve psychological manipulation.
If they could, they would've already done so.
One of the things "the cleverest engineers to walk the planet" would probably need to do is to increase consumers' willingness to pay for good content by a factor of ~10 for e.g. online newspapers with quality journalism to be profitable, which frankly sounds near-impossible.
> One of the things "the cleverest engineers to walk the planet" would probably need to do is to increase consumers' willingness to pay for good content by a factor of ~10 for e.g. online newspapers with quality journalism to be profitable, which frankly sounds near-impossible.
After more than two decades, newspapers still haven't figured out that even someone like me, who actively wants to pay for good journalism, cannot subscribe to every newspaper there is.
I already voluntarily pay for two newspapers and involuntarily pay for the national news-and-a-little-propaganda service. Oh, and I donate to the Guardian half the time I visit them.
If more papers allowed me to pay per view I would likely spend more money on journalism.
But I'm not going to have another subscription right now, thanks.
Not that I think their proposition is better, but the Brave people in particular are trying to push a different model with their attention token scheme. So it's not that no one can think of something different; it's that it's enormously hard to get people on board when the old advertisers are holding on to everyone by every means at their disposal, legal or not.
Brave is trying to be the middleman and launching their own ad network. I think browsers forcing a business model onto publishers still isn't the right answer.
I disagree; the technical issues are relatively easy to solve, assuming there’s enough budget and buy-in. The issues with these forms of targeting are structural/cultural, and AdTech is a surprisingly slow-moving ship.
I think there are simply engineers that are fine with the current state of things. You mention specifically "come up with a new way of making money"; however, for extrinsically motivated people, why reinvent the wheel? Problem solving can mean thinking up a solution or implementing a solution.
In the same vein, there may be engineers that enjoy working on this type of problem - how to identify someone that is actively avoiding you. The current iteration of Do Not Track mentality only makes the problem more interesting by putting up restrictions.
I agree with you in theory, but until we figure out micro-transactional payments that work globally, it seems ads are a good stepping stone. People want to get paid for their work; some users are willing to pay with cash, some with attention to ads. We should not give up our privacy or anonymity for this attention, though.
Micro-transaction payments are probably a long way away, for non-tech reasons. Briefly, you might have to deal with collecting and remitting sales taxes or VAT in any jurisdiction in which you have paying readers.
Until there is some sort of agreement among the relevant jurisdictions to greatly reduce the pain of this, direct micro-transactions with your site's visitors are likely to be a bureaucratic nightmare.
Sad that Brave did not do their work correctly: the google_push parameter they are talking about is not an identifier. Otherwise, it's true that RTB should not exist and violates the GDPR, but it's so complex that even Brave was not able to correctly state the workflow.
“Starting in mid-April, we will begin assigning a URL-safe string value to the google_push parameter in our pixel match requests and we will expect that same URL-safe string to be returned in the google_push parameter you set. This change will help us with our latency troubleshooting efforts and improve our pixel match efficiency.”
Okay, but the `google_push` parameter seems to be the same for all adtech providers swarming on the same user in the same RTB session. Nothing in your comment contradicts the claim that this allows them to sync up profiles for that user across providers, in the way that the switch to per-provider `google_gid` values supposedly blocks.
Sure, but as long as the adtech providers each have their own stable IDs for you, they can still use `google_push` to link their corresponding stable IDs together, uniquely identify you, and merge their respective profiles.
====
Page View #1:
- Acorp: google_gid=qwerty, google_push=foo
- Bcorp: google_gid=asdfgh, google_push=foo
- Ccorp: google_gid=zxcvbn, google_push=foo
By exchanging their `google_gid` values corresponding to the page load with shared `google_push` value foo, Acorp, Bcorp, and Ccorp can identify you as user qwerty-asdfgh-zxcvbn.
====
Page View #2:
- Acorp: google_gid=qwerty, google_push=bar
- Bcorp: google_gid=asdfgh, google_push=bar
- Ccorp: google_gid=zxcvbn, google_push=bar
By exchanging their `google_gid` values corresponding to the page load with shared `google_push` value bar, Acorp, Bcorp, and Ccorp can still identify you as user qwerty-asdfgh-zxcvbn, even though the `google_push` value has changed.
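The two page views above can be sketched as a simple join. This is a hypothetical illustration of the attack being described, not anything documented by Google: each provider logs the `(google_push, google_gid)` pairs it sees, and colluding providers join their logs on the shared `google_push` value. All provider names and ID values are the made-up ones from the example.

```python
from collections import defaultdict

# Each provider's log of (google_push, google_gid) pairs from its own
# pixel-match requests, across page views #1 ("foo") and #2 ("bar").
acorp_log = [("foo", "qwerty"), ("bar", "qwerty")]
bcorp_log = [("foo", "asdfgh"), ("bar", "asdfgh")]
ccorp_log = [("foo", "zxcvbn"), ("bar", "zxcvbn")]

def merge_profiles(*logs):
    """Join per-provider gids on the shared google_push value."""
    by_push = defaultdict(list)
    for provider_log in logs:
        for push_id, gid in provider_log:
            by_push[push_id].append(gid)
    # Every page view yields the same tuple of stable per-provider IDs,
    # which together act as one cross-provider user identifier.
    return {push: tuple(gids) for push, gids in by_push.items()}

print(merge_profiles(acorp_log, bcorp_log, ccorp_log))
# {'foo': ('qwerty', 'asdfgh', 'zxcvbn'), 'bar': ('qwerty', 'asdfgh', 'zxcvbn')}
```

Even though `google_push` changes per page view, each page view independently yields the same qwerty-asdfgh-zxcvbn tuple, which is why rotating it doesn't prevent the merge.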
I now see your point, thanks.
I was thinking this “google_push” is probably not unique (i.e. many users could share the same value), but the adtech providers could check the ids + timestamps to help with the match. NB: Google is not syncing with everyone on the same page view, so the adtech providers have to be lucky enough to be synced on the same page view. Another question is: how much entropy does “google_push” carry?
Having worked in adtech, I can tell you the adtech providers probably don’t do that, for those reasons:
1) those adtech providers are usually competitors
2) if they work together, they can already sync their user ids directly together (so using google id is not necessary).
So I don’t think Google's intentions were malign here on this particular point (contrary to Brave's communication and all the press coverage).
But yes, Google shouldn’t add entropy by sending the same “page view id” to different adtech providers. Note that Google is “better” than the others here: every other adtech provider sends the same user id to each partner (a persistent identifier, not a session or page-view one like Google's). And those providers are sometimes quite big: for example, AppNexus or Criteo trackers are also everywhere on the web. Overall, it’s the RTB system with all those cookie syncs that shouldn’t exist, and except for the “google_push” argument, Brave's study is quite good (they are just explaining how the adtech world works).
(5) ‘pseudonymisation’ means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.
Can somebody explain in simple terms what Brave is actually accusing Google of doing? The article seems to be written in language that mirrors the GDPR legislation rather than language actually meant to be read by people, and I can't figure out what the "workaround" actually is.
> Google claims to prevent the many companies ... from combining their profiles about those visitors
> Brave’s new evidence reveals that Google allowed not only one additional party, but many, to match with Google identifiers. The evidence further reveals that Google allowed multiple parties to match their identifiers for the data subject with each other.
BTW, many comments in here seem quick to agree w/this headline given how buried the details are. If someone has better detail, please share it.
Essentially, Google assigns an anonymized identifier to a user and sends that to prospective ad buyers. The idea is that the ad buyer can use this to target ads to people who have visited their site as they browse other areas of the internet participating in Google's auction. This is called remarketing.
An example. You go to footlocker.com and put a pair of sneakers in your shopping cart but decide not to buy. When you go read an article on the New York Times site, a potential advertiser recognizes your anonymized id and bids to serve you an ad for the sneakers.
The issue Brave is raising is that the same anonymized id is served to each potential ad buyer. This isn't an issue with data Google collects or exposes, but Brave states that buyers could theoretically collude to build profiles by sharing the data collected on their own sites with each other joining by Google's identifier. There is no evidence of this actually happening and Google's contract with ad buyers specifically prohibits this activity.
> essentially, Google assigns an anonymized identifier to a user and sends that to prospective ad buyers.
If it's anonymized then how could they send targeted ads to you? I think you're using a slightly different version of the word anonymous.
The way I use the word anonymous, it means, roughly speaking, that the data can't be traced back to you. Or in this context, that Google wouldn't be selling anonymized data to third parties who in turn could contact you.
If they were selling data like "X persons like product Y more than Z", there would be less of an uproar about this.
> Fuchsia’s engineers wanted to create a secure platform, but the advertising team, at the time, believed that privacy “goes against everything [they] stood for.”
Brave is incentivized to push this narrative, accurate or inaccurate as it may be. I am not an ad-tech guru, nor a digital marketer. I do know that Brave's entire premise hangs on traditional ad-tech strategy remaining static, consumer sentiment around "big tech" souring, and a groundswell of "privacy focused consumers" materializing. That groundswell is their identified target market for their product.
Which is the reason Brave is in a good position to do this kind of work. They represent a growing portion of web users, and their research helps to give these users a voice.
What's funnier is that Brave's """product""" is nothing more than a theme over Chrome that any 12-year-old could put together in 2 hours, an adblocker based on FOSS blacklists, and some compilation flags that stop Google from enabling its own server features and tracking system, with the tracking redirected to Brave's own servers instead. Yet their entire PR and marketing is based on "Google is evil!". In any other industry this scam would have been shut down and the management sued, probably into jail time. But in tech, many things are blurry.
I also see how Brave likes to thrive in the anti-Google, pro-privacy camp, and I personally pick Firefox over Brave any day of the week.
There is de-Googled Chromium OS project, but Brave takes a few steps sideways by making further changes such as proxying location services, safe browsing API, etc. I doubt a 12 y/o could compile it though, let alone in 2 hours.
EDIT: since everyone seems to be mentioning the 4% rule, I'd just like to point out that I'm not denying the existence of this, just denying that it is actually effective. Google has violated antitrust before, and walked away with a "big" fine that's a slap on the wrist. They've violated GDPR before as well once or twice, and got a "record breaking" 57MM$ fine. The 4% rule exists and clearly isn't enforced well. I know a lot of people love GDPR but I would be beyond shocked if the EU actually managed to hit Google with something that sticks. I very much hope I'm proved wrong!
This sort of resolution was inevitable.
I said it before and I'll say it again: GDPR is an annoying measure for developers, small businesses and startups. It doesn't do much other than put in place so many steps that growth tools for startups become risky to use. For big businesses that (ab)use big data, it's not much of a hassle because they can afford the legal steps as well as the change in infrastructure. They can even work around it and keep abusing data without consequences.
If they're able to beat Google's lawyer army and actually prosecute them, then Google will take a whopping fine in the millions of dollars that'll be more than covered by their daily revs.
The European Union has decided that growth based on clandestine tracking of users, selling their PII without consent is not a legitimate growth tool.
You know, like the way we outlawed violence as a "growth tool"
Your other claims are more reasonable. But they would lead me to the conclusion we need bigger fines on bigger businesses. Not absolutely bigger, as the law already does, but relatively bigger.
The more power you have to break the law, the bigger the stakes should be.
> Your other claims are more reasonable. But they would lead me to the conclusion we need bigger fines on bigger businesses. Not absolutely bigger, as the law already does, but relatively bigger. The more power you have to break the law, the bigger the stakes should be.
GDPR penalties are a flat fee or a percentage of revenue, whichever is higher.
If Google is truly willfully violating the GDPR, the maximum penalty by law could be up to 4% of their global turnover. I would not call that pocket change. But more importantly, it is a relative increase in fine based on the law breaking company.
(Will the EU actually fine Google ~6 billion dollars? Perhaps we will find out!)
Given that "European Commision fines" is its own bullet point under "Costs and Expenses" in Alphabet's latest quaterly report, that view sounds about right.
The GDPR does consider willful violations and a pattern of behavior.
4% the first time might be something you can shrug off, especially for a company the size of Google. But if you continue breaking the law and give the regulators an easy second or third bite at the apple...
I’d expect Google makes no changes and fights the regulators the first few times.
E: I used to have a comment here about Google continuing their current practices against non-EU people but it appears from my reading of the GDPR that may not be so simple
Unless I'm misunderstanding what you mean by absolute and relative, I think the law is already relative:
> The maximum fine under the GDPR is up to 4% of annual global turnover or €20 million – whichever is greater – for organisations that infringe its requirements.
In context, "relatively bigger" would mean something like a progressive tax bracket. $20MM up to $500MM rev, 4% up to $1BB rev, 5% up to $2BB rev, 6% up to $5BB rev, etc...
A straight 4% would be absolutely bigger, but relatively the same (once beyond $500M).
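The two schemes can be sketched side by side. The flat cap is the actual GDPR rule quoted above (€20M or 4% of global turnover, whichever is greater); the bracket thresholds and rates in `progressive_fine` are the commenter's hypothetical numbers, not anything in the regulation, and the top rate is an assumption.

```python
def gdpr_fine(revenue):
    """Actual GDPR cap: EUR 20M or 4% of global turnover, whichever is greater."""
    return max(20e6, 0.04 * revenue)

def progressive_fine(revenue):
    """Hypothetical progressive brackets from the comment above (rates assumed)."""
    if revenue <= 500e6:
        return 20e6
    elif revenue <= 1e9:
        return 0.04 * revenue
    elif revenue <= 2e9:
        return 0.05 * revenue
    elif revenue <= 5e9:
        return 0.06 * revenue
    return 0.07 * revenue  # assumed top rate beyond the quoted brackets

# At Alphabet-scale revenue (~$136B in 2018), the flat 4% cap works out
# to roughly $5.4B, consistent with the "~6 billion" ballpark mentioned earlier.
print(gdpr_fine(136e9))
```

Below the $500M threshold the two schemes coincide at the €20M floor; the difference only shows up for the largest companies, which is the point of the proposal.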
This is a good idea. However, what about phantom businesses that commit the violation but have no real revenue?
My problem with fines is that they don't really force the PEOPLE in the businesses to play fair.
What about required and [RESPONSIBLE] roles, with jail time?
I'm asking this in general, because I am fed up with our world of business entities in which committing a crime is basically RECOMMENDED if the numbers and percentages say so.
I think it's not about "income brackets", it is about profit margins, which can vary a lot between industries. 4% of revenue is enough to bankrupt a traditional business like Walmart, with its profit margin of 2.48%. Google is a high-margin business, with a profit margin of 25%, so even the maximum GDPR fine is something they can just write off.
So just make the fine a portion of profit?
Maybe a three layer system that takes into account flat euro rate, a percent of revenue, or a bigger percent of profit; whichever is highest.
Making the fine a percentage of profit would be even worse: Amazon, for example, has no taxable profit at all, so the GDPR fine for them would be $0 (or $20M, which does not make much difference). And having different fines for different industries, based on gross profit margins, could be viewed as discriminatory, and therefore ruled illegal.
Though I think the GDPR is bad law in some ways (chiefly in terms of the chilling effect on small operators), I think that allowing the cap on the fine to be revenue based (and specifically global revenue based) is nearly essential.
Otherwise, you get into accounting chicanery (or outright loss-making companies being able to operate with impunity while they grow).
There's nothing stopping the enforcement action to take into account the underlying profitability if something like a grocery store were to run afoul of GDPR.
The fine is already kinda big for GDPR (4% of global revenue for big companies), but Google has gotten away with way worse on way more regulated fronts, e.g. their antitrust case, which slapped them on the wrist.
If GDPR wants to be effective they need a ridiculous company-breaking fine for big abusers like COPPA and the like. Something like a per-user fine for violations that means they either can't do business in the EU or they have to become compliant otherwise risk being basically destroyed (for reference, violating child protection laws in the US can break and have broken companies before).
EDIT: also, as for the tracking-growth comment -- I agree on this, but the effects of GDPR on growth tools reach far beyond this. Even basic metrics are hard to get without a bunch of hoops. Even if you store no data, you have to have a bunch of checkboxes and banners everywhere. Just to use Google Analytics, which relies only on what it knows about your computer (fingerprinting) and no PII at all, you need a banner and a privacy policy. The laws are making it hard to use even basic analytics without fearing a misstep.
>Google has gotten away with way worse on way more regulated fronts i.e. their antitrust case which slapped them on the wrist.
Although I feel that Google has won its position in search, etc by offering a legitimately better product, we need to punish companies that continue to break the law. Success does not put you above the law. If you use your massive profits to absorb fines for breaking the law repeatedly, you should lose your ability to operate as a corporation and be dissolved. We need to reform antitrust laws. It's not just being able to control an entire market anymore, it's about being able to ignore international law because you have so much money.
IMO, punishing data abusers can't be solved with data privacy laws. It's a regulation with very little ability to be enforced.
In the first place to abuse data you need a lot of it, so really we're looking mostly at big companies with big pockets. The easiest way to attack here is to punish unfair markets and to have stronger antitrust laws.
Let's look at the textbook case for privacy with Facebook. Facebook would be the prime candidate for antitrust no matter how you look at it: there are basically no competitors to it in the US social media market. They own everything except Snapchat, which is dying off and failing to turn a profit. Facebook accounts for so much presence in the US that they have login buttons you can integrate on different sites (Google does too because of how crazy cemented they are). Yet somehow besides being so obviously monopolistic and out of control they're hit with no antitrust. Bell was split up for doing much less.
Antitrust is just a joke right now. We have to get better enforcement first before looking to create regulations to be enforced.
Antitrust was neutered in the US by the Chicago School of Economics and the legal theories of Robert Bork (he of the Saturday Night Massacre and failed Supreme Court nomination).
Let's hope Lina Khan and the seeming bipartisan consensus that Big Tech needs taming are the beginning of an antitrust renaissance.
These are valid points but I still think it's reasonable trade off.
Yes, I grant that the popups, banners, etc. might be annoying, but all companies share this problem. And I find it hard to believe it really hurts a business with a valid product.
> If they're able to beat Google's lawyer army and actually prosecute them, then Google will take a whopping fine in the millions of dollars that'll be more than covered by their daily revs.
This is why the 4% of global annual revenue fine option exists. A few of those add up quick.
Furthermore, if organizations are explicitly accepting penalties to keep on violating the law, that law will be adjusted accordingly. It might take a while, but it will happen. Laws with the intention to change behavior / culture cannot "work" from day one. This is a continuous process steered by politics, courts, governments, and public opinion.
Penalties can be applied repeatedly if the violation continues. It's not a 4% lifetime cap, it's 4% per enforcement action. The DPA can just churn them once a week and then we're at 204% of annual turnover.
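The arithmetic behind that "204%" figure is just the weekly 4% cap stacked over a year, assuming (hypothetically) one enforcement action per week for 51 weeks:

```python
# Back-of-the-envelope check of the repeated-penalty claim above:
# a 4%-of-annual-turnover penalty applied once a week, 51 times.
per_action_rate = 0.04
actions_per_year = 51

total_share_of_turnover = per_action_rate * actions_per_year
print(f"{total_share_of_turnover:.0%}")  # 204%
```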
And I'm very annoyed that your initial reaction to reading this article is to blame the GDPR instead of blaming Google for these shady practices. Boycott that crap, move to other services. This shouldn't be acceptable.
I'm very happy that the GDPR exist, if only because it forced all these websites from explicitly giving me a list of the literally hundreds of partners they want to share my data with, along with a way to say "hell no". Of course Google and friends will try to work around it but hopefully that won't come to pass and they'll have to actually bother changing their crappy business model. I think the spirit of the law is fairly clear, I wonder why Google thinks this scheme can work. Maybe they're just trying to buy some time.
As for startups that sink because they can't be bothered to sanely handle my personal data: good riddance.
- my initial reaction is to blame GDPR, yes, because it's just security theater that does so little to actually ensure privacy. Sure Google is at fault but GDPR was supposed to regulate this and it is clearly failing to do so. And if you want to boycott it you're welcome to but they've built an empire with their cloud, search, email, etc to the point where that would be pretty difficult and annoying to the average consumer. They're effectively too big to be boycotted at this point.
- It doesn't explicitly force them to do that. And most sites aren't explicitly sharing the data either; e.g. almost every site uses Google Analytics, which doesn't comply with Do Not Track all that well, and Google will then share that data with everybody else (which is part of their violations in this article). Also, saying "no" doesn't do much either, as most of these big sites either already stored the cookie or won't do much to delete it. And it's not that they'll try to work around it: they either don't put forth the effort, because GDPR is like a pebble for them, or they already worked around it in a way that changes nothing. Their business model is still exactly the same.
I think the spirit of this law is fine, but the actual law does nothing and is just privacy theater. Google isn't buying time, almost 4 years later nothing has changed -- they just know they can't be touched.
- They're not going to sink, they'll just grow much slower and thus won't be an alternative to the big data abusers you hate so much. And they're not failing to handle your data, most of the time startups aren't selling you out they're just trying to figure out who their customer is internally. To do this they collect some data that is usually optional and very much with your consent; GDPR just puts a bunch of hoops in front of this so that it's an enormous pain to do so. I run a startup that collects basically no data (literally, we do not have a database for 2 of our products). It was a pain for us to become GDPR compliant because that disables our metrics entirely and requires a bunch of banners and checkboxes everywhere even though we literally store nothing.
I'm all for the spirit of the law. I just think the execution sucks and they definitely didn't think it through enough. I think the evidence for this is clear based on the sheer number of privacy violations we've had since GDPR was enacted alone, and how little enforcement and regulation has actually gone on.
We're not four years into the GDPR being enforceable (it only became enforceable last year), and certainly not enough time has passed to see real action on complex cases - regulators and courts move slowly.
Startups barely ever asked for consent, and collect way more data than you imply - including, as you mention, by using Google Analytics.
The GDPR requires zero banners and checkboxes if you are not processing data.
> GDPR was supposed to regulate this and it is clearly failing to do so
How do you know? The fact that there's still crime doesn't mean that law enforcement doesn't do anything. Somebody reports an alleged violation, an investigation is started, and maybe the investigation will produce that Google is in violation (in which case a fine will "regulate" Google's behavior) or they are not (in which case it may be fair to criticize GDPR for allowing that behavior, or it may not be because the info wasn't correct).
> almost every site uses Google analytics which doesn't really comply with do not track all too well, and Google will then share their data with everybody else
Unless you have specific info, I believe you're mistaken here. GA is generally seen to be compliant if anonymizeIp is active and you're not pushing PII into it via customization. Google is, if I understand it correctly, not "sharing" GA raw data with anyone, but analyzing the data for their own research and providing the website owners with aggregate data (i.e. demographic information) without sharing data on individuals. I'm not a fan of GA, but I haven't seen any info that they're that obviously in violation.
> It was a pain for us to become GDPR compliant because that disables our metrics entirely and requires a bunch of banners and checkboxes everywhere even though we literally store nothing.
What was the specific pain? I get that having to add a privacy info sucks (and might cost money if you get a customized version), but I never found it to be that big a deal. If you don't store any PII, it's pretty straight forward, and so will the procedures be if anybody asks about the data you stored: just inform them that your systems do not store any data in general and also didn't store any data on them in particular.
From my (admittedly limited) understanding, this is not actually legal under the GDPR. Certainly the alleged (but not demonstrated) behind-the-scenes trading of personal info isn’t, but the shared id is also personally-identifying information, and directly regulated.
It is very much not legal, but I think the parent was saying these regulations are more onerous for small dev shops than for Google, and the fine for this will be minuscule. Hopefully companies will find paths to revenue that do not require selling out their users to this level, maybe by just having ad auctions without any identifying information at all.
Like it or not, the internet currently would not exist without the ad-supported business model. Regular users expect everything on the internet to be free. I know people who categorically refuse to buy a 1-dollar app. It‘s starting to change now, with people getting used to subscription models (Netflix, etc.). But it will take a while until we start paying for news again, for example. Apple News+ and the Google News Initiative are a step in the right direction, even if it‘s just for aggregate subscription management.
> Currently the internet would not exist without the ad-supported business model.
In its current form, yes. However, I'm not so sure that everybody here would agree that "the web today" is fundamentally better than the web ten years ago, technological advances aside. Everybody smelling gold and starting a blog to mindlessly shill for products in hopes of getting a commission, super-low-quality texts written/generated/spun entirely for SEO reasons to place ads between the paragraphs, etc. don't come to mind when I ask myself "what could be better on the web?" If those things disappeared tomorrow, we'd notice them being gone (it might feel like being able to breathe freely after a bad cold), but I don't think many of us would miss them.
> But it will take a while until we start paying for news again, for example.
Plenty of people pay for news. They won't pay for Gawker or Buzzfeed though. I don't think that's a problem for anybody not invested in or working for those companies.
"The more serious infringements go against the very principles of the right to privacy and the right to be forgotten that are at the heart of the GDPR. These types of infringements could result in a fine of up to €20 million, or 4% of the firm’s worldwide annual revenue from the preceding financial year, whichever amount is higher."
For Google that would be 4% of its worldwide annual revenue, I'd assume. Taking into account that it's not one infringement but multiple that could mean a pretty hefty fine.
The first GDPR fines handed down by the ICO have been hundreds of millions of pounds for negligent breaches - I don't think it would be out of the realm of possibility for breaches by _design_ to result in multi-billion pound fines.
Several billion pounds is still not much! They already broke GDPR once (or twice I think?) And received a 57MM$ fine.
57MM$ is nothing to Google. They've escaped even antitrust cases with minimal injury, it would be a truly shocking event if the EU actually managed to touch them.
This has always been absurd. Large companies have way more code and features in general which need to be checked for compliance, whereas small shops with small sets of data and features will have a far easier time complying with GDPR.
Large companies have the means to pay for the manpower (lawyers/consultants, developers, etc) to certify compliance with GDPR. Small companies often don't.
I paid $2K for my first GDPR consulting session for a $7K MRR app and was quoted ~$25K for consulting while I would personally implement what needed to be done. $25K is nothing for a large company, but it's prohibitively expensive for a lot of small companies. This cost also doesn't include the (probably hundreds of) man hours required to implement and certify GDPR compliance, which are also disproportionately valued when it's being done by 1 person in a <5 person company versus N people in a >5K person company.
Hopefully these costs will fall as more people become legally knowledgeable about what GDPR entails and the market of people available to help grows. Unfortunately there's no "feel free to wait if you can't afford it yet" clause in the GDPR.
Another issue is that as a small company you generally lack the resources to effectively contest violations. Google can, and will, drag these things out in court for years. And ironically for free. Their legal costs are going to be covered by inflation on the fines themselves. 2% inflation on a $1 billion fine reduces it by $20 million a year. And also factor in the interest Google is earning on that $1 billion on top of the 2% 'principal' reduction per year.
The whole penalty system is quite silly. The fines destroy small companies who are the ones struggling to comply, and do little more than offer extremely gentle pokes on the wrist for megacorps that have relatively unlimited resources available for complete compliance, if they actually wanted to comply.
Even from the basic point of view: People go to Facebook, Amazon and Google daily. They accept the GDPR privacy policy once. Every single other website is bombarding users with popups, so there's a far greater chance users will click off from a startup's website.
It's not legal, but there isn't much the EU can really do. It would be shocking if they actually managed to prosecute Google, which has so far avoided much hassle in antitrust and the like, taking, I think, a billion-dollar fine, which sounds like a lot but is basically a slap on the wrist.
That's why, IMO, GDPR sucks for small businesses, which can be reported to the ICO for a minor oversight, and not so much for big data abusers that can take on GDPR and come out unscathed.
That sounds like something fairly trivially avoided by having the punishment be proportional to revenue. And I believe this is already the case for GDPR?
The EU has shown that it's willing to scale up the fines all the way if the company in question keeps violating the law. Alphabet's global revenue in 2018 was $136.8 billion, so the maximum fine is $5.5 billion, which is in the vicinity of fines they've already received. It's a separate post in their yearly financial report. The gain must be significant if they keep violating the laws anyway.
This is being quoted in every comment but if you have enough lawyers anything is possible.
Google has come out of antitrust cases relatively unscathed. They've even explicitly violated GDPR itself once before, and got out with a $57M fine. This case won't be any different than all the other times that Google has blatantly violated laws and walked away with a slap on the wrist.
I would be very very very shocked if the EU actually managed to touch Google. I welcome and hope to be proved wrong.
I would argue that it's the same problem and the reason GDPR is privacy theater.
It's a lot of regulations that can be worked around and the fines are hard to and rarely enforced. There are a bunch of poster children of GDPR fines that make it seem like it's doing a lot but the principal abusers (i.e. Google) just walk away with a light slap.
It needs the ability to be enforced, and I think this much should be obvious to lawmakers -- a law that can't be enforced well is useless.
That's why I'm calling it privacy theater. It's the EU saying "look what we did!", but in practice it doesn't really do much without enforcement, which still does not exist at either a national or global scale.
Targeted ads are already a serious leak of information.
If somebody looks over my shoulder and sees the ads presented to me, they can infer things about me.
Also, if a malicious actor targets an ad to a group of people, and some of these people buy the advertised items, then the actor can infer things about those people not necessarily related to the items sold.
At my last job the traffic was filtered through a proxy due to FINRA regulations. I’d see Portuguese ads for diabetes medication and there were 2 Brazilian guys in the office.
HIPAA only keeps healthcare providers from sharing your information. It's not an omnibus shield for your health information. If Alice tells her coworker Bob that she had diabetes, it's not a HIPAA violation for Bob to tell Charlie.
Are Bob and/or Charlie the name of a person or of a company?
The way you're using it, it sounds like Bob or Charlie is a person in your mind. I might be wrong in interpreting it that way. If so, could you give another example where Bob and Charlie are companies and the information about Alice is part of a transaction?
> If somebody looks over my shoulder and sees the ads presented to me, they can infer things about me.
You have to take some personal responsibility, though. If they saw your Youtube recommendations or your Spotify playlist, they'd probably make inferences as well. That porn link in your history you forgot to clear? Be aware of who's watching and browse anonymously if you're concerned.
I've had ads for things that I only just spoke about, out loud, to someone near me like a friend or family member, show up on a computer in a different country.
I've had ads for things spoken about show up on FB. I have more of a libertarian mindset, but that really creeps me out, and I think speech-based ads should be outright banned due to privacy concerns. It's not so much the ads; it's being recorded and potentially having those recordings leak in a data breach.
Or it's just one of 100 coincidences that happen to you every day.
Easy to prove: store a log of all your network traffic, record all the audio you speak, and then when you see a match, go back, find the proof, and become world famous.
It was widely believed for literally years until the Senate Judiciary and Commerce committee hearing in 2018 where Zuck called it a 'conspiracy theory'. Since then it has been dismissed as such. My question is - if I personally observed it before I even heard about this 'theory', and thousands of others around the world also observed the same thing, why are we dismissing it as a 'conspiracy theory'? Just because Zuck labelled it as such? Why are we trusting him to tell us the truth again?
I dunno man. This reminds me of the time that someone at defcon said they found a vulnerability in my last company's product because it flashes a WiFi password to an iot device instead of making a user type it in.
"What if we capture the flashes and steal the password?"
Well, if you're positioned to capture the flashes, you're definitely positioned to just watch me type it in...
Would you be ok with it if your browsers at home, in the office and on your mobile phone always showed your bank balance on the top of the screen in a large font?
I assume most users would not. But they would be ok with their bank balance being shown if they specifically opened their bank website.
Imagine someone giving a presentation to room full of co-workers and a web ad comes up saying something like "Resubscribe to Cannabis Weekly Delivery and get 10% off."
It's not hard to imagine a person's career being affected by something like that.
Yes, that's why targeted ads shouldn't be a thing unless it's opt-in (not necessarily my opinion but it seems to be the point the parent was making). At that point, to opt-in you can create a google account. Currently though, Google will attempt targeted ads on people without a Google account by trying to identify and track them through other means.
Ideally you would have site-specific or content-specific ads normally and personalized ads if you created an account and chose to opt-in.
My children tease me about "being a hacker", by which they mean unlawfully breaching security of internet systems, because they've seen me reading "hacker news".
There is a reason they didn't. They fear the US government's reaction.
Edit: Why downvote? Do you really think that the US government will stay silent if the European Union threatens with such fines? Political tensions are something you take heavily into account.
I'm _really_ tempted to write that they could use the fine to finance the information campaign, but I know that government finances don't work that way.
Governments are likely walking a much finer line than we might imagine. Imagine they carried out your idea. The EU is a political organization manned by a large number of mostly professional politicians. Google is world's largest data harvesting and advertising company whose products are used, on a daily basis, by a pretty sizable chunk of our entire species' population. Imagine if Google decided to fight back. Who would be able to create a more effective "information campaign"?
I can't help but consider one current "information campaign" in the UK. In response to skyrocketing violent crime, they've chosen to put anti-stabbing/knife messaging on fried chicken boxes. Literally [1]. Knife amnesty bins and fried-chicken anti-knife messaging. That's a government "information campaign" through and through.
Consider Brexit for a minute. The most recent polls show support for leaving, but throughout it's been extremely close. However, multinational corporations are universally against Brexit: a global world is a more profitable world. And these same multinational corporations tend to have a stranglehold on the places most people get their news from. This can be from the news agencies themselves (Disney owns ABC, Comcast owns NBC, Time Warner owns CNN, etc.), but more directly also from the way that people get their news. For most people that is Facebook and Google. And these corporations tend to promote what is in their own best interest. As a specific example, CNN ends up being chosen for about 20% of Google's news recommendations. It's a deeply partisan site that's not uniquely popular and has a dubious track record when it comes to reliability. But their agenda and Google's agenda fit nicely.
Consider the two topics above, combined. The global media has nearly universally tried to condemn Brexit. And while media clearly doesn't have as large an effect as some would like to imagine (Facebook was seeing an exodus of young users before any media outrage; it's become the social media site for your mom), it equally clearly does have at least some effect. So imagine Google simply swapped its bias and was suddenly disproportionately promoting messaging-cum-propaganda against the EU, in favor of Brexit, promoting things such as the yellow vests in France, the various leave campaigns gaining momentum in other nations, etc.
When topics, even with the media disproportionately on one side, are so close - if that media that people were presented suddenly started lobbying for the other side, that would have a massive effect. I don't think it's hyperbolic to suggest that companies such as Google and Facebook could effectively cause the EU to collapse if they so desired. It's already on somewhat shaky ground with near universal media support. If they don't play ball with the companies that direct that media, that ground very much stands to give way.
That doesn't mean the governments are completely obsequious to the corporations, yet, but it does mean that the corporations are also in no way obsequious to the governments. And I think this balance of power is one major reason that we increasingly see governments reluctant to do anything that could meaningfully hurt mega corporations or other very powerful players. It's also why I see us gradually heading towards more overt corporatism. Corporations grow exponentially more powerful by the decade, and this shows no signs of abating.
GDPR enforcement is 15 months old and regulators aren't the fastest bunch. They're also cooperation based as the goal isn't to serve fines but to ensure compliance: If you cooperate, you might get away with no fine at all (depending on circumstances).
Also, the 4% global revenue fine hasn't been exercised yet because it's the maximum fine, and there needs to be room for escalation: it's hard to serve a bigger punch if you're already at the maximum.
The matching is in itself a violation of privacy, at least if you interpret the right of privacy as "The right to be left alone", as former Supreme Court Justice Louis Brandeis put it.
Yes, that's what I was thinking too. For Google, being in the ad business itself necessitates that Google's trajectory will be on shaky ground w.r.t. privacy.
The original question was "Is there any way to improve the matching of ads to the viewer without violating their privacy?"
Your answer is that we should match something other than the user, that happens to correlate with user interests. That is, by definition, not matching ads to viewers.
I think either our idea of "by definition" or something else differs.
Viewers get ads matching their interests, as proven by the fact that they are on a related website. I don't see how that isn't "matching ads to viewers"?
Thanks, didn’t know about that.
I wish I had received a better feedback there, though, as I’m deeply interested in the topic and generally it’s quite hard for me to find negative opinions, counterarguments I could work on. I find negative feedback more helpful, if it’s constructive.
I think maybe the problem with the comment was that it started with "I think that's incorrect" but reads like a non sequitur. The comment to which yours was replying claimed that matching ads to viewers is itself a privacy violation, but your point seems to have been that matching ads to viewers is unnecessary, which, although related, is a different point and doesn't follow from your thinking that the other commenter's point is "incorrect".
I think you're right that "site content" is a good-enough proxy for users/viewers/targets for advertising, though I also readily understand why advertisers would always like more info with which to target their ads.
Yes! Contextual Targeting (targeting based on what I am reading) could work, although the industry seems to be clinging to Behavioural Targeting (targeting based on who I am). This will become more important for the open web due to 3p cookie constraints, regulatory changes, etc., but Google/Amazon/FB are less likely to be impacted.
In fact, Contextual Targeting predates the current approaches, but it became less important once advertisers/adtech companies started preaching the thinly veiled idea of using behaviourism to trigger conversions/sell products.
Changes like this are slow to introduce due to technical and (mainly) structural/cultural issues in the Advertising Industry, but that’s a topic for an entire essay/series of blog posts.
Source: I work in AdTech and deal with privacy/the ethical impact of programmatic, content monetisation models. Opinions obv. mine.
Mind sharing which company you work for? I'm interested in scaling contextual, privacy-focused publisher targeting solutions.
On the PG/PMP side, the usage is obvious, but I'm also curious what it will take for publisher-provided data to be trusted in the open exchange environment where historically advertisers and DSPs have tended to not trust the publisher-supplied categorization.
Yes, we can do contextual targeting, but after that, what are the ways to improve upon the outcomes (from an advertisers standpoint) without violating privacy?
Depends on the outcomes/metrics they’re interested in. For instance, I genuinely believe that targeting based on content is less dangerous from brand safety/brand perception perspective.
How about conversions, sweet $$?
I think replacing audiences with a semantic targeting model (NLP) could perform almost as well (if not better). Behavioural Targeting performance is overrated (feel free to look it up, esp. CPM vs. conversions; it's quite interesting, and I'm on holiday and shouldn't be sitting on HN anyway!).
Another, deeper point: how much advertising do we (or brands) really need? Do alternatives exist, and are they expensive/hard? What problem does advertising solve? I know this sounds like a silly question, but I think it's worth asking given the current technical landscape and the ethical impact.
Well... I'm totally OK with the efficiency of online advertising being low. My point was that after a certain point, there is no way to get a better ad-to-viewer match without violating privacy. Google was never on the right side of privacy, and given the business they're in, they never will be.
I think a large source of the difficulty engineers have in understanding the issue is this: randomly-assigned IDs don't anonymise users, because you can still attribute an action uniquely to one user (even if you don't know their name/personal details).
I think of it like the UID I get on a UNIX machine: it identifies me, anyone with /etc/passwd can get my name, and things that don't have access to it can still see "oh, uid 1099 is logging in again to play nethack".
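To make that concrete, here is a minimal Python sketch (the users and actions are made up) showing that a randomly assigned ID still lets an observer attribute every action to a single person, even without knowing who that person is:

```python
import secrets
from collections import defaultdict

# Assign each user a random, meaningless identifier (like a UID).
user_ids = {"alice": secrets.token_hex(8), "bob": secrets.token_hex(8)}

# An observer's log, keyed only by the random ID; no names anywhere.
log = [
    (user_ids["alice"], "visited diabetes-forum.example"),
    (user_ids["bob"], "played nethack"),
    (user_ids["alice"], "searched for insulin prices"),
]

# The observer can still group every action under a single identity.
profile = defaultdict(list)
for uid, action in log:
    profile[uid].append(action)

# Each profile belongs to exactly one person, so the IDs are
# pseudonymous, not anonymous.
```

The random value never reveals a name, but it links everything one person does into a single profile, which is exactly the distinction people miss.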
The ads you're referring to here are called display ads. Most of these ads are not about getting you to click, but about awareness.
It's the same thing as a full-page advert in a magazine that you flip through. Some readers will stop and read it, raising awareness of that brand/product/company, etc.
I take your point about being "exposed" to the product; however, it must be the case that ads want you to click them, since otherwise clicking them would do nothing, which is not the case.
A company spamming my eyeballs with visual ads to "raise awareness" does not get my money. I take particular offence to that, as it is my screen, not a billboard.
My point is proven by the HN website, which enjoys a large readership, largely because of the site's clean, ad-free design.
Slashdot used to be like that until they started displaying ads, and that is what brought me to HN. Thank you HN for not doing this.
I've visited plenty of sites made by small developers who simply use a banner ad to try to offset hosting costs. Hosting isn't a negligible cost for everyone, and content should be judged based on content, but you can of course judge a site as a whole with ads included.
At this point I'm just waiting for some of these tech companies to drop their analytics and drop targeted advertising, and just ask users for their advertising preferences or do advertising the traditional way... why do we HAVE to have targeted advertising? It's either hit or miss, or too creepy anyway...
> This, combined with other cookies supplied by Google, allows companies to pseudonymously identify the person in circumstances where this would not otherwise be possible.
At first glance, I would have thought this isn't a workaround at all. GDPR allows for pseudonymisation as a method of data protection. But, Recital 26 of the GDPR disallows this:
> The principles of data protection should apply to any information concerning an identified or identifiable natural person. Personal data which have undergone pseudonymisation, which could be attributed to a natural person by the use of additional information should be considered to be information on an identifiable natural person. [...]
That said, I don't think this is cut and dried, because Google themselves isn't providing the linkage to an identifiable natural person. The person that can make that linkage necessarily already has the identifying information. Get ready for a major legal battle.
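As a toy illustration of the Recital 26 point (all data here is invented): pseudonymised records become personal data the moment someone holds the "additional information", i.e. the mapping from pseudonym to identity:

```python
# Pseudonymised ad-log records, as a downstream adtech partner might hold them.
ad_log = [
    {"pseudonym": "g-7f3a", "page": "cancer-support.example"},
    {"pseudonym": "g-9b21", "page": "mortgage-rates.example"},
]

# The "additional information": a lookup table held by whoever issued the IDs.
mapping = {"g-7f3a": "alice@example.com", "g-9b21": "bob@example.com"}

# One join is enough to re-identify every record.
reidentified = [
    {"email": mapping[r["pseudonym"]], "page": r["page"]} for r in ad_log
]
```

Which party holds the mapping, and whether passing the pseudonyms around counts as making users "identifiable", is precisely what the legal battle would turn on.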
It's a mix of the major players in the digital advertising industry. Historically the IAB has taken actions that are good for the industry overall. It focuses on standardization/interoperability in the ad-tech space. It generally isn't a watchdog and doesn't regulate the industry except when the industry as a whole would benefit (e.g. self-regulatory programs that have no real effect but stave off state regulation).
The article claims that personal data is shared with 2,000 companies. As far as I understand, those companies do not receive personal information. I do not see real proof.
Regardless this is a very calculated PR move by Brave. And the privacy zealots and anti-google cadre on HN and elsewhere are eating this up. They are effectively giving Brave free advertising and playing right into their hands.
Accusing others of astroturfing and shilling breaks the site guidelines. You don't know why other users downvoted a comment, and jumping to public accusations about that degrades this site badly.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd be grateful. You've been breaking them a lot and we've had to warn you many times already.
We don't micromanage HN according to YC's particular investments in startups; what's in YC's business interests is to make HN as good as we can, because that's what keeps the community coming here. But if it helps, I don't think YC has any investment in Brave.
I hope you just do your job and prevent those systematic organized promotional posts by such companies on HN to get free advertising. The entire accusation in OP turned out to be invalid in the end as shown in many top voted comments. YOU know that this company with its fishy history (taking money on behalf of publishers without their knowledge or approval) and business model has been systematically exploiting HN and you're doing nothing but warning me. Banning me won't solve the problem anyway.
Most companies that produce articles in the hope of interesting HN are trying to "get free advertising" in the sense of attracting readers' attention. If the content is genuinely interesting to the audience here, there's nothing wrong with that.
If, on the other hand, they're using voting rings, astroturfing, or other forms of abuse to spike their ranking because the content isn't interesting enough on its own, that's bad, and yes we spend a lot of time counteracting that. But if you think that's happening, you should send links to hn@ycombinator.com so we can investigate, not post dyspeptic complaints about it in the threads. Those just add noise.
Hundreds of deflecting comments about coffee at McDonalds and astroturfing. Well done! Can we now talk about how Google uses creepy tactics to undermine privacy and the GDPR?
People are missing the point. Google is trying to reconcile GDPR with their previous business model, which is selling all that user data. They are not going to succeed, and GDPR is going to prevail.
It's really funny to see that yesterday I was branded a 'privacy nut' after the release of Android 10, as I was concerned about the privacy issues in Android. Then came the Go modules proxy issue, which raised suspicions about tracking usage statistics for module downloads, turned on by default without any consent, and now this.
I think there are some folks at Google who have just read too deep into both 1984 and The Google Book to go on to think that privacy violations like this is a normal thing. But what do I know? I'm just another 'privacy lunatic' on the net that wears a metal helmet (tinfoil hats are just not good enough) trying to protect my privacy.
The Go module hash checking seems to be more about avoiding the integrity and versioning issues encountered by other languages' package repos (cough, NPM), and in terms of tracking it seems about as invasive as Debian's popcon.
Security-related features can and should be enabled by default.
I tend to agree about the rest of the creepiness, especially anything personally behavioral.
On HN it is taken as gospel that any information sent to a server will absolutely compromise your privacy. If anyone points out that the information is trivial or useless the rebuttal is instant - it can be cross referenced with other sources to build a complete profile of you.
If you want to know which Go modules I use, go check out my github. They're listed right there in import statements. If I'm hacking on a project that I want to keep private, I'll disable this feature with a command line flag - easy.
My issue with the paranoid folks in that thread is not that they made no sense (they can't help that); it's that they were viciously attacking the person who implemented the feature. He had implemented a feature that a majority of Go developers had been requesting for 5+ years, and had done it in a way that improved clean build time, improved security, and could easily be disabled or replaced with a private DB. Literally what else could that man have done?
Even though all his work could be verified trivially (Go is open source!), they still chose to attack him.
No it isn't. Please tell me how you achieve security without storing hashes in a DB? The default is only for those who'd prefer not to run their own DB. You are welcome to run your own DB if you want.
Why are you so upset that other people will be using a feature you obviously won't? Why are you upset when this leaks literally no info that your github repo doesn't already?
In the default configuration, the checksum database currently has 657 level-0 tiles. The raw log data associated with each level-0 tile seems to be about 50KB. You could store the entire current checksum database log in about 30 MiB. You could store the level-0 hash tiles in 5 MiB.
This makes sense right? Cargo, the rust package manager, does replicate the entire package index using git and it is ~250 MiB (~150MiB in .git/) (The cargo index stores much more metadata in a more verbose JSON format). It seems like a very reasonable bet considering prior art.
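A quick sanity check of the sizes quoted above (using only the figures from this thread, not independently measured):

```python
tiles = 657               # level-0 tiles in the checksum database
raw_per_tile = 50 * 1024  # ~50 KB of raw log data per tile
total = tiles * raw_per_tile
print(f"{total / 1024**2:.1f} MiB")  # roughly 32 MiB, consistent with "about 30 MiB"
```

So the estimate holds up: the whole log is smaller than Cargo's package index.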
Is there any meaningful difference between what google knows about your modules and what npm knows about js modules? Or what a Debian mirror operator knows about your packages?
I get that Google changed the way go packages work, but I don't understand the difference between that and literally every package manager in existence.
It isn't, but it's installed by default, and I'm always amused when I get to that installation step. "Do you want to tell us about your system?" answer no "installing popularity-contest"
Debian's popcon is really irritating because Debian actually uses it as a source of evidence... systematically eliminating those of us who keep it off for privacy/security reasons.
Let me get this straight. You have an opt-in way of telling them what you use, which you don't use, and then you get upset because your use isn't considered? What should they do, send a surveyor to your house?
Sound decision making requires metrics. If you opt-out of metrics, you don't get to participate in decisions.
> If you opt-out of metrics, you don't get to participate in decisions.
Would you agree to a Democracy where the government put sensors in your house, then told you which candidate your vote would be counted towards from your behaviour?
And if you opted out of that, you don't get to participate at all?
Sound decision making requires reasonable metrics. To quote the lead dev of popcon when someone said that they couldn't recommend it due to specific concerns:
"If you deal with people with strict security/privacy requirement, you are correct to do so. I would do the same."
> What should do they do, send a surveyor to your house?
Not use data from a mechanism that's controversial with some of the user base to drive decisions? Reach out and ask people, instead of asking their machines?
> Sound decision making requires metrics. If you opt-out of metrics, you don't get to participate in decisions.
Sound decision making requires some kind of metrics and also sound reasoning. It doesn't mean that metrics have to be widely-deployed telemetry. There are alternatives, like doing studies "the old way". Moreover, it can be argued that a lot of problems on today's Internet stem from people vacuuming up whatever data they can find and divining decisions out of it. It's not how you do science.
> Not use data from a mechanism that's controversial with some of the user base to drive decisions? Reach out and ask people, instead of asking their machines?
That would require more time and money that could be spent elsewhere.
> Reach out and ask people, instead of asking their machines?
So you would have statistics biased by not accounting for people who can't be bothered to answer these questions, instead of not accounting for people not using popcon?
You'd have statistics biased towards people caring to express their opinions. You also wouldn't mistakenly assume that because something isn't regularly used it's also not needed.
You are free to set up your own DB if that's a concern for you. It's a totally justified concern, but this is just concern-trolling. Most people would use this either way. People with strong privacy concerns may set up their own DB, but they represent the minority (despite the heavily privacy-biased stance of HN users).
If you're in the minority, you should expect to have to do more work to get the right balance of security and privacy.
I am just criticizing collecting data under the veneer of security or creating a dependence at this point. These are two completely separate issues and the argument for security is used to justify questionable practices.
> If you're in the minority, you should expect to have to do more work to get the right balance of security and privacy.
It is exactly this false dichotomy that I am criticizing here, because it is completely baseless.
You are reading an article that hints to privacy violations.
You can only be secure if your privacy is protected. That is the causal relation between these two needs.
> I am just criticizing collecting data under the veneer of security or creating a dependence at this point.
This isn't a veneer. It's a signature check to ensure a package is what it says it is. Have you seen the hassle npm routinely goes through because of this? A centralized, trusted server is a sensible "default" setting. The fact that they offer a self-hosted solution is a great benefit.
> You can only be secure if your privacy is protected. That is the causal relation between these two needs.
Nonsense. I can have a completely secure system where everyone knows who I am. I can have an entirely insecure system with zero means of identifying the owner and thus remain private.
You aren't just conflating privacy and security, you are implying a causal relationship between them. Both are nonsense.
A reasonable debate about your privacy concerns cannot be had if you take the most extreme response and pretend it represents what you are or what everyone thinks you are.
You are assuming you were branded by a large swath while similarly branding an entire company that is clearly represented by many diverse opinions on diverse products. The debate needs more nuance, lest everyone pull out their broad brushes while also taking offense at receiving a fleck of paint. Conflating the Go proxy with Google's ad business is a perfect way to ignore nuance in the debate, downplaying the real, tangible effects.
There were two faux-sophisticated takes. "See, I told you they were spying on everybody!" and "Everyone already knew they were spying on everybody!" Both were from the same group of conspiracy theorists who believed the US government was spying on everybody even after Snowden's massive leak showed they weren't.
EU was warning about Echelon since before 9/11. The Patriot act detailed everything Snowden "revealed" (we basically only learned the actual names of the programs).
There was genuine shock in Europe that Chirac, Sarkozy, Hollande, and Merkel were all spied on by the NSA. Yes, we were naive conspiracy theorists when we were saying "you know, they gave them the right to do it, so they are probably doing it". But it all fell on deaf ears.
> There was genuine shock in Europe that Chirac, Sarkozy, Hollande, Merkel were all spied on by NSA
The USA (and all major powers) has spied on foreign diplomats and leaders since its inception. This should come as a surprise to nobody. You had earlier claimed that the NSA was spying on everybody, which clearly isn't happening.
I earlier claimed that the Patriot act allows the NSA to spy on anybody, which is correct. And semantics aside, when a court orders Verizon to secretly hand over all the information about calls within the US or where one end is in the US [1], saying that they do spy on everybody is not really far fetched.
I can't believe we are having this conversation on this board, that is supposed to be populated by people who have read that kind of news a bit in depth...
> When a court orders Verizon to secretly hand over all the information about calls within the US or where one end is in the US [1], saying that they do spy on everybody is not really far fetched.
If the data can only be queried with identifiers of people with a reasonable suspicion of terrorist links, that hardly seems like "spying on everybody." Subsequent oversight board reports showed that automatic querying of the data with identifiers not associated with people reasonably suspected of terrorist links was deemed illegal and shut down prior to Snowden's leaks. If they could spy on everybody, there would be no reason to shut down that program. https://www.lawfareblog.com/latest-nsa-documents-iii-governm...
> I can't believe we are having this conversation on this board, that is supposed to be populated by people who have read that kind of news a bit in depth...
I probably saw those movies at an age where I was too impressionable, but growing up with portrayals of US intelligence units like in Enemy of the State, I always assumed that they did spy on everyone (and didn't even think that this would be some absurd view, or that I was being clever with that thought).
10 years ago you'd be called a privacy lunatic if you said that website owners who used Google Analytics were just selling their users' data to Google. The same goes for those using ad scripts.
FWIW, it is very easy to set up the Athens proxy and run it. You could even have it route traffic through Tor if you were so inclined. We use it and are a fan, as it works nicely.
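For anyone curious, a minimal local setup might look something like this (a sketch assuming the `gomods/athens` Docker image and its commonly documented defaults; check the Athens docs for current image names, ports, and environment variables):

```shell
# Run Athens locally with on-disk module storage
docker run -d -p 3000:3000 \
  -e ATHENS_STORAGE_TYPE=disk \
  -e ATHENS_DISK_STORAGE_ROOT=/var/lib/athens \
  -v athens-storage:/var/lib/athens \
  gomods/athens:latest

# Point the Go toolchain at your own proxy
export GOPROXY=http://localhost:3000
go build ./...   # module downloads now go through the local proxy
```

With `GOPROXY` set, module traffic goes to your own server rather than Google's, which is the whole appeal of the self-hosted option.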
> It's really funny to see that yesterday, I was branded as a 'privacy nut' after the release of Android 10 as I was concerned about the privacy issues that are in Android.
What are the privacy issues in Android 10? Anything new compared to previous versions?
I think the biggest one is some bad wording on the marketing around the "share your wifi" feature, where it mentions that you can share "secure" passwords with QR codes now but doesn't mention that the QR code has the plaintext password encoded in it.
Other than that, it seems that privacy controls have been generally improved in Android 10.
Different types of Hacker News articles tend to attract slightly non-overlapping subgroups of people. You'll sometimes find different sentiments on the same topic under, e.g., U.S. politics, Silicon Valley startups, cryptography, or urban life.
> I was branded as a 'privacy nut' after the release of Android 10
You were making baseless accusations about something you know nothing about. Your comment was basically "they market it as being better at handling privacy, thus they clearly are worse with your privacy!".
You didn't raise anything substantial at all in that comment.
Wide-ranging thoughts and examining what-ifs makes you a curious individual. Failure to counterbalance confirmation bias by critically examining and deconstructing your own hypotheses makes you a crackpot.
The main difference between the two is just which paranoid thoughts you give credence to.
This is exactly what happened in the McDonald's "hot coffee" lawsuit. It wasn't some "Karen" who hit a bump while driving. It was an elderly woman (in her 70s, IIRC), sitting in the passenger seat.
McDonalds already had complaints (and some lawsuits) over the (significantly higher than industry standard) temperature of their coffee, so this wasn't exactly out of the blue.
She ended up with 3rd degree burns on her legs and crotch. She asked only for her medical bills to be paid. McDonald's refused, so she eventually took them to court. Even then, she only asked for medical bills (and now legal expenses).
The jury decided that McDonalds was not only liable for those costs, but had treated the woman so poorly that they should pay punitive damages. The massive amount you heard about in the news was based on roughly two days' worth of McDonalds' coffee revenue.
But that's not the story that was spread by the shills...
The most interesting thing to me about that McDonald's "hot coffee" lawsuit is how different the narrative is in popular circles vs. in legal circles. It's a little bit like in Rashomon.
In popular circles, the story is framed in a way that does make the lawsuit look frivolous. But this is also a case that has made it into the legal textbooks as an example of corporate negligence that's clear-cut enough to use for demonstrating the concept in introductory textbooks.
Of course, in the legal textbooks, the version of the story that's told includes a lot of details that, as you point out, get excluded from the popular version.
The reason the McDonald's lawsuit is framed that way in popular circles is because it easily fits the mold for a frivolous lawsuit.
1. A woman willingly purchased hot coffee from McDonald's.
2. She spilled it on herself.
3. She sued McDonald's because of her injuries.
Those three things are true no matter which circle you ask.
A lot of people I talk to, even when given all of the facts of the case still consider it a frivolous lawsuit because they place almost all of the blame on the woman.
I have a feeling that's more to do with some people's reluctance to admit they were wrong about their initial conclusion, and so they cherry pick the facts to support their initial conclusion.
It's mostly harmless to spill regular drinkable-temperature coffee on yourself, so she took the appropriate level of precaution (i.e. very little) for that reasonable assumption. She shouldn't be to blame even if she had showered in the coffee.
To bring it back on topic: when web users are told "we care about your privacy", they should be able to take it at face value. When the company then weasels out of the headline promise on technicalities (we don't share cookies — we share JSON, haha!), then it is violating that promise.
Yes, once again, when an issue is systemic (in this case, all coffees by McDonalds are super-hot), the user is not to blame but the vendor/system designer.
It reminds me of that recent case of 90%+ passengers "putting the oxygen masks wrong" because they are "idiots" - an opinion that seemed to be shared by a lot of people even here on HN, but even more so on Twitter.
To me, someone who's never seen an airplane oxygen mask up close, it's pretty clear that the design of the masks would make people think that this is how you use it, and not over your nose. When you're going down with your plane, fearing for your life, and have like 30 seconds to act, it's not like you have time to think about the "proper usage of an oxygen mask" -- you use it instinctively, based on the design of the thing.
Therefore, those people were not "idiots". Idiots were the people designing the masks in a way that doesn't make sense for humans in an extremely high-pressure situation with little time to react.
Also, as usual, when the vast majority of people do something "wrong", that's the first and only red flag you need to know that the issue is a system design flaw, not a user error one. Same with Apple's "you're holding it wrong" antenna issue, etc.
Why the hell does that article not include a photo or diagram of a mask properly worn? Even despite knowing that it goes over your mouth and nose, the design of the mask makes it difficult to understand how one would actually do that. Seems like a huge, huge omission.
It seems to me that the safety cards in the seat pockets show the masks correctly worn, and the cabin crew demonstrate the proper fit when going through the routine before takeoff.
But I agree with you about high-pressure situations and not ascribing idiocy.
Hmm ... I have never thought about the masks and just assumed they would cover the nose "automatically"; I would maybe do the same thing.
With such small masks, people probably believe they are doing something wrong when the mask isn't tight against the skin, and so they put it over the mouth, where it fits tightly. The masks people are likely to have prior experience with are scuba masks or gas masks, and both need to fit tightly.
Well, when you purchase coffee at McDonald’s and you see that it’s super hot, you can return it or cool it down by blowing on it before drinking it. Not commenting on the legal case, but it seems nuts to blame someone else for spilling hot coffee on yourself, no matter how you slice it.
"Hot" is meaningless in this context. People take "hot" showers and it turns their skin red; it doesn't cause serious burns that require surgery. You can absolutely blame someone else if the coffee you spilled on yourself was served at dangerous temperatures. Also, you can't "see" that it is dangerously hot before it's too late. A lot of things have steam coming off of them and don't require hospitalization if you spill them on yourself.
Well, when I buy coffee or tea, I usually can feel how hot it is by touching the cup. My local coffee shop serves it super hot, so I take the lid off and blow on it until it cools down. It simply would not occur to me to sue them if I injured myself with their hot coffee. I prefer it hot in any case, since you can cool down super hot coffee pretty easily. As I said, I’m not commenting on the legal case. The woman could have very well had a viable case. I’m commenting on the culture.
As a thought experiment, let’s replace “really hot” with “poison”. Do you think it’s reasonable to blame the customer for inadvertently drinking poisonous coffee from a chain when all other chains sell poison-free coffee? What if it comes with instructions like “just wait 20 minutes and the poison will evaporate”—does this sufficiently clear the company in your view?
How is unusually hot temperature significantly different from poison in this situation?
Good point. Though I think there’s still room to distinguish an uncomfortably hot coffee spill from a third degree burn causing one, and it’s not immediately obvious to a casual consumer when you cross that threshold.
I'd suggest just reading about it instead of spewing ignorant nonsense, this is the whole problem with this exact example, it takes like two minutes to read but no one ever does:
> Liebeck’s case was far from an isolated event. McDonald’s had received more than 700 previous reports of injury from its coffee, including reports of third-degree burns, and had paid settlements in some cases.
> Mrs. Liebeck offered to settle the case for $20,000 to cover her medical expenses and lost income. But McDonald’s never offered more than $800, so the case went to trial. The jury found Mrs. Liebeck to be partially at fault for her injuries, reducing the compensation for her injuries accordingly. But the jury’s punitive damages award made headlines — upset by McDonald’s unwillingness to correct a policy despite hundreds of people suffering injuries, they awarded Liebeck the equivalent of two days’ worth of revenue from coffee sales for the restaurant chain.
> The chairman of the department of mechanical engineering and biomechanical engineering at the University of Texas testified that this risk of harm is unacceptable, as did a widely recognized expert on burns, the editor-in-chief of the Journal of Burn Care and Rehabilitation, the leading scholarly publication in the specialty.
I had a coworker from the Netherlands over recently. I think by the end of the trip he thought I was making stuff up trying to correct these misconceptions. I do know he never believed me about the hot coffee lawsuit.
Sample set of 1, but it was odd for someone not from America (indeed, it was his first time in America) to be so misinformed. Not knowing things is fine, but knowing wrong facts, and refusing corrections, was just weird.
It was an eye opener for what I "know" about Europe.
This definitely happens with some narratives. For instance, some news outlets report that some parts of Europe are no-go zones for non Muslims, where Sharia reigns.
Honestly, I don't think about Europe that much. America is huge (both geographically and population wise) and there's so much stuff going on nationally that there isn't enough bandwidth for most Euro stuff.
"Brexit seems like a mess" and "Boy the G7 didn't seem to go well" are the most recent thoughts I've had on Europe.
Mark me down as a person who considers it a frivolous lawsuit. And yes I've read the background.
She balanced the cup not in a cupholder, but between her knees. Even normally hot coffee is going to be bad news if it spills like that - the only difference the increased temp made was that the burns were more severe. But the coffee would not have spilled at all if she was not negligent in handling it (i.e. she should have given it to a passenger to hold or put in a cupholder).
If the coffee had been incorrectly sealed and it spilled out during normal operation of the cup (for example, drinking it) then I'd blame McD. But I don't see why it's reasonable to blame McD just because the coffee is 10 or 20 degrees above normal temp.
They discuss the car not having any flat surface to place the cup. Putting a cup between your legs in that situation is pretty common, if not foolproof.
She went into shock and had to be rushed to the ER. 1/6th of her body was burned. The coffee was 30 degrees hotter, putting it closer to boiling than to the average temperature other machines served at. I dipped my hand in a fresh cup of coffee at home just now for reference. I could hold it fine for 3-5 seconds (which should get me close to a third degree burn with McD).
They also had over 700 reports of previous burns -- doesn't that sound like a systemic problem? The jurors, after seeing images of her injuries and reports of previous injuries, increased the punitive damages. I think the amounts are telling:
Spills are a normal and expected occurrence in the course of life. I've spilled coffee on myself before, but I've never needed 8 days in the hospital as a result.
It is also worth noting that the court in this case did not assign full responsibility to McDonald's; your point was certainly considered in this case.
Are you a lawyer? Because this take is quite remarkable, especially given that the overwhelming public sentiment is that McDonalds was heinously negligent, coupled with a lot of supporting but not entirely factual claims to justify that position. I feel like the same people who were jeering at the victim just marched over to sainting her and demonizing McDonalds.
The Internet extreme position machine. Everything has to be clear cut.
How McDonalds no longer engages in "corporate negligence": they put pronounced warnings on the cup that it's a dangerously hot substance. That's it. They did not lower the temperature (as is frequently claimed), nor is the temperature at all outside of normal industry standards, yet both claims are repeated throughout this thread -- coffee, brewed with boiling water, is hot. You can get a searingly hot cup of coffee from most quick-serve restaurants today depending upon how freshly it was brewed.
This is a case where the solution is more warnings on things.
This case was, however, an example of bad brand management, and perhaps throwing good money after bad for something they could have privately settled early on.
This is certainly not a hill I want to die on, and generally arguing against the prevalent opinion (which is overwhelmingly the one that you and the GP have expressed, albeit almost always positioned like it's contrarian) is self-defeating; however, this whole case is fascinating in how public perception shifts.
I don't know what makes you so certain that general sentiment is anti-McDonald's in this case. I have only ever heard the story given as an example of the frivolous lawsuits brought by overly-litigious Americans. It makes a good example of that ("they sued over hot coffee?"). The alternate framing, that McDonald's made coffee too hot, doesn't make for a very noteworthy or pithy story, so I'm skeptical of your take on the public opinion.
> nor is the temperature at all outside of normal industry standards
According to the facts presented at the trial, it is, at least for the fast food industry. All other fast food chains were found to serve coffee at about 140 F.
> The Internet extreme position machine. Everything has to be clear cut
aka compression machine. optimized to trigger brains' reward circuitry for accomplishment by 'tidying up' unmanageable landscapes of disjointed data into easily stored and recalled bimodal silhouettes of same.
The standard temperature you brew coffee at is 200F, less at high altitude. McDonalds coffee is served at 180-190F and brewed at something higher than 200F and lower than boiling.
Typical coffee serving temperature is 155-175F. I'm considered unusual among my friends for being able to sip 170-180F coffee. 190F is just way too hot.
205F is the ideal brewing temperature, and is just a snip under boiling (212F). Serving temperature is entirely a function of waiting time from brewing, and is commonly between 165-185 everywhere. For take out shops it tends to be higher as there is the expectation that users have a delay before they drink it, and that many add cream or milk.
Again, if you buy a coffee at McDonalds today, or at Starbucks, or virtually anywhere else, you'll get a coffee that can be as hot or hotter. It's a hot beverage.
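Since this sub-thread mixes a lot of Fahrenheit figures, a quick conversion sketch puts them in Celsius for non-US readers (the labels merely restate claims made in the comments above, not independently verified facts):

```python
def f_to_c(f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# Temperatures cited in the thread:
cited = [
    ("Other fast-food chains (per trial testimony)", 140),
    ("Typical serving range, low end", 155),
    ("Typical serving range, high end", 175),
    ("McDonald's serving range, low end", 180),
    ("McDonald's serving range, high end", 190),
    ("Claimed ideal brewing temperature", 205),
    ("Boiling point of water", 212),
]
for label, f in cited:
    print(f"{label}: {f}F = {f_to_c(f):.1f}C")
```

Seen side by side, the gap between ~60C (140F) and ~88C (190F) is what the dispute about "industry standard" temperatures turns on.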
Sadly, from what I've observed over the years, "McDonalds hot coffee" is usually the most immediately-referenced example when the topic of "frivolous lawsuits" comes up. Usually in the context of someone joking they're going to file a lawsuit about some silly thing that would be totally ridiculous to pursue. "Someone spilled coffee on herself and then sued McDonalds when it burned her", as if that's somehow frivolous.
FWIW I think the temperature of coffee is way too hot at most places. It drives me crazy. If I can't drink it immediately after them handing it to me? Too hot!
Normally coffee is not brewed with boiling water, and it loses a lot of heat during the brewing process. So McDonalds' coffee (and that of many other stores) is significantly hotter than normal coffee.
Then again, freshly brewed coffee is dangerous anyway, and it's quite careless to hold it between one's legs. Still, I don't know all the details, and even the defamation she suffered might be enough to justify the punishment.
I don’t know how fast food companies brew it, but frequently coffee shops who do drips and care about the flavor more insist that the most important thing in drip coffee is a temperature that’s only around 10°F/5°C under boiling.
Of course if they do brew that, they won’t give it to you before either adding a bit of cool water (most places prefer to brew strong and then dilute) or letting it cool down (if they just insist on a strong-taste brand).
Coffee nuts are fans of really high temperatures in my experience. They just don't seem to know that you can't taste the difference if your first sip burns off your taste buds.
The grounds and equipment are not at similarly high temperatures, so the coffee going into the cup is significantly cooler than the water being poured.
IIRC at the time the McDonald's stated rationale was that by serving at that temperature it would be at a good drinking temperature when people arrived at their offices. These days that seems ludicrous to me.
> especially given that the overwhelming public sentiment is that McDonalds was heinously negligent
Citation needed. Anecdotally, every person I ever talked to about that case either had a profound eye roll or a shrug about how people will sue over anything.
If I bought a hammer and smashed my finger with it, should I sue Home Depot for selling me a hammer that was too hard and not warning me about it? That’s pretty much the basis of the McDonald’s case: they sold a hot beverage, some lady sticks it between her legs and it spills. A reasonable person could have foreseen that a hot beverage, stored in an insecure manner, could pose a spilling hazard and that hot liquids have the potential to scald. A woman in her 70s has been around enough hot liquids in her life that she should have known the risk of scalding. None of that is McDonald’s fault. It was just easy to manipulate the jury into the award by using emotion and painting McDonalds as some uncaring villain. That case was frivolous— but since it made it to the jury, the old lady won and now we all get to pay more when we buy coffee from restaurants to cover the higher insurance costs restaurants must now pass on to the customer.
It's actually even worse factually... it's not just that the coffee was hot and McD's had prior complaints; there are industry standards, which McDonald's knowingly and willfully violated.
It actually all relates to trucking/truckers who use their own special cups, but demand the coffee be scalding so it stays hot longer on the road. McDonald's catered to these truckers to grow their coffee business, they knew the risks and gave no warnings to customers. I think the employee was also negligent in securing the lid leading to the spill itself.
As you say the coffee was so hot it scalded this woman so badly through her jeans she needed skin grafts. And sure enough there was massive money spent to spin this in the media as "woman burned by coffee sues for millions, everything wrong with our litigious society." Where this should have read: giant corporation disregards safety standards to boost their bottom line to the detriment of its customers leading to boiling coffee that scalded a woman's privates requiring skin grafts, then in bad faith the insurer denies the woman's claim to pay her medical bills, drags her through court which they lose, then they drag her through the appeals process which she didn't want any part of either.
And the money went to Shriners' Hospital, not the burn victim (I saw the photos, they are beyond horrific). What's more, McDonalds had been advised their coffee temperature was dangerous, but decided to ignore this for marketing reasons, because people like the look of steaming coffee.
From a UK perspective I'd always thought of it as a case of a crazy payout. Why does this one woman get several million dollars (although googling reveals this number went down a bit on appeal)? Especially if others had a similar experience. Either there should be a limit on the temperature of hot drinks, or there shouldn't.
The fact a warning is deemed adequate seems weird too, people will actively try and avoid spilling a stone cold cup of coffee on their lap!
It feels to me like the internet moved from shilling for big burger to shilling for big law firm.
The payout is because _without it_ companies wouldn't change what they do. The punitive damages serve as an "example" for other corps to not do the same thing or risk losing that much money, too. If anything, I think punitive damages for corporate negligence should be _higher_. Maybe some of the payment could be directed to funds for victims instead of all going to one person, but that person (and their lawyers) also had to combat a multi-million-dollar defense team.
>But that's not the story that was spread by the shills...
I'm not sure that was the case at the time... specifically there being "paid shills" on the internet.
Concern / outrage over the legal system seems to be a bit of the human condition, I'm not convinced there was a conspiracy required to have folks retell an inaccurate story about that incident.
Sometimes it feels like anything that is perceived to be corporate friendly is called a "shill" as if individuals (mistaken or not) couldn't organically have X opinion.
McDonald's didn't hire any internet shills or anything like that. The media bought into their lawyers' public claims about the case without bothering to investigate anything, and the TV talk-show circuit picked it up.
Unfortunately, you often don't need to try particularly hard to get the media to repeat false claims.
How did the coffee spill? Did she hold a cup of a beverage that is traditionally served very hot between her legs?
That lawsuit was frivolous. Coffee is hot. If you spill it, it might burn you.
As far as just asking for medical bills — that might have been true until her lawyers smelled a multi-million payday.
Lawyers like that are the reason we have warnings on hair dryers to not use while showering and warnings on coffee cups that warn of the hot beverage inside. Coffee is traditionally made using boiling or near boiling water — unless McDonalds super-heated the water, the temperature at which it was served was consistent with how coffee is normally served.
That it was an elderly lady makes no difference. That's just emotional distraction and has no relevance. The McDonalds lawsuit ushered in the "nanny" era where people have to be warned against such hazards as "boating carries a risk of serious injury or drowning." And product costs (including medical care) are dramatically higher than they should be because the trial lawyers all had to get paid. Punishment under statute is one thing, but unlimited punitive damages are absurd. We talk about CEOs getting paid too much -- not even close to the payday trial lawyers get.
This is a bad and frankly cruel reading of a lawsuit that addresses systemic failures of a corporate entity that had been previously alerted to a design and practice failure of their operation. Similar lawsuits regarding coffee at 179 deg F, rather than 190 deg F, have been thrown out. Almost as if there was an actual problem with what McDonalds' was doing, you know?
That corporate entity had decided it was cheaper to not fix it than to fix it, until a human being who they contributed to significantly harming brought the problem before a jury. And she was found partially responsible for her own injuries, a fact which you elide (while speaking so authoritatively that I must assume you know this to be true, otherwise you wouldn't, yes?), while the jury also found that McDonalds' had been reckless in their operation--that most of the money awarded was a punitive penalty assessed because that corporate entity was cavalier in their treatment of actual, real-not-fictive people.
While we're at it, the punitive award ultimately granted was $480,000, not "multi-million"; the jury awarded more (two days of McDonalds' coffee revenue), but the judge reduced it.
You've either been had, or you're trying to do the having. I shall not speculate on which.
It's pretty obvious you've not read up on the facts of the case and have jumped to a few conclusions. Nobody contests that hot coffee causes mild burns; hot coffee that causes third-degree burns in seconds is another matter entirely.
Companies do actually employ shills to go on forums and try to sway public opinion. They call them something like public advocates, doesn't change the idea though.
I'm no marketing expert, but I doubt that the best way Google can think of to sway public opinion is to call people privacy nuts. Google employs over 100,000 people and even more people work at similar companies like Facebook or buy targeted ads. Many of these people browse Hackernews and get offended if others accuse their livelihoods of being evil.
I believe those policies exist. I don't believe they are followed by everyone, and all it takes is a vocal minority. I've seen plenty of comments on HN from people who say something like, "Hey, I'm an employee of X, this is my own opinion, but..."
I regularly am discussing Google's business with Googlers on here. Some of them disclose in the post, some of them disclose on their HN profile. Many do not, and calling them on it is generally discouraged, but it's usually not super hard to figure out or prove.
There are definitely some acting as spokespeople (even if they claim otherwise), but there are also quite a few who are just, you know, incredibly biased fans of their employer, who like to defend them on topics unrelated to their job.
But when a tech company has 100,000 employees, it's not particularly surprising to see a good number of them on a tech forum. ;)
Or that they start every comment with 'I'm an employee of X'
They might just say, 'That seems silly!', 'You're a nut', 'Do you have a source other than yourself'. Or they might just downvote stories that have a negative effect on their income.
You've got 100k people working at google and many of them are going to be active on this type of forum. The same goes for the other big guys that have a ton of employees
There are regular comments by declared employees of quite a few companies, including Google, on HN. The commenting and vote/flag brigading around subjects that center on one of these companies is such that many subjects are either voted off the homepage entirely or the participants in the thread get downvoted/upvoted to better support the company position or narrative.
Whatever else it is, it's one of the more important guidelines of the site (at least judging from the time Dan and Scott spend on it) that you not make appeals like this in public comment threads. What evidence you have of this, you should send to hn@yc.
How is that evidence of "vote/flag brigading around subjects that center on one of these companies [...]" by those people? This story sat on the front page for a day, has zillions of votes and comments and as far as I can tell, even people who work in the field can't actually figure out what it's about.
I doubt the vast majority of employees would get offended that easily. I work for a tech company providing marketing tools (basically, I help send marketing emails (aka spam)), I like the pay, the overall environment and the technical challenges, but I'm under no illusion that I contribute to the betterment of our societies.
If you see these kind of behavior online, it's almost always people paid to explicitly monitor forums and "protect" a company image.
I can attest to that: when you point out things like Google getting caught stealing IP from an MIT student they invited for an interview (a popular thread here on HN), it gets downvoted voraciously.
Either it's employees, hired guns, or something...
You spammed an HN thread with these comments before, which amply explains why you were downvoted.
We've been over this topic a gazillion times on this site: https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme.... Astroturfing exists, but imagined projections of astroturfing are far more common, and whatever the effects of the former may be, the latter has an extremely degrading effect on discussion. In fact it eats up everything if you let it. We don't want that here.
I've personally banned several accounts over the years for propagandizing for Google (for whatever reason—it's impossible to distinguish between a paid agent and an overzealous fan, nor does it particularly matter). But this has been rare, and we don't do it without evidence.
Are you saying I am not able to discuss this true story where it's relevant? In threads where people have discussions that hold Google in a negative light because they chose to act that way and got caught? My real-life story, along with the MIT student's story (in which there's clear evidence of stealing IP), further details more bad behavior by the big G!
Is Hacker News not a place where freedom of speech is welcome? Speech that is helpful to other not-well-connected innovators, who, when invited by the big G, read this and tell the big G: no, I'm not going to take that meeting!
Overall I'm confused as to why you call my story and Jie Qi's story spam? Do you not think her story is true, or mine? Is it not helpful for other innovators who aren't well connected or rich?
The issue is that your comment broke the site guidelines by insinuating astroturfing without evidence. That's the main thing you're not allowed to do here, as I explained, and have explained ad nauseam before (https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...).
Separately, there was an issue with your copy-pasting the same comment about this story repeatedly in a previous thread. That is what I called spamming (not the actual story). Doing that is no doubt what led to your being downvoted, rather than manipulation as you implied above.
In cases where the story is relevant, of course it's fine to discuss it. But I don't see that it's relevant here, and your history of bringing it up in ways that break the site guidelines suggests that the bar for relevance needs to be higher than how you've been drawing it.
But the main point is that you can't make up sinister stories about why you got downvoted ("I can attest...") and post them here. Long experience has shown that nearly all such comments are baseless, and as I just said, they eat everything if you let them. Therefore we mustn't let them.
Ok I understand and agree with the spamming recently.
I'd never known or heard about an astroturfing guideline before. Thanks for pointing it out ... I was trying to start a debate, as I wasn't sure if my comment was downvoted voraciously because of the multiple posts or something else?
As for telling my story and/or Jie Qi's similar one in posts that detail other bad behavior from Google, I wonder how that isn't relevant? Per the upvotes, HN readers must feel it holds relevance? Though you don't think pointing out that Google steals/gets caught stealing IP is relevant to other posts detailing more/other examples of their bad behavior?
Correct, for the most part I don't think that, because it tends to make the threads more generic. For example, in this case the topic was about GDPR and identifiers in real-time bid advertising. It's fine if that leaps a single hop to, say, privacy issues. But if it leaps more than that, to every bad thing that Google has ever done, then the discussion will simply be "Google bad" vs. "Google good", which is far too generic to be interesting. It's a bit like what happens when you blend too many paints together. Everything becomes the same shade of brown. And this problem gets much worse when the things people bring up are predictable, i.e. have been repeated from the past. That guarantees a predictable discussion, which although it may be very agitated, has zero curiosity in it—and curiosity is the whole point of this site (see https://news.ycombinator.com/newsguidelines.html).
Yes, curiosity is indeed what this site is about, and if you see below, HN user "philwelch" was interested and curious about Jie Qi's story, as he wasn't familiar with it. I pointed it out to him, and in satisfying his and other HN readers' interest and curiosity it received 14 upvotes. Aren't the upvotes saying the information was useful, and that their interest and curiosity were satisfied?
Though are you saying I should never talk about Google getting caught stealing IP where it's relevant on Hacker News (have you censored other members before on certain topics)? Doesn't it satisfy readers' curiosity, and isn't it helpful to other dreamers/innovators, ones who don't have Joi Ito (the MIT student's advisor) on their side, in deciding the best way forward when Google R&D sends an invite?
I've been an HN reader and commenter since 2007 and have spoken freely since. I hope I can continue to do so!
Or maybe that particular story didn't really stand on its merits. I never saw the thread, but from my perspective (which is increasingly skeptical of Google), any sort of "stealing IP from interview candidates" theory doesn't make much sense.
I saw that happen to you when you pointed that out in the big Anthony Levandowski discussion a week ago, but I think a lot of that had to do with you posting essentially the same comment at three different places in the same thread. I don't think it started getting downvoted until then.
I have noticed this pattern over several years. A few of the times it trips the flamebait detector and gets buried. Negative stories about other big tech companies don't seem to have the same issue with staying up.
Could also be selfishness. I remember reading an article about leaked internal Facebook discussions where some employees would pressure others not to talk about privacy or sensitive topics because it would hurt the stock price further and therefore hurt all employees. I can't seem to find the article. I can see employees of big companies trying to sway or minimize decisions to feel like they are protecting their future earnings.
I'm fairly confident China (and for that matter Russia, but we all knew that) do this.
As for tech companies, I would honestly mark a lot of that down to fanboyism. Not that I'd be entirely surprised, but I somewhat doubt that Elon Musk, in particular, is actually paying all of those people. And Tesla seems to have the most defensive supporters on the internet. People talk about Facebook and Google for their privacy practices, Apple for their crappy laptop keyboards, but it's a tiny minority that goes after Tesla when they're the company least capable of hiring people to argue on the internet.
(I would also assume that it's a little bit overrated how often this happens, for the sole reason that I spend a lot of time arguing on the internet and never once have I received an offer to go pro!)
I once read a Financial Times article from either 2011 or 2012 (can't remember exactly) where some Israeli military person confirmed that Israel was using what we now call shill accounts (he used a different, more pleasant-sounding term). If Israel does it, I'm pretty sure Russia and China do it, too.
(I don’t have a direct link to said FT article, just a photo I took of said military person’s quote which is somewhere on one of my computers)
> As for paid shills, there is an entire sub-industry of PR called "Reputation management" devoted to this.
Sure. But there are also fanboys everywhere. Between presumption of good faith, Occam's Razor, Hanlon's Razor, and the breadth and depth of fanboyism I've seen online, my default presumption is always "fanboy" over "shill". Similar to your point about nationalists.
I would guess that those are, more often than not, not paid shills but rather just people who like taking devil’s-advocate sides in arguments “for fun”, i.e. just for the challenge of seeing if they can construct a valid rebuttal to a comment. (I say this because I believe I’ve done this myself more than once, and I know my personality-type isn’t uncommon on HN.)
How about people who simply are commenting against what they consider to be bullshit?
I've commented here defending China on some issue a few times in the past. I have no ties with the country (except being on a business trip in Shenzhen for two months), nobody paid or even offered to pay me to do it. It's just I hate bullshit, especially when it becomes popular opinion. Just because China has serious issues doesn't mean it's the Devil and everything it does is evil. It's a country, not unlike Russia or the US.
(Particular things I've been addressing in the past are a) treating a billion+ population as if it was a small country, and b) blaming China for all cheap mass produced crap and its environmental impact, conveniently forgetting that they aren't doing it for fun, but to fulfill orders from Western companies.)
Really confused about b). As I see it, a nation must be its own steward, and we must all be stewards of the world. My understanding is that China is environmentally devastated. If that's true, it is completely the fault of the Chinese, not the fault of those who asked them to do it. Other countries have stricter rules, so companies choose not to do business there. Especially in developing countries that limits growth, but the local environment benefits. Also, the factory orders were coming from multinational corporations; they have no real allegiance with the West, and they disrupt Western political, taxation and legal systems just like anywhere else.
Moreover, consumers don't realize the damage they are causing if you let a corporation completely externalize the associated costs (this is not a uniquely Chinese problem), and markets won't pick the correct winner in those conditions either; who cares if $CHEAPPLASTICGOOD will last a tenth as long if it's essentially free? China has not only enabled western consumerism, it disrupted the natural economic forces that would prevent such a thing from arising in the first place. To the detriment of both parties and the world, all because they didn't respect their environment and thirsted for power. Please explain how this rationale is wrong.
Your points are fair, and I'm not saying China isn't directly responsible for the environmental costs of both their internal and export manufacturing. But it's not fair to shift all of the blame, as some naïve or disingenuous people do, because most of the export manufacturing is either managed by and done for western companies (China being "the factory of the world"), or simply driven by the demand on western markets. In other words, their environmental impact is in large part caused by our consumerism. And if China decided to no longer serve as the world's 3D printer, you're correct that some other nation would probably take up that role - and those same naïve/disingenuous people would then decry that nation for all its environmental problems.
In other words: it's bad that China is offering, but it's also bad for the west to take them up on that offer.
> Also the factory orders were coming from multinational corporations, they have no real allegiance with the West, they disrupt Western political, taxation and legal systems just like anywhere else.
That's IMO just giving up. Corporations are not yet independent governments, they should still be taken to account in the countries they originate from and those they operate in.
> China has not only enabled western consumerism, it disrupted the natural economic forces that would prevent such a thing from arising in the first place.
Not sure what they disrupted. It looks to me like they played by the book, simply arbitraging labor costs like everyone else does in a global economy. Still, China is enabling western consumerism, but growing consumerism is driving further growth in environmentally unsound manufacturing, most of which happens in China. It's a feedback loop, and both sides of it are to blame.
>If China did not enable it, probably some other nation would.
I’ll point the finger at any country who lets their national environment (which is really a global resource; we all share the same air and oceans) get trashed in the name of industrialization to serve consumerism.
Western countries have a system that controls those externalities at least to some extent; in the US we have the EPA, OSHA, and our various institutions of permitting to make sure of that, at a very real cost to businesses and consumers. By not caring about the environment, developing countries absolutely are burdening the species long term. The lack of a level playing field in this regard enables consumerism because you are allowing consumers not to pay for externalities; else we should really be placing tariffs on any country who doesn’t meet at least EPA regulations and probably all of our regulations and laws—building, safety, worker compensation—but that would likely shock the global economic system. This is what “developing” countries disrupt, and why I think that ultimately globalization and environmentalism (which I prioritize personally right now) are incompatible. This is a problematic conclusion, because war is probably worse for the environment and globalism is the only thing that’s made this era so relatively peaceful.
I don’t think you can blame both sides, because on one side you have a relatively organized group of people with power who rule a country, and on the other you have a mass of people with 0 organization and no structural power whatsoever. Blaming the /populace/ of any country for a problem is a very slippery slope indeed.
I feel like on a place like this site where there are occasionally job postings and networking opportunities, there's an element of "my comments are my CV/resume" which biases some people towards defending/advocating for companies just in case it affects their career prospects.
Agreed, I see this often.
To be honest if the devil is far more powerful in a given situation, I find it distasteful to fight for him against the hurt party.
At least in the case of corporate defense, most of them are the "free market solves everything" types who believe that if completely left alone, business will turn the world into some utopia.
And there are also paid propagandists who attack Google at every opportunity, while ignoring far worse behavior by Google's competitors.
I'd really like to know what percentage of online content is authentic, and what percentage is paid astroturf to shape public opinion by business interests and ideological/political campaigns.
> while ignoring far worse behavior by Google's competitors.
That’s because whataboutism is rarely an excuse, if ever. At best it could offer some context. But given the sheer scale of Google’s operations, there's not much context to add. They have the power to do everything on a massive scale: both the violations and the subsequent “opinion shaping”.
The only reason mentioning a competitor would be relevant in many Google related topics is as a distraction. I don’t have to care about the competitors to be 100% correct when attacking Google.
On the other hand I got downvoted into oblivion even when comparing Google to competitors with undisputed data. My subjective experience is that the pro-Google camp is doing more overtime on the interwebs. And it makes sense that they would be more organized since they are working under one umbrella with more or less one voice. I find it hard to believe that all anti-Google parties would manage to reach anything close to the same level of organization and alignment of goals.
I clearly said it’s my subjective experience. On top of that I just heard things the same way you did. Also some common sense.
Is your implication that ideological warriors can only exist on one side, against Google? Or that Google would resort to plenty of things that are borderline immoral or illegal but they draw the line at playing the PR card to make it look better? How about some of the thousands of employees that support the company line?
You have switched between “whatabout the other guys” and “Google never does this but all their ideological warrior opponents do”. The wording alone is already suspiciously aggressive, not to mention the naive implication that a company like Google would not resort to something all of its opponents are (apparently) doing.
Perhaps Google’s real strength is getting people to blindly support their crap. You see, it doesn’t actually have to be a “campaign” if all they did was train people to distract and constantly point the finger at others, pretend everybody else is bad but Google can’t be, and try to censor (via downvotes, reports, etc.) any internet opinion that doesn’t conform. Might sound weird but it’s more or less a literal summary of what you did in the previous 2 comments.
It doesn’t even require a ‘paid agent’ theory to explain. Large companies actively court allegiance from their employees and spend serious effort to retain it by, for example, relentlessly offering mandatory courses in the company’s Cultural Values and evaluating promotions through that lens.
So it’s not surprising that some BigTech employees are going to actively defend their employer’s values on forums.
The best way to understand modern tech company culture is to consider them all very successful cults.
I don't know if Google is different in this regard, but every tech company I've worked at has forbidden me from "actively defending my employer's values on forums". Maybe not in so many words, but at the very least there's a prohibition against commenting on the company's business on the internet. PR depends a lot on message discipline, which is not what you get from random employees jumping into these arguments.
I've even created an account to answer this. It's true that most of these companies have cultural-values trainings and a bunch of policies, but it's naive to assume that the hired top 1% will follow these policies or even take them seriously. They are smart enough not to get caught (as if anybody cares to catch them), but that's about it.
Today's employment model is a no-strings-attached contract: the employment can be terminated for no reason at any moment without notice. This is a double-edged sword: not only does it give the company flexibility to fire anyone they don't like, it makes employees mostly indifferent about the company as long as it cuts the paycheck. A typical software engineer does a 4-year gig, and when the stock vesting plan ends, he moves on to another gig. There is no reason to take seriously the policies of a company he worked at 10 years ago. And as for evaluating promotions through the cultural-values lens: yawn. There's no money in promotions.
For a cult to be successful, its members need to be bound by something more serious than a company values training they watch once a year.
Remember when Reddit investigated the Boston Marathon bombing and discovered that it was Sunil Tripathi? And then Mr. Tripathi went missing and his body was discovered several weeks later?
False. He went missing a month before the bombings:
>Soon after, another redditor named Sunil as a plausible suspect after asserting a resemblance between the suspects in the FBI's pictures and Sunil, who had gone missing a month before the bombings.
I recall that this is exactly what Monsanto was caught doing about a year ago, to cover up the dangerous effects of the chemicals they used (iirc). Paid shills on the internet who had facts at the ready to disprove claims about Monsanto's bad actions, and who were willing to use 'conspiracy theorist' to shut up anyone who didn't believe them. And the facts they used came from legit-looking research, which Monsanto was also exposed as having created fraudulently, by suppressing bad results and elevating 'good' results.
Nation states employ shills to attack other countries' best companies like Google, Facebook, and Amazon. There has been a sustained PR campaign against our companies.
I've seen a huge number of accusations that people espousing my views are shills in some thread here or on reddit. I'm sure there are paid shills, and even in large numbers. I'd bet it's a problem. But we have to be reeeeeeeeallly careful about throwing that label around.
When you're wrong, you're shutting down discourse by effectively saying the other person isn't even a person. This gets us to far more extreme us-vs-them dynamics than would otherwise be possible.
Because literally anything is a fact, when worded the right way. In fact, that's most of what is wrong with the state of our 'News' today. Even little word changes can be used to sway public opinion (ex: 'victim' vs 'drunk woman', in a case an assault victim happened to have been drinking).
I confess that I've also never really understood why people are so outraged by the hypothetical but likely presence of actual shills: it's not like the population of non-shill commenters doesn't already produce every stupid and inane take possible. Shills can shift the composition of votes and comments away from a given board's biases (towards their preferred biases), but what does that change from the perspective of you, the reader? Why would you engage differently with a terrible take from a paid shill and a terrible take from someone dumb enough to sincerely believe it? This seems especially the case on anonymous or pseudonymous fora, where you've already given up hope of leaning on known character as a prior (and IMO that's an excuse for skipping critical thought much of the time anyway).
I think you're overestimating how much people think critically about the news or discussions online. A couple upvoted comments here and there, paid for by a massive corporation, can absolutely influence opinion.
I don't want to sound like I think some "smart" people are somehow immune to this - I think everyone, regardless of their education or intelligence, is susceptible to this kind of group-think.
It seems almost definitional to me that you'd have to be pretty dumb to fall into this trap, but I'm probably being lazy with my terminology, and "bad at thinking" might be a better description than "dumb" (ie, critical thinking is a learnable skill, if you're smart enough).
It's still not clear to me why shills are such a cause for outrage, at least at a personal level: if a person's critical thinking abilities are impaired enough that they'd have their views appreciably changed by shills' voting patterns, the absence of shills just means they'd be blindly following the whims of online mobs. I can see how this is better, but just _barely_.
OTOH, for those who are capable of basic critical thinking, shills don't seem to have much power: if they make good points, then good, if they don't, then no harm done.
I just don't see why bad comments are somehow worse when they're driven by payments instead of simply the stupidity of a normal person.
Critical thinking is one thing, but that's not always or even usually the point of such campaigns. It can be as simple as 'bandwagon effect', which regardless of how silly it is to believe people behave that way, they absolutely do. Few people want to be on the unpopular side of an opinion.
> Few people want to be on the unpopular side of an opinion.
I can't relate to this impulse even a little bit, but believe that it's common. That being said, I don't see how its ubiquity makes it any less a failure of critical thinking ability.
The Internet is a pretty hostile environment for those whose epistemology depends on who's behind the pseudonym: it's not clear to me that shills are going to make these people much worse at thinking and processing information than they already are.
The same goes for those who do have appreciable critical thinking abilities: you shouldn't be credulously consuming the thoughts of any random pseudonym, and the same skeptical habits that inoculate you against dumb ideas from unpaid strangers work roughly as well on shills.
Yeah, I notice this every time there's a story about Facebook or Google, there's always a group of posts that are exclaiming how great their life is since they deleted their account.
> This is the lazy and ignorant method of dismissing other people's arguments.
First, please observe the principle of charity when posting.
> Shill or not, what does it matter if what they state are facts?
Because it is entirely possible to tell large lies using small facts.
Because propaganda and disinformation are actual, real things and are very difficult to combat once unleashed in the commons.
Because the "let the marketplace of ideas figure it out" excuse to not police propaganda and disinformation attacks is hopelessly naive at best, and deliberately ineffective at worst.
I'm John and I have views on privacy that are completely genuine and happen to align with what benefits the corporation(s) that my company, TotallyLegitimateComments LLC, contracts for. I believe advertisement is a force for good in the world that can bring together people and products that enrich their lives while creating value for shareholders! While I'd love to share what wonderful businesses my company works with, various privacy agreements prevent us from doing so. However, I can tell you that they all appreciate the fact that you find their views important! We will continue to lobby your congressmen on your behalf to ensure that these views are reflected in the nation's laws! Thanks, and remember: corporations are people like you and me!
It's like push polling... Poller: "Are you A) a reasonable person who agrees with my totally rational question or B) a psychopath who won't THINK OF THE CHILDREN?" Person taking poll: "Uh, A?"
Is it really a point of view, or is it a carefully engineered sequence of words designed to program people with certain opinions? There are lots of ways to "program" people that bypass their rational mind, and creating a false sense of social consensus is one of them.
Also, I noticed that some submissions unfavorable to Google seem to be flagged heavily (at least, they were posted more recently and had more upvotes, than higher-ranking submissions).
Of course, posting and flagging may not be intentional policy of Google, but also the work of individual Google employees.
It's gotten a little better over the years but still being critical of Google gets you downvoted and flagged quickly.
Also worth highlighting that many HN members work for Google, so they have less incentive to shine light on these issues, sometimes maybe even defend Google.
I would love it for companies to jump into these conversations and defend or explain what's going on. It's less interesting for them to speak through a secret proxy.
That said, I don't think there are nearly as many shills as some people think nor do I think they are very effective.
Not if it's disguised as a private individual (which is the default assumption in discussion forums such as this one).
If a company wants to present its views, they can do so via press releases or clearly mark their posts as such (or via verified accounts like on Twitter).
Honest question: under which law? Does anything actually prohibit companies from employing people from going on forums and shilling? I think it's perfectly fine for the third party platform to have a policy forbidding this, but the government enforcing something like this would not be ok.
I didn't answer the question with laws in mind, as the parent wasn't exactly asking a legal question, but what I would guess would be more a moral one.
IANAL, but since we are on the topic of laws: in recent years influencers (at least in Germany) have had to fear hefty fines if they didn't label sponsored posts as such, as that counts as covert advertising. As astroturfing could be construed as advertising, it might fall under the same law.
I think if you want the point of view of a corporation you should feel free to go read their official blog, press releases, etc. They have plenty of ways to spread their point of view that aren't people pretending to be individual commenters.
A corporation is a collection of people. A corporation doesn't have fingers nor mouth, and can't type nor speak. It is always a (series of) paid individual(s) who types and speaks "on behalf of" it.
Note also that "on behalf of" can only exist in response to some prior meeting (or deal) with certain members (board or employees) of the corporation who wish to push "its" policy.
If they want their view heard, they're welcome to share it like everyone else.
Paying someone else to share a very sanitized, market researched version of your opinion while pretending that they're organically sharing their own opinion isn't that. It's dishonest, and it's eroding honest conversations for the sake of advertising.
What's more likely: that Google's willing to risk being outed for employing shills or that there are people who actually hold the opposing viewpoint?
Personally I tend to think of most people on HN as privacy nuts. Honestly of all the problems in the world to get worked up over, online advertising is pretty low on my list. There's nothing I can do about Google/FB/whatever, so why even think about it.
If Google or whoever wants to pay me to keep posting, I'm listening.
>that Google's willing to risk being outed for employing shills
This risk might be lower than you think, the key phrase is, "we didn't realize that the marketing firm we contracted was subcontracting to other, shadier marketing firms, who themselves aren't evil, but fell in with the wrong crowd and ended up trusting a yet deeper level of marketing firm, who was in fact evil."
> There's nothing I can do about Google/FB/whatever, so why even think about it.
Actually, of all the stuff going on in the world, this is the one thing where you have an actual degree of control that matters.
Sure, one individual won't change business policies. But: you can easily take steps to prevent or reduce the amount of data these companies have on you.
Don't use their services, use an ad blocker, use tracking prevention, etc. You can even easily tailor it to your specific needs.
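A minimal sketch of one low-tech version of this, a hosts-style blocklist that null-routes tracker domains (the domains below are made-up placeholders; a real setup would use a maintained blocklist and target /etc/hosts with sudo):

```shell
# Append null-routed tracker domains to a hosts-style blocklist file.
# The domains are illustrative placeholders, not real tracker hosts.
BLOCKLIST="./hosts.block"
printf '0.0.0.0 ads.example-tracker.com\n' >> "$BLOCKLIST"
printf '0.0.0.0 metrics.example-tracker.net\n' >> "$BLOCKLIST"
# Show the resulting entries
cat "$BLOCKLIST"
```

Browser-level blockers (uBlock Origin and the like) are usually more convenient and granular, but the hosts-file approach has the advantage of covering every app on the machine, not just the browser.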
Compared to stuff like Brexit, fires in the Amazon, Hong Kong, ethnic cleansing in Myanmar, attacks on democracy in Turkey, Hungary etc., it is trivial to do something against Google/FB that has at least the effect of protecting you.
The threat that Google's tracking infrastructure represents is not limited to ads. If they stayed in their proverbial lane and used their panopticon exclusively to sell me things, many of us wouldn't mind as much as we do.
But they lend the power and reach of their machine to governments, whether it's China looking to smoke out dissidents, or the NSA PRISM program, or even the DEA itching to score another weed possession arrest, Google gladly and happily does whatever governments ask of them, frequently with no warrant required.
Exactly. I don't care about getting marketed at (an ad blocker is the simple solution there). What I worry about is the collection and storage of all that data, to be used/sold for x purpose by y organisation in perpetuity.
Yeah, it's so annoying. WHAT PERSONAL DATA? The entire article is basically a rant about "personal data can be leaked" without any examples. Names? Browser Habits? What is it?
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
As listed elsewhere in the thread[0], this isn't something Google is trying to keep under wraps or even trying to obfuscate, so "uncovers" makes it seem like the Brave developer found their secret scheme to undermine GDPR. In reality, the program has been active for a long time (previously called Doubleclick Ad Exchange) and the article is only a call to attention for how this system works, while helpfully recommending the Brave browser.
I would argue that the whole reason for Brave's existence is BAT. It's their business model and I'm surprised that HN normally hates cryptocurrencies with a passion but gives Brave a pass.
First, HN isn't some hive mind that takes stances as a collective. Second, cryptocurrencies are a technology and can be used for good or bad. To hate them is like hating linked lists. Third, there is plenty of (undeserved) hate for Brave here on HN. Lastly, "the whole reason for their existence" is a subjective judgement. As long as they remain open source and don't force me to use their cryptocurrency, I am completely fine with it.
BAT is traded on all the crypto exchanges. You can instantly exchange it for a stablecoin, or even write a script to exchange it for USD every day on e.g. Coinbase. I don't see what the fuss is about.
What is sad is that the EU commission doesn't take real action against Google. At best we can expect a slap on the wrist; at worst, the investigations will drag on and nothing will happen.
Google was already hit with a 5 billion dollar fine last year, so that claim simply isn't true. But I agree with the implicit demand: they clearly haven't gotten the message. The EU should keep slapping them with billion-dollar fines until they learn their lesson.
Sadly mid-level employees would probably be the ones going to jail. The tech lead and/or PM sign off on compliance. I used to joke that one of my responsibilities was to put my freedom at stake.
None of my products violate GDPR in any way I could conceive, but I’d hate to be handed down a mandate to do so. Both not accepting a project and not signing off on it are pretty bold moves.
Even though it's not optimal, maybe that's what it takes? Then top-tier engineers would think twice about working for companies with shady privacy practices and Google would finally have an incentive to better themselves.
Common retort: don't just downvote me, tell me why I'm wrong. I'm a professional who has signed off on legal documents. Why would the buck not stop with me? That's the way it works in architecture and other professions.
I _think_ this rule is intended to discourage "Have an upvote" (and similar) comments, commonly seen on Reddit.
Reminding fellow readers/commenters once in a while not to just shallowly dismiss comments doesn't do much harm, and it actually led me to reply to the parent.
Yep. Privacy violations and leaks should be punished with fines and jail time. Period. End of sentence.
Just like execs have to personally sign for, and are accountable for, their financial statements, they should do the same with privacy. GDPR sets a common, level playing field, so it's completely fair now.
"Privacy violation" is a huge, gigantic category of all kinds of things. One kind of privacy violation is completely different from another; some privacy violations are completely harmless, and some are actually directly harmful, and some in between.
Giving jail time for any of these is like giving jail time for any kind of "offensive behavior". Maybe someone just didn't like what someone said and called it offensive, or maybe someone physically attacked someone else. You don't get jail time just because someone claimed offense, you have to prove harm, and fit the punishment to the crime.
This is why I can't take privacy advocates seriously. Their effort to fight for all privacy undermines the attempt to prevent real harm from specific kinds of information being exposed.
How about systematic, deliberate, deceptive privacy violations in order to increase profits? That seems like a pretty distinct category from the cases you are concerned with.
I'll take your word for it; the evidence doesn't look clear-cut to me.
> privacy violations
Again, who cares if it was just your shoe size? We should not send someone to jail for leaking who your favorite pop star is. Did it, or could it, do harm? This is a nearly universal standard used to assess how someone is punished according to the law.
> in order to increase profits
Of course it's to increase profits, you think they're doing it for fun? Did we stop living in a capitalist economy and nobody told me?
The point is simple, though: they cannot do it to increase profits. Doing it for profit makes it more jarring than, say, collecting extraneous personal information through an error.
Your shoe size point is simply a strawman. You've chosen one arbitrary data point in order to make the argument look less important. In any case, I'm of the opinion that neither Google nor the governments of the world should be allowed to do this kind of large scale surveillance and profiling.
And finally,
> We should not send someone to jail for leaking who your favorite pop star is.
I agree with this completely. However if it's not just my favourite pop star, but it also contains all the articles I've read in the last two weeks, and my age, and what I've recently bought... All of these neat little data points about me, neatly filed in a profile made just for me, then the natural question that arises is: Why do you even have this? Who allowed you to start building this profile on me and on thousands of others? The systematicity and scale of it is hard to argue against.
Whoever cares, cares, and it's none of your business. No one is suggesting that you should not be allowed to let companies collect your shoe size and sell that information, so you are mostly just missing the point.
There are people who do not want to allow that, and there is absolutely no reason why we should force them to allow it. Claiming that what you stole is of little value does not make your theft legal. It is simply not up to you to decide what has sufficient value to keep for other people. If you take something that is someone else's property without them transferring property rights to you first, that is theft, and it is completely irrelevant whether you think that they shouldn't value what you took.
> Did we stop living in a capitalist economy and nobody told me?
If anything, you seem to have a very confused understanding of capitalism. The one core idea of capitalism is strong individual property rights, because that is the basis for decentralized price discovery. Capitalism could not possibly work if the state simply declared that property rights for low-valued goods (based on a valuation decided by the state, presumably) are not enforceable, or if you could simply steal and use your competitor's machinery to produce goods as long as you didn't harm them (like, returned it repaired before their next production run, maybe?).
The fact that someone could make money by violating your property rights certainly never was a justification that allowed them to avoid punishment in a capitalist system.
> Again, who cares if it was just your shoe size? We should not send someone to jail for leaking who your favorite pop star is. Did it, or could it, do harm?
All personal information makes fraud and blackmail easier. Even the seemingly mundane.
I won’t go into details because the relevant law already exists.
This brings me to my second point:
> Of course it's to increase profits, you think they're doing it for fun? Did we stop living in a capitalist economy and nobody told me?
Even in a capitalist society, doing illegal things for money is generally considered worse than doing illegal things without the expectation of getting paid.
> "Privacy violation" is a huge, gigantic category of all kinds of things.
This discussion is about intentional, systematic, large-scale violations of privacy by large corporations. Pointing out (essentially) that sometimes one neighbour tells your other neighbour a rumour they heard about you is simply an attempt to derail the conversation, not a relevant argument.
> You don't get jail time just because someone claimed offense, you have to prove harm, and fit the punishment to the crime.
That's just nonsense. Taking people's stuff without their consent is a crime. No one has to prove any harm for you to be punished for theft. If the theft also harmed someone, they can obviously demand that you compensate them for their loss, but that is a completely separate issue.
Yes, the punishment has to fit the crime. And if you were the head of a multi-billion dollar corporation that made money by organizing the stealing of low-value goods from every person on the planet, that punishment presumably would be quite something in order to fit the crime.
I specifically mentioned the GDPR, not something nebulous or ill-defined. It's a level playing field. It appears Google has violated the GDPR, willfully even. I have no problem with execs going to jail over that.
What about creating and maintaining a global surveillance-capitalism system in which billions of people are tracked, profiled, stripped of their privacy, and manipulated, thus disrupting democracy and civic society?
You are ignoring that the government can and does request data from these companies. You may say 'who cares about the Adidas I bought', but it doesn't stop there: Google knows where you are (location services in Android), who you contact, and what sites you are registered with (synced contacts / Gmail), in addition to what your interests are (are you gay? involved in casual sex? voting Republican or Democrat? planning a protest?).
This level of data collection is wrong regardless of who does it: if a company has this data, the government can get it too, by definition. Once the data exists, it's just a slippery slope.
"(funny how privacy advocates are more upset about Amazon knowing your secrets than the government, who have an actual incentive to use it against you)"
Actually going to disagree with this one. AFAIK, the government has not used their collected information against me at any point in my life as far as I can tell. However, these tech companies are using it literally every day to try and manipulate my decision making.
5 billion is peanuts to Google. If the fine is low enough that it can be written off as a cost of doing business, it's too low. Effective fines devastate profit margins rather than merely crimping them.
Doing some quick googling, it looks like their quarterly revenue is in the 6-7 billion range. I don't believe 20%ish of revenue is in the peanuts range.
Edit: doing some more googling, maybe it's in the 10 billion range; getting better for Google, but I still can't see a company deciding that such a fine is irrelevant.
That's also the point. Profit shouldn't have anything to do with fines. Redress and compensation should. A fine is supposed to make business-as-usual unpalatable.
I was curious what the maximum fine for Google could be under GDPR.
Google's 2018 revenue was $136.22 billion. GDPR allows for a maximum fine of 4% of global turnover.
At a maximum of 4%, GDPR could impose a fine of up to roughly $5.45 billion. I agree that getting something closer to the maximum might help the situation. Although ultimately, my bet is that Google is already baking the 4%-of-revenue risk into its business model. If that's the case, GDPR's maximum penalty should be reconsidered.
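For anyone who wants to sanity-check the arithmetic, here is a quick back-of-the-envelope script using only the figures quoted in this thread (2018 revenue of $136.22 billion, the GDPR cap of 4% of global turnover, and the ~$5 billion antitrust fine mentioned earlier):

```python
# Back-of-the-envelope check of the GDPR fine cap discussed above.
# All figures are the ones quoted in the thread, not official numbers.

GOOGLE_2018_REVENUE = 136.22e9  # USD, as quoted above
GDPR_CAP_RATE = 0.04            # GDPR cap: 4% of global annual turnover

max_fine = GOOGLE_2018_REVENUE * GDPR_CAP_RATE
print(f"Maximum GDPR fine: ${max_fine / 1e9:.2f} billion")
# -> Maximum GDPR fine: $5.45 billion

# For comparison: the 2018 EU antitrust fine mentioned earlier
antitrust_fine = 5.0e9
share = antitrust_fine / GOOGLE_2018_REVENUE
print(f"2018 antitrust fine as share of revenue: {share:.1%}")
# -> 2018 antitrust fine as share of revenue: 3.7%
```

So the existing antitrust fine was already close to the GDPR ceiling as a share of revenue, which supports the point that the cap alone may not be a strong deterrent.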
It doesn't mean anything. This 'trpc' account is the resident Brave hater; every HN story about Brave features trpc making stuff up or just impugning Brave as a scam.
>Brave is certainly the most shameless tech company I've ever seen in my entire life.
without any supporting comparisons or an explanation of why.
When you say
>Shame on HN to repeatedly allow such a scummy company exploits its ranking algorithm again and again.
are you talking about people posting about Brave on HN and exploiting up-votes for it to gain attention? If this is the case again it is just a statement without any proof.
That's just your bias, and no grounds for violating the law. In fact, if it were true, that would be all the more reason to comply. But it is obviously false, seeing as Google can still do whatever it wishes without serious repercussions.
Also, I'm curious to see how an ad network can guarantee that its creatives don't misbehave; this is brand safety played on "godlike" difficulty (sorry for the bad Unreal reference).
Yeah, Carbon seems to be pretty good. I used to see it on CodePen and some other design sites. Always relevant, but Carbon seems limited to developers and designers.
Some other ad thing which doesn't need tracking: Sponsored posts or targeting people who follow someone on Instagram.
One big problem is that ad fraud is rampant. Knowing a person is really seeing the ad is very important, which tracking helps you confirm to a certain degree.
I am so glad that this violation is being exposed. Well done Brave!
I invite any developer / blogger to check out https://CodeFund.io. We are a non-tracking 100% open source ethical ad platform that focuses on funding open source.
It's ok to occasionally link to one's own site in relevant contexts, but only as a small part of using HN as intended—i.e. for gratifying intellectual curiosity: https://news.ycombinator.com/newsguidelines.html
But it looks like you've been using HN primarily to do this. That crosses into spamming, so please don't.
Yeah. Brave is desperately trying to compete with Chrome and can’t figure out why they aren’t making much headway, despite having better privacy (hint: users just don’t care about privacy, except for the HN crowd).
These complaints filed by Brave, with their own employees posing as professional “victims” intentionally grasping at straws for evidence of privacy violations, smack of desperation. This is not the first claim they have filed, and sadly, it won’t be the last. Their claims thus far have been disingenuous at best, downright dishonest at worst.
GDPR was not meant as a weapon with which one could hobble their far more successful competitors. It’s sad to see that a company that claims to care so much about privacy is undermining GDPR by bringing the worst fears of those that opposed it to life.
> (hint: users just don’t care about privacy, except for the HN crowd)
I'm not sure that's true: uBlock Origin has twice as many installations as Brave does. You might say "oh, but that's just blocking ads!" But if you don't block ads, privacy problems are going to spring out of the woodwork like nobody's business. That is, they might not care about privacy by name, but they certainly care about it in effect.
I’d say the vast majority of uBlock users care about user experience. The current ad experience sucks. Most local newspaper sites, for example, are unusable because of ads. But if it still preserved their privacy behind the scenes and didn’t significantly improve their experience, the install base on uBlock and other ad blockers would be near 0.
And I'd say the opposite. My grandfather just wanted ads gone; user experience isn't even on his radar. I've seen the thesis you've presented here before, and while it sounds plausible, I don't think it's as cut and dried as it seems.
People really do hate ads. We're inundated with them, constantly. Low grade psychological assault, at all times. I don't blame people for wanting a respite.
Actually, you’re saying precisely the same thing I was. When I say “user experience,” I am talking about the ads being gone. No ads = better user experience.
Google has been very naughty in the past few years. It does look like they removed the “don’t be evil” motto for a clear reason. And people thought investment banks were bad.
Below are the google_gid values for different publishers; there is no proof of overlap. They have different google_gid values for the same person, which is exactly what Google describes. [1]
I don't understand what Brave claims.
[1] https://developers.google.com/authorized-buyers/rtb/cookie-g...
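To make the claim above concrete: a scheme like the one Google's cookie guide describes can derive a different pseudonymous ID per publisher from a single internal ID, so two publishers cannot join their records by comparing values. Here is a minimal sketch of that idea; this is purely illustrative, with hypothetical function and variable names, and is not Google's actual algorithm:

```python
import hashlib


def per_publisher_id(internal_user_id: str, publisher_id: str) -> str:
    """Derive a publisher-scoped pseudonymous ID (illustrative only).

    Because the publisher ID is mixed into the hash input, the same
    user yields a different google_gid-style value for each publisher,
    so the IDs cannot be matched across publishers by comparison.
    """
    digest = hashlib.sha256(f"{internal_user_id}:{publisher_id}".encode())
    return digest.hexdigest()[:22]


# Same user, two publishers -> two unrelated-looking IDs
id_a = per_publisher_id("user-42", "publisher-A")
id_b = per_publisher_id("user-42", "publisher-B")
assert id_a != id_b

# Same user and publisher -> stable ID on repeated requests
assert id_a == per_publisher_id("user-42", "publisher-A")
```

Of course, the party that holds the internal ID can still recompute or store the mapping, which is what the debate above about the "workaround" database is really arguing over.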