Losing the war against surveillance capitalism by letting Big Tech frame the debate (salon.com)
301 points by jrepinc on June 21, 2020 | 186 comments



The proposed solution — banning the purchase or sale of personal information — doesn’t go nearly far enough. It would just cement the existing monopolies on personal information.

Here is a better framework:

- Make it illegal to collect or store personal information on anyone you don’t have an active business relationship with.

- All personal data collection should require an opt-in. The definition of opt-in is that opting in takes at least one more manual step than opting out, and that the options must be presented with equal prominence.

- Selling or purchasing services based on third parties' personal information counts as selling or buying the information in the eyes of the law.

- Buying and selling personal information would be illegal unless the person explicitly authorizes the transfer each time the data is transferred.

- Civil penalties for breaking the law would range from 1% to 10% of global income per transaction, with a cap of 20% of annual gross income per month. (So one mistake doesn't completely destroy the company, but ignoring the law does; a rough worked example follows this list.) Class action waivers or binding arbitration would be disallowed, unless the person whose information was sold could choose the arbitration agency after the fact.

- Authorizing the transfer of information must not be a requirement to receive some other service, etc.

- A central registry of all opt-ins would be available so individuals could audit their opt-ins, with enough information to assign blame for fraudulent opt-ins (and to protect buyers/sellers when one side of the transaction falsified an opt-in). The registry would also have a single page where people could cancel all of their existing opt-ins.
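
A rough sketch of how that penalty schedule might work, assuming (for simplicity) that a single annual global gross income figure is used for both the per-violation percentage and the monthly cap; all numbers are invented for illustration:

    # Hypothetical sketch of the proposed penalty schedule: 1-10% of global
    # income per violating transaction, capped at 20% of annual gross income
    # per month. All figures below are invented for illustration.

    def monthly_penalty(annual_gross_income: float,
                        violations: int,
                        rate_per_violation: float = 0.01) -> float:
        """Return one month's penalty, applying the 20%-of-annual-income cap."""
        assert 0.01 <= rate_per_violation <= 0.10, "rate must be between 1% and 10%"
        uncapped = violations * rate_per_violation * annual_gross_income
        cap = 0.20 * annual_gross_income
        return min(uncapped, cap)

    # One slip-up at the minimum 1% rate stings but is survivable...
    print(monthly_penalty(1_000_000_000, violations=1))    # 10,000,000.0
    # ...while wholesale ignoring of the law hits the monthly cap.
    print(monthly_penalty(1_000_000_000, violations=500))  # 200,000,000.0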

That would start to solve the problem.


I have a somewhat similar solution: Personal information should be automatically "copyrighted" by the owner of the information.

Not literally, but the legal framework would be very similar to copyright. Any third party that wants to use private information for any purpose must get explicit consent from the data subject, for example, by paying them for a year-long limited license to use the personal information for certain purposes only.

There could be certain privacy rights that it would be illegal to contract away, so perhaps it is not legal for a company to sell personal data to third parties -- if they want the data, they have to contract with the data subject themselves. This framework would support "data co-ops" or "data unions" that collectively represent groups of data subjects and bargain on their behalf.

There are some complications and drawbacks to my proposal, for example, lots of private information involves multiple parties and they would have to somehow negotiate. But something along these lines would really help shift the balance of power back toward individuals.
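
For what it's worth, here is a minimal sketch of what the year-long limited license record imagined above might look like; every field name and value is invented for illustration, not part of any real scheme:

    # Hypothetical "copyright-like" personal-data license record.
    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import List

    @dataclass
    class PersonalDataLicense:
        data_subject: str                 # the person the data is about
        licensee: str                     # the company licensing the data
        permitted_purposes: List[str]     # narrowly scoped, explicit uses
        start: date
        end: date                         # e.g. a one-year term
        compensation_usd: float           # paid to the data subject
        sublicensing_allowed: bool = False  # onward sale stays off the table

    example = PersonalDataLicense(
        data_subject="Jane Doe",
        licensee="ExampleCo",
        permitted_purposes=["service personalization"],
        start=date(2020, 7, 1),
        end=date(2020, 7, 1) + timedelta(days=365),
        compensation_usd=25.0,
    )
    print(example)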



Man you gotta provide some context for stuff like this. Posting random links is a bad idea -- could be a virus link, or a rickroll, or just a poorly cited source.

Luckily, it's a decent article and pertinent to the discussion, so you'll get an upvote... but for the people reading the thread:

> REGULATING DATA AS PROPERTY: A NEW CONSTRUCT FOR MOVING FORWARD

> JEFFREY RITTER AND ANNA MAYER


Thanks for the link!


I wrote this in my profile 3 years ago:

I think (2017-02-22) that user data should be treated in much the same way as nuclear raw materials. I would suggest considering a User Data Non-Proliferation Treaty along the lines of https://en.m.wikipedia.org/wiki/Treaty_on_the_Non-Proliferat...


Under your proposed framework would encyclopedias need permission from the subjects of their biographical entries in order to include those entries?


If the information is already publicly known, maybe that's ok?


It's mostly the act of publishing information somewhere that makes it publicly known. If you haven't personally published it, it's personal until someone else publishes it.


That's a fair point.

In places where there are reasonable privacy protections, there is a distinction made between regular people and public figures. Perhaps public figures get weaker protections under this proposal as well?


"Make it illegal to collect or store personal information on anyone you don't have a business relationship with."

Let's assume we are going to go ahead with this idea. (FWIW, I think targeting collection and storage is the right starting point.)

Would it make more sense to make it a criminal offence, a civil one or both? The word "illegal" usually refers to criminal, not civil law; however, the comment refers to "civil penalties". We should also think about who is going to enforce the law, who will pay the cost of that enforcement, and how unauthorised collection and storage would be discovered. Who enforces criminal law? Who enforces civil law?


We call our attempt at this a Declaration of Digital Human Rights: https://declarationofdigitalhumanrights.org. Any feedback is welcome.


I agree wholeheartedly. I have been saying for years that it's time for a privacy bill of rights in the US. It needs to be anchored to individual rights or it's toothless.


As a counterpoint to this, many businesses cannot exist without maintaining a CRM database of who their customers are, what acquisition channel they came from, how they have responded to new products / promos / pricing / page designs, direct customer survey data, what payment methods they use, what help center issues they've had, what behavioral clickstream of events they used to navigate a product / store / web page, and how all these things vary by time, by seasonality, by gender, by age, by location, by income bracket, and more.

Even if the company has zero intention of selling that data anywhere else, it's literally a bare-minimum requirement for not going bankrupt and laying off all their employees after losing to competitors or failing to correctly measure and respond to intricate market signals.

I am a deeply privacy-conscious person, but it does annoy me when the response to data brokerage or the selling of personal data is to go way too far in the other direction and vilify, or try to make illegal, the huge amount of data capture that is a non-optional, bare-minimum requirement for any modern business to have any hope of not failing.


Businesses have existed for a very long time without databases full of this information. Especially to the extent you’ve detailed.


Firstly I think you are actually wrong in your assertion. Businesses have been at the forefront of computational techniques for accounting, inventory and customer tracking for hundreds of years.

More importantly, though, the fact that there was less technological tracking of customers in the past does not matter at all.

What matters is if your competitors have access to these tools. This is a reality of modern business so unless a data law perfectly eliminates this possible data capture for all businesses, any business that does capture that data will have huge advantages over competitors.


To be able to do that we first need progress on the meta-problem of too-concentrated power. Social media is a good start – it's letting the people unionize against the state. I think the people have far more power than they realize.


It's not just the extraction of data, it's also the optimisation methods:

- Make it a legal requirement to display on the website what optimisation techniques are used and what metrics are optimised (a hypothetical sketch of such a disclosure follows this list).

- Create recommendation systems/search engines/web browsers as a public good, with utilities that are more humane and as such reduce the effects of addictive and harmful software in the medium term.

- Legislate on psychoactive software with the same fiduciary responsibility and oversight as applied to doctors and surgeons.
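
As a purely hypothetical illustration of the first point, a required disclosure might look something like this; the techniques, metrics, and field names are all made up:

    # Hypothetical optimisation-disclosure manifest a site might be required to publish.
    import json

    disclosure = {
        "optimisation_techniques": [
            "A/B testing of feed ranking",
            "reinforcement learning on notification timing",
        ],
        "optimised_metrics": [
            "daily active users",
            "time on site",
            "ad click-through rate",
        ],
        "last_updated": "2020-06-21",
    }
    print(json.dumps(disclosure, indent=2))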


That sums it up perfectly. I'm sure that Ron Wyden would agree. It would be great for this position to get some momentum.


The GDPR does many things on your list but it’s sadly hardly enforced.


I think it has just had a slow start. I think enforcement will come.


I hope you're right, but I'm worried you're wrong.

It seems like a particularly difficult law to enforce, and the unusually high penalties give these companies an unusually strong motivation to be as aggressively unaccommodating as possible, since every enforcement action is an existential threat.

These companies are already big enough that normal legal proceedings against them are difficult... GDPR proceedings are going to be impossible.


This made me go look for a list of past successful enforcements, and it turns out one exists! It gives me hope for the future of the GDPR: https://www.enforcementtracker.com/


Honestly I think the main problem with the privacy debate is that most people have no imagination about how their data can be used against them, and the response you often get, even from people with years of experience building software, is the now-classic "I have nothing to hide".


I could spend hours trying to explain to people why data harvesting is a bad thing, and most of them still wouldn’t care.


Add top 5 reasons here. I'm one of the people without imagination.

Genuinely curious.


1. If an insurance company finds out that you’re more predisposed to a disease, they will charge you or your employer a higher rate

2. The Fed gov't has ~20,000 laws, more than it can count. Can you tell me that you haven't broken a single one? Could a creative prosecutor find something you're guilty of if they had enough information on you?

3. In 2012 (edit: 2010) Target sent a teen girl ads for baby supplies; the father angrily asked the store manager why they were sending baby supply ads to his daughter. When he found out his daughter was pregnant, he apologized to the store manager. Target had a good idea the girl was pregnant because of the items she bought in their store. How would you like it if corporations knew details about your family before you do? (Pro tip: they already know things you don't know)

4. Organizations are not monolithic, they’re full of people who are constantly moving around and organizational priorities change. Any restrictions on use and protections will be ignored eventually and changed in privacy policies, which they have complete discretion to change.

5. The world is filled with clever and unscrupulous people, they will never stop finding ways to use your data in ways we can’t imagine, not all of them will be in your favor.


I think the problem with your argument is that most people don't think those things would apply to them, or they wouldn't care if they did. And in their defense, at least in the US, most of the examples you give are hypotheticals. Possible rejoinders:

1. In the US, the Genetic Information Nondiscrimination Act already prohibits this. For other things, like "are you a skydiver", the insurance company already has the right to ask you that.

2. I mean, sure, I'm sure I've broken some laws, but I can't think of any examples of overly aggressive prosecutions of people who weren't already guilty of bad shit.

3. I mean, people are already used to creepy hypertargeted Facebook ads; most of them don't really care anymore.

4. I mean sure, but how does this affect me?

5. This just sounds like a hand-wavy slippery slope argument.

I think the issue for most people in the US is that they still fundamentally believe that the justice system is generally "just", at least with respect to themselves, though that is obviously changing. Contrast that with Germany, where people are vividly aware of what happens when the government uses personal data nefariously.


Your response to point 2 made me gasp. Just watch a few episodes of any of the many true crime false conviction series on Netflix. Unscrupulous prosecutors are not rare, and it does not matter who you are—if you get caught in the crosshairs you could go to prison for a crime you didn’t commit, or be coerced to provide false witness against a friend or loved one for a crime they didn’t commit.


Netflix is definitely a reliable and objective source for crime data.


Fact is stranger than fiction.


I watched that series and if we were in a ubiquitous surveillance state few if any of those people would have been convicted. They would have had video/phone tracking to verify their alibi and/or video of the crime showing they didn't do it.


That's making a big assumption that everyone has equal access to surveillance data and that most if not all players are fair-mindedly searching for the truth.

If you assume the other way, pervasive surveillance makes a frame-up easier.


I live in a country where the accused has equal access to the evidence.

And your rejoinder doesn't make sense. If you were being railroaded for a crime, video showing that you didn't do it can't hurt you.


In many US jurisdictions, the decision over what police body cam footage to release is left to the police themselves. For example:

https://www.denverpost.com/2019/11/18/body-camera-aurora-pol...

> Police Chief Nick Metz said he wants to release body camera footage from the August death of Elijah McClain to lend more transparency to the investigation. But three months later, police still have it — advised by the district attorney’s office to keep it in house.


> In many US jurisdictions, the decision over what police body cam footage to release is left to the police themselves

In the UK as well, there have been several cases where the police have chosen only to disclose "relevant" evidence; when the existence of additional evidence becomes known, the case collapses.


I said "the accused", not everyone. If you are charged with a crime and if that footage could be evidence then it is required that the state turn it over to the accused.


Only evidence produced in court. Body cameras can "shut off" or "break", the full footage can be destroyed, etc.


> I live in a country where the accused has equal access to the evidence.

Really? Where? Because in most countries with laws requiring that, police and prosecutors regularly ignore said laws with impunity.


For many people in the U.S., overzealous prosecutors are their allies so long as the people they are prosecuting who are "already guilty of bad shit" belong to undesirable demographics. Prosecutors and their selective enforcement efforts are central to suppressing minority votes through felony disenfranchisement, for instance in Florida.

I wonder how the arguments against surveillance capitalism poll across targeted minorities vs. the general population. I speculate that sentiments diverge in the same direction as sentiments towards the police.


I'm sure there are bad prosecutors, but Netflix is hardly a source to look into something like this. They tend to make very compelling, but obviously biased, documentaries.

But I agree with your sentiment otherwise.



This is a great example, though, of why most people won't care. Most people will look at that and think "Swartz hacked into a system, stole data, and distributed it for free. Sure, the prosecution may have been harsh, but I don't go around hacking into systems."

To be clear, I don't agree with that, or any of the 5 points I put above. But I still think it's important to emphasize that 90+ percent of people will look at that and similar cases and think "that situation doesn't apply to me."


> 1. In the US, the Genetic Information Nondiscrimination Act already prohibits this.

That only applies to genetic data, and only to health insurance. There are several other ways for your insurer to find out everything wrong with you, especially when many medical practices sell your data, and genetic information can still be used to discriminate in disability insurance, long-term care insurance, life insurance, etc.


>I think the issue for most people in the US is that they still fundamentally believe that the justice system is generally "just", at least with respect to themselves, though that is obviously changing.

I think you highlighted a bigger core issue: people think that the past was somehow better and what they're seeing in their time is unique. The justice system isn't changing, it's been like this for at least a century, probably longer. America hasn't just elected its first rabble rousing demagogue for a president and now the country is going to shit, this has happened before. Multiple times.

This is worrisome to me because it causes emotional overreactions to situations that aren't really that unprecedented. We elected Donald Trump, and obviously we are special so we have to make sure this doesn't happen again. The punchline is that we are ignorant of the past and are unaware that the republic has carried on just fine after previously awful presidents, and without that knowledge, we may make a number of bad decisions that negatively affect the future of this country.


>1. If an insurance company finds out that you’re more predisposed to a disease, they will charge you or your employer a higher rate

I've never really understood this one. Do you want to force insurance companies to spread the costs associated with insuring high-risk individuals across their whole clientele via this information opacity? Why not just use a single-payer system instead of that Rube Goldberg machine at this point?


I don't think that Target thing happened in 2012, iirc I first heard that story in 2010.


People would just say "I don't think that would happen" to all of those things. They are not tangible enough. If anything, they would laugh at 3 like they would to any "funny" story.


>If an insurance company finds out that you’re more predisposed to a disease, they will charge you or your employer a higher rate

That makes sense. Do you really expect an insurance company not to charge a higher premium to insure someone who is predisposed to a disease?

>The Fed gov’t has ~20,000 laws, it has more than it can count, can you tell me that you haven’t broken a single one? Could a creative prosecutor find something you’re guilty of if they had enough information on you?

Then the issue is with too many laws

>How would you like it if corporations know details about your family before you do?

Doesn't matter

>The world is filled with clever and unscrupulous people, they will never stop finding ways to use your data in ways we can’t imagine, not all of them will be in your favor.

This will always be the case, with or without surveillance.


Thank you for writing that. I'll ponder them.


Honestly I wish health insurance companies would be more willing to use data for billing. Someone who works to stay fit should be rewarded vs a fat individual or someone doing extreme sports. I know some insurance pays for health monitoring watches and gives bonuses, but mine still doesn't.

I don't support insurance billing more for predisposition to disease, but that's already covered under the law so your point 1 is kind of irrelevant.


> Someone who works to stay fit should be rewarded

Rewards to incentivize healthy behavior seem like a good idea.

This study about the impact of an incentive program for preventive care concluded: "Voluntary participation in a patient incentive program was associated with a significantly higher likelihood of receiving preventive care, though receipt of preventive care among those in the program was still lower than ideal."

https://www.ajmc.com/journals/issue/2014/2014-vol20-n6/impac...

> predisposition to disease

What about pre-existing conditions?


People who die earlier from poor lifestyles and habits incur a smaller lifetime cost for healthcare, because the vast majority of health expenses occur only when a person lives long enough to become elderly and die from natural causes.

The idea of insurers being willing to pass the savings, if they existed, down to their customers in any significant way is cute, though.


>Someone who works to stay fit should be rewarded vs a fat individual or someone doing extreme sports.

What about those who work to stay fit but are still fat according to their BMI? Not everyone gets dealt a good genetic hand.


>2. The Fed gov’t has ~20,000 laws, it has more than it can count, can you tell me that you haven’t broken a single one? Could a creative prosecutor find something you’re guilty of if they had enough information on you?

People should read "The Trial" and internalize it (and ponder what historical context prompted Kafka to write it).


1. Isn't this solving the wrong problem? Rather than hiding this information from the insurance company, we can make - and have made - it impermissible for them to use the information as the basis of setting rates.

2. Again, solving the wrong problem. Also, coming to the attention of a prosecutor with a sufficient vendetta to go to various companies to try to dig up dirt seems rather far-fetched. And if they couldn't use personal data for that purpose, wouldn't they find another avenue?

3. If I had kids, I'd want them to be protected - it certainly seems reasonable that children's personal data be protected. Once they become adults, though, I don't see why a parent should have any control over their adult children's behavior.

4. Firstly, you're assuming that your interlocutor shares your concerns about the uses to which that data could be put. Second, data goes stale, meaning it ceases to be valuable or potentially harmful.

5. And the world is also filled with clever and scrupulous people who want to use personal data in ways we can't imagine to everyone's benefit. Shutting down collection and use of personal data prevents those beneficial uses as well as any harmful ones (indeed more so).


It's all about understanding behavior. Humans, all humans, anywhere in the world are creatures of habit and routine... Once you understand that, you can start to exploit it. And it doesn't matter whether it's being exploited right now: the data they're collecting is there forever, and as ML algorithms get better, their understanding of that data gets better too.

Let's take a concrete example: the Cambridge Analytica case. Say, for the sake of argument, I've determined that blue-collar workers between the ages of 30 and 40 living in major cities have a 40 to 60 percent chance of voting Republican. Now, if I take all of those who have been to a church at least once, or who have a close relative who has, and bombard them with ads claiming the Democratic candidate supports abortion (whether that's true or not), the Republican candidate suddenly has much better odds. Of course I'm oversimplifying, but this is not science fiction; it was done with very good efficiency in different countries, and it's just one example...
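
To make that concrete, here is a toy, purely hypothetical sketch of that kind of audience segmentation; every field name, threshold, and sample record is made up:

    # Toy sketch of demographic micro-targeting as described above.
    # All field names, thresholds, and sample data are invented.
    voters = [
        {"id": 1, "age": 34, "occupation": "blue_collar", "city_size": "major",
         "p_republican": 0.52, "church_contact": True},
        {"id": 2, "age": 58, "occupation": "white_collar", "city_size": "small",
         "p_republican": 0.30, "church_contact": False},
    ]

    def target_audience(voters):
        """Select the persuadable slice matching the profile in the comment."""
        return [
            v for v in voters
            if v["occupation"] == "blue_collar"
            and 30 <= v["age"] <= 40
            and v["city_size"] == "major"
            and 0.40 <= v["p_republican"] <= 0.60   # the persuadable band
            and v["church_contact"]                 # proxy for the wedge issue
        ]

    for v in target_audience(voters):
        print(f"send wedge-issue ad to voter {v['id']}")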


Your concrete example is nothing more than voters getting the information most pertinent to them. That's not a bad thing.


You're kinda proving my point... but let me ask you this: are you ok with mass manipulation and profiling?


Umm, that's only some of the pertinent information. It might also be pertinent that the other candidate cheated on his wife with a male prostitute and used campaign funds to pay for an aide's abortion, but I'm pretty sure they wouldn't put that in the ad as well.

Incomplete or biased information can be worse than no information...


Social standards are changing all the time. Maybe there's something you're blind to that'll be considered really offensive by 2040 standards. I know my views on what's offensive have changed over the last twenty years. Even something like digging up and releasing an off-color joke can have serious consequences. It's easy to say "don't ever be a dick by today's standards," but it's much harder to say "don't ever be a dick by tomorrow's standards." Why risk it?


I will help you out.

Let me tell you a story. During communism, my grandmother openly disparaged our great and beloved leader Janos Kadar (https://en.wikipedia.org/wiki/J%C3%A1nos_K%C3%A1d%C3%A1r) in our town. She was called in to Budapest, to a police station where political enemies were known to be tortured as punishment. She was asked if she really said X/Y/Z about Kadar. She went on a loud rant about how Kadar is [insert disparaging comments here]. The policeman went pale and ordered the crazy woman to be taken away and sent back to her town before anyone even heard such heresy - to his judgement, the woman was obviously crazy to show such defiance in that political climate.

If you think that kind of society won't ever happen again during the next X thousand years while the elite have unlimited power to surveil - or that, if it does, we will have any chance of breaking out of it - well, I would like to advise you to never go near casinos.

Edit: corrected genders, no genders in my language.


> If you think that kind of society won't ever happen during the next X thousand years

It's happening today...


"It's 10 o'clock. Do you know where your children are?

... Google knows."


Off the top of my head, the risks of identity theft and blackmail are probably the biggest reasons why privacy is important. This is especially concerning when you realize how often "secure" data leaks and becomes public.

https://en.wikipedia.org/wiki/List_of_data_breaches


The main assumptions you need to be wary of data harvesting are:

1. (Some) people will act against your best interest for their own gain (most often to get money, but, sometimes, to undermine people or ideas they do not like)

2. There are always things that you believe/stand for (pro-abortion, pro-BLM, anti-vaccine, anti-pesticide, pro-house garden, etc.) that someone somewhere disagrees with. These are increasingly identifiable and actionable on.

3. There are always traits (skin color, medical disposition, height, IQ, etc.) that are identifiable and therefore actionable on.

4. Modern data collection is much less visible and obvious than it's ever been. This means the people doing this collection and the people acting on this data are subject to less societal and legal pushback than before, and receive this pushback much later along the line (meaning they can build more momentum behind whatever they're doing before getting called out, if they ever are)

5. Data is never secure indefinitely. Eventually, there's a good chance it's sold to the highest bidder, regardless of who that bidder is or what their intentions are.

With the above assumptions, we can see people are more empowered to discriminate against and take advantage of others. They are also more easily able to remove civil liberties without getting called out.

I think some people believe that the people who would do this kind of thing are few and far between, but it only takes one person with weird ideas and some clout getting this kind of capability to make life miserable for a group of people. And people probably underestimate the number of people with low empathy in the population.

Here is an extreme but real world example of what can happen: https://en.wikipedia.org/wiki/Dehomag


Just study history, information is power.

Data harvesting and processing technologies were what enabled the efficiency of the holocaust.

https://en.m.wikipedia.org/wiki/IBM_and_the_Holocaust


Related: The Netherlands used to include religion in its national records. This allowed the Nazis to have a comprehensive list of all Jews.


Cambridge Analytica: surveillance enables psychological manipulation, which really hurts democracy.


Yea, it’s not just about privacy. Many companies will charge you more money or refuse service if they find out specific information about you.


The entire notion that paying money will make companies stop selling your data is magical thinking. What better signal does an advertiser want than somebody who spends money, especially on something relatively frivolous?


Absolutely true. Although it’s true that users have become the product through advertising, making users pay won’t make them any less susceptible. The magical thinking this idea is based on is that market forces will solve the problem. The theory is that if I, a user, find out that a company is selling my data I will leave the service and go to another one, voting with my wallet and causing the whole system to value privacy. This depends on 3 assumptions:

1. There is symmetry of information in the economy (users will know when their data is being monetized)

2. There are many providers of a service with different offerings

3. The services are commoditized (there is nothing keeping me from interchanging one for the other)

None of those assumptions are true for all services and only a few services may meet all 3. #1 is the least true IMO and if you want it, it’s something you’ll have to pass legislation and regulate for. There’s not much you can do for #2 except make it easier for people to make services, sometimes economies of scale and network effects mean that it will only make economic sense for there to be 1 player in a market, in that case there should be heavy oversight on that player, such is the case with public utilities. #3 would require standardization, this can be done by industry, but they have every incentive at this point to silo their users. Public and Gov’t pressure could make this happen similar to the formation of the MPAA as a hedge the movie industry made against gov’t regulation.


I think there is a sneaky #4: just because some service you're using meets your privacy criteria today, there is really nothing stopping them or their acquirer from deciding to sell or exploit your information in the future. TOS seem to be kind of a one-way street.


That's not true though; the fact that some service I'm using meets my privacy criteria today means they don't have any information to sell or exploit in the future.


How many services you use have your email address, telephone number, real name, or street address?

All of these have value, but are needed for various aspects of business (sending purchases, receipts, contacting you on case of issues with your account, etc).

For that matter, the compiled list of actions you take on a site has value. The only way not to provide a service with valuable personal information is not to use it.


In the magic world where capitalism's "vote with your wallet" actually works (which as vegetablepotpie points out is fictional), email addresses given to corporations have the form "[random base64 UUID]@generic-mail-service.example.com", phone numbers are similar but base ten, businesses don't distinguish between 'real' names and "John Q Smith", and mailing addresses (given to corporations) are of the form "[random base64 UUID] care of [US postal service or a more cooperative competitor thereof]".

(I guess this technically assumes that voting with your actual vote also works, at least for the purposes of removing abusive know-your-customer laws, but lack of gratuitously harmful regulation seems in line with the libertarian-esque philosophy behind "vote with your wallet", so... eh.)
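
For what it's worth, a tiny sketch of generating such per-merchant throwaway aliases; the domain and format are placeholders, not a real service:

    # Hypothetical per-merchant contact aliases, as the comment imagines.
    import secrets

    def make_alias(domain: str = "generic-mail-service.example.com") -> str:
        """Return a random, disposable email alias for one business relationship."""
        return f"{secrets.token_urlsafe(16)}@{domain}"

    aliases = {merchant: make_alias() for merchant in ["acme-store", "electric-co"]}
    print(aliases)  # one unguessable address per merchant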


Who said anything about paying more? This could all be changed with legislation at effectively zero cost to consumers. That seems like the only real option as I can’t directly stop my electric company etc from selling my information.


Good point, the need for a legislative solution is a lot more clear cut in the case of government-sanctioned monopolies such as electricity.


To be honest, my guess is that the consumers who would pay extra money for privacy, although likely in high income brackets, are also likely to be the consumers who are the biggest pain in the ass to support (if we're not talking enterprise).


This is where it would be fun to have all the data and start creeping the person out. "Have fun next week on your vacation! Don't worry, I will drive by your house a couple of times while no one is there and check on your place. I know you just bought an expensive computer and were also searching for the best home security systems, so I imagine you don't have one yet, but don't worry, I will watch your place even though you never asked me." Or "Hey, I hear you might have erectile dysfunction, you okay bro?" Oh, you don't care if the big companies know everything about you, but from your own friends it's a secret? What if your friend works for the big company, does he get to access all your information? Obviously that seems wrong. Maybe that would get the point across.


Yes, it is very inappropriate, but the example is more than effective.


I agree it's very inappropriate and something I would never attempt. Facebook has been doing a very good job of predicting what ads are relevant to me, to the point where I am somewhat convinced Facebook can hear my conversations. I don't have the app but access Facebook through my web browser, so I'm not sure that's even possible, but it creeps me out to think what these big companies know about us, all the while many of the people I know are seemingly clueless or careless about the situation.


It’s simple, privacy is strong encryption or no information. Can’t sell nothing, strongly encrypted data is hard to break so also not worth much.

Any argument against strong encryption is an argument looking to preserve wealth or power by using someone else’s data.

Educating the public about data abuse and how they can protect themselves should also be mandatory education in the world we live in today.

Anything less than this can be abused and is likely a bullshit solution.


> I have nothing to hide

Do they live in glass houses?


That argument has sadly never worked. It's too easy to dismiss as an exaggerated analogy. I also think it's wrong to assume everyone outside tech is a clueless sheep. People are often well aware of the implications but think it's "worth" it. They simply don't care and draw the line somewhere between giving their data to Big Tech and living in a glass house.


So much this. The common individual doesn't understand the power that data gives to marketing and propaganda machines. Moreover, the illusion of services being "free" gives these platforms an insane competitive advantage.


I don't like this elitist view inside tech that people are just dumb and unaware. No matter how hard I try to explain the implications to people they easily brush it off. People just don't care.


There is nothing elitist about it. I am also not knowledgeable about medicine or law, and I ask friends or family for advice. However, trying to convince people that "free" is not free is a very hard argument to have. I have used this video in the past to help my case, even if it caricatures the effects:

https://youtu.be/5pFX2P7JLwA


The article mentions the key problems:

- once private information is leaked, it can never be undone
- being treated differently by some party because that party had knowledge of private information about you is difficult to prove

The ultimate solution is to legally require that important services cannot depend on private information.

An example would be: define a set of properties health insurance cost is allowed to depend on, and then have health insurance providers publish their formula for the premium (that can only depend on that set of properties).
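
A minimal sketch of what such a published formula could look like, assuming a hypothetical approved property set; the properties and coefficients are invented, not any real regulatory scheme:

    # Hypothetical published premium formula over an approved property set.
    ALLOWED_PROPERTIES = {"age", "region", "smoker"}

    def premium(profile: dict) -> float:
        """Compute a monthly premium from approved properties only."""
        assert set(profile) <= ALLOWED_PROPERTIES, "disallowed rating factor"
        base = 120.0
        base += 2.5 * max(profile["age"] - 25, 0)                # age loading
        base *= {"urban": 1.10, "rural": 1.00}[profile["region"]]
        base *= 1.30 if profile["smoker"] else 1.00
        return round(base, 2)

    print(premium({"age": 40, "region": "urban", "smoker": False}))  # 173.25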


> define a set of properties health insurance cost is allowed to depend on, and then have health insurance providers publish their formula for the premium (that can only depend on that set of properties).

That's exactly how it works in Italy

Parameters used to establish premiums must be approved, and you can't define them in a way that targets a small group or a single person.


This is likely to increase insurance costs on average. So it’s a tradeoff.

It also has a big issue of adverse selection. Say everyone with some known "private bad health markers" signs up for insurance, but the insurer can't take that into account. That's not going to be great. So we need to be careful to allow enough to be taken into account.


This might sound cynical, but we already lost this war in the last century[1]. It's a salami tactic applied against us slice by slice, and the pandemic (and every protest, whether in the Paris banlieues, HK or Seattle) is another successful milestone won by the state to integrate any outliers and test new methods. Since fighting back by going to a protest is viewed as radicalism, there are increasingly few (legal) options out of this mess. When you go and join a protest, make sure to cover yourself[2], and look out for the cow[3].

[1] Jacques Ellul - The Technological Society ("La Technique") https://archive.org/details/JacquesEllulTheTechnologicalSoci...

[2] https://crimethinc.com/2014/08/14/staying-safe-in-the-street...

[3] https://www.fotomuseum.ch/en/explore/still-searching/article...


I don't understand how this meme of "Corporations sell your data" came about.

Google has perhaps the world's largest trove of information about people, but do you see it ever leave their iron grip? No. Same with Facebook. In fact, with Google, you can pay them money (GSuite) to get enhanced privacy, and a legal relationship which ensures your data is protected.

Add laws that prevent your information from getting leaked / hacked. If these companies faced organization-ending penalties when data gets leaked, they'd finally have proper incentives to secure your data adequately. Until then all of these laws seem like a waste of time.

On the other hand, in order to prevent intentional transfer, I believe that if we ban the transfer of raw information (not aggregated) between organizations this point is somewhat moot -- of course, we would have to have something like antitrust law enforced and prevent Google from acquiring an insurance company or such.
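
A tiny illustration of the raw-versus-aggregated distinction that ban would hinge on; the sample events are invented:

    # Raw per-user rows vs. an aggregate with user identifiers stripped.
    from collections import Counter

    raw_events = [  # this is what could not be transferred under the proposal
        {"user": "alice", "page": "/shoes"},
        {"user": "alice", "page": "/checkout"},
        {"user": "bob",   "page": "/shoes"},
    ]

    aggregated = Counter(e["page"] for e in raw_events)  # page-level totals only
    print(dict(aggregated))   # {'/shoes': 2, '/checkout': 1}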

I trust Google with my data, as a large leak would likely hurt their reputation significantly, and result in them losing their edge (For a little while at least). I do not trust the US Federal Government to secure my data, or use it in a good way. Treat the US Federal Government like any of these organizations -- if the FBI wants to get your logs, they should be held to the same legal framework as Google "selling" data to any other corporation.


I think this is a fair point. ish.

It is true that google/fb don't sell your data so much as the ad targeting derived from it. Though they do buy your data from other sources, which incentivizes other firms to collect and sell your data. While you may (rightfully) trust Google's security practices, I wonder if you would trust the practices of all the other parties who collect or buy your data.

I think it’s also worth looking at the framing of debates on privacy: large firms like google want it to mean that your data is kept as a secret between you and google and you can control who (else) may access it; the other side of the debate want it to mean that you need not live in a world that seems to know so much about things you have done but not made publicly available.

I think I disagree with you somewhat about the distinction between google collecting data and a state like the USA collecting data. The civil service must follow strict rules about such data collection (which private companies need not and indeed do not follow), and it is full of scrupulous public servants who want to act responsibly and morally, and who have laws to protect their ability to do so. The argument that state data collection is worse because of its potential use in enabling tyranny is reasonable. But I think one must concede that firms such as google are able to inflict a lot of tyranny upon people (indeed they currently do this, though only to a few people, and seemingly mostly for arbitrary or overlooked reasons).


In another light, Google and Facebook have a duopoly over customer relationships. Imagine for a moment you are a bodega owner. You know the names and faces of frequent customers, their usual order, and have personal rapport. Now imagine you are a digital publisher. Readers come to your site from Google and Facebook, and often read your content directly on these platforms. You know very little about your readers and pay Facebook and Google to target ads to your readers; you leave it up to G/F to decide who / how to target.


Many privacy regulations only benefit the big five tech companies (LinkedIn charges $10 for one InMail). Any new regulation should exclusively address the big players first before pushing more regulation onto startups and small businesses, which are already struggling. If you don't do that, we all end up with 4 or 5 companies. I would say oligopolies are a bigger problem than privacy currently. We need more diversity in tech systems. Year over year, diversity is going down if you take into account the important IT systems people use.


>Many privacy regulations Only benefit the big five tech companies (Linkedin charges $10 for one inmail)

I don't understand this part, are you implying LinkedIn's InMail price is due to regulation?


Yes. Think of ZoomInfo's new IPO. They are big and can work around the regulations (e.g., CAN-SPAM) on selling user information. Also, under the CCPA, if you process 50,000 user records in a year (only about 137 records per day) you come under its purview. But big companies like Facebook (https://www.cpomagazine.com/data-protection/facebook-refuses...) will work around them.


We have diversity https://tosdr.org/ (search "fingerprinting")

Without regulations there is no advantage.


I mean to say - if you take market share into account we have less diversity in important systems people use.


And I mean to say - we have diversity in applications that does not translate into market share. We can use DDG but Google Search is not required to change.

I understand your concern and don't know how to fix it. Click-to-consent does not distinguish terms. Would it help if there were a requirement to display a rating like "PG-13" or Energy Star?


To me it looks like cameras everywhere have been a good thing. Looking at the current police brutality videos coming out has rightfully caused people to want to stop that happening. I feel a lot safer in areas with security cameras. In the building I live in, some items were stolen and cameras helped find the thief; a frivolous lawsuit was also shut down because we had video of the guy falling over deliberately.


I'd feel a lot safer if the feeds from all those security cameras could not be abused -- like inaccurate face recognition shared by Google's doorbells with local police departments, like state driver's license photo databases sold to any buyer willing to pay and then used for any purpose, like the terribly inaccurate face recognition rates for dark faces, and worst of all, the inability of citizens to defend ourselves against non-standardized, uncalibrated technology sold by for-profit tech providers to careless law enforcement agencies that abuse it.

No. So long as law enforcement can't be bothered to check their 'facts' and unscrupulous makers of camera-based tech remain unaccountable for the deficiencies inherent in their products, I will remain convinced that all use of cameras and face recognition is knowingly abusive and irresponsible. As such, they should be outlawed until the people have MUCH more oversight of our overseers and their enablers.


The examples you give are of someone getting caught doing something, not from having been prevented doing so. Is the right conclusion then to feel safer?


Of course. The people caught and punished for those crimes are not likely to repeat it at that location.


And how well-supported is your belief that they would otherwise?


Do you also think innocent people being harassed by police because some AI-driven surveillance system says they're a criminal is a good thing?


It's not a good thing, but the solution is to improve/fix the AI-driven surveillance system, not eliminate it.


That's not the solution; that's how you entrench something that is broken (similar technocratic positions led to DNA testing no longer being questioned in court). The actual solution is to ban it completely.


Not perfect = broken? Well, I disagree.


I agree it's not perfect if you're white. It's utterly broken if you're a PoC.


Big brother is a good thing?


I assume people think the answer to private corporations “framing the debate” is for government to pass laws. The same government that is trying to enforce backdoors to encryption and is already using facial recognition to harass people who “fit the description”.


Exactly. The worst privacy transgressions historically have always come from governments. Corporations aren’t the ones we should be afraid of.

You want the people that brought you the Patriot Act in charge of privacy?


That gov't is also buying the data. Snowden's revelations were swept under the rug. The so-called Patriot Act is up for renewal. The gov't is going to take the position that best preserves its own interests. This is the pattern some of us keep seeing; the rest simply can't be bothered.


to repeat my mantra, it's not either-or, it's all of the above. corporations, governments, and institutions of all kinds are especially designed to consolidate power.

but society is better when power is dispersed widely, so that it's difficult (ideally impossible) to infringe individual liberties and civil rights without recourse. we the polity need to continually pit institutions against each other to maintain our freedoms. there's no "done, now let's move on" in that.


The difference is that any power you give the government is automatically amplified. The government has the power of the state, to wield power.

Private corporations have only one goal - to make money. Government is controlled by ideologues. Also because of the nature of both the Senate with two senators per state and the electoral college, the less populous states have far more voting power than their population demands. Their belief system and worldview is completely different than mine - yes I live in one of those states.

It’s not just about the difference on policy issues. It’s about fundamental human rights issues like police power and treatment of minorities, and how many of them believe fundamentally that none straight/non Christian people are an abomination and laws should be passed that discriminate against them.


> "The difference is that any power you give the government is automatically amplified."

that's not immediately obvious. power is divided in government, and ultimately rests collectively in the people. you'd expect the power expressed is exactly the accumulation of the tiny amounts each of us delegate to government. it's not perfect, but all the little divisions and mechanisms of check and balance are there specifically to prevent undue amplification.

certainly the government has coercive power and we must be vigilant, but make no mistake, corporations are power structures that collaborate with and coerce the government into offense against the less powerful (minorities, women, etc.). who but the moneyed owners do you think try to distort and amplify their voice through government? it's not the average person that citizens united benefits.

the indirection of the corporate form is designed specifically to make it difficult to parse out their influence (and mistakenly place trust in them). they're designed specifically to insulate owners, to the benefit of no one else (governments be damned where they can get away with it).

that there are assholes everywhere, including every corporation, who would discriminate against trivial shit is exactly why you want dispersed power across any structure or institution.


> that's not immediately obvious. power is divided in government, and ultimately rests collectively in the people. you'd expect the power expressed is exactly the accumulation of the tiny amounts each of us delegate to government. it's not perfect, but all the little divisions and mechanisms of check and balance are there specifically to prevent undue amplification.

Two issues:

Power doesn’t rest in the hands of “the people”. It rests in the hands of the Electoral College - which doesn’t represent the people. As evidenced by the different outcome of the popular vote and the electoral vote. It rests in the hands of the Senate which also doesn’t represent “the people” it represents “the states” regardless of the population. The Senate also puts unelected judges on the bench with lifetime appointments and heads of government committees like the FTC, FDA, etc. that write regulations and are also unelected.

The President - that doesn’t represent “the people” because of the electoral college also has undue power over the Justice Department, FBI and CIA - none of whom are elected.

Even if we did have a government that represented “the majority”, that hasn’t worked out well if you aren’t part of the majority.

Coercive power allowed Jim Crow, making homosexual sex (sodomy) a criminal offense and making interracial marriage (miscegenation) a criminal offense.


yes, those are valid criticisms of the distortions wrought on our representative democracy by elitism. people with money and power wanted that layer of indirection (at least since the 1700s) because they feared that the people would actually get what they want, which could include adverse actions against the elite. it was an unfortunate but unavoidable compromise at the time. that, and true direct democracy was practically impossible until the past few decades for a polity the size of a country.

to be clear, i'm not arguing that governments are bastions of fairness and enlightenment, just that they're not much different from other power structures in that any institution (and especially the confluence of those institutions) is a threat to individual liberties and civil rights. however, governments are the primary mechanism of protecting and extending rights and liberties.

the cause of discriminatory injustices like jim crow is not simply the representative form, but rooted in the people themselves--the collective us--as expressed by our differential power and influence. the wealthy simply have outsized influence over all policy, for no discernibly valid reason. why should we ever give people who are particularly greedy the power to largely control social policy too?


I would much rather trust people who are motivated by greed than ideology. People who are motivated by greed don't take rights away from people because of their religion, race, or sexual preference, as do people who are motivated by concerns about an invisible being raining hellfire and damnation on the country if they allow "race mixing" and "sodomy".

I have been in the tech industry for 25 years and never once did I worry that I wouldn't get a job I was qualified for because of the color of my skin. I've interviewed successfully at everything from small companies with fewer than 100 people, as "adult supervision" (my current company), to one of the "FAANGs" as a customer-facing "cloud consultant" (the company I start at in less than a week).

The tech industry has enabled everyone to have a camera in their pocket to record video and a platform to spread the video to keep police accountable. The government consistently tries to pass laws to make that illegal. It has allowed people to organize against governments in the US via encrypted channels. The government wants a backdoor. We already saw what the government does when it can intercept communications - look no further than what the FBI did during the civil rights movement.

Right now, the President is trying to “shut down Twitter”.

But still in 2020 when my son walks down the street in our neighborhood in the burbs, he’s looked at suspiciously by the police. So who should I trust more the government or the tech companies?


> "I would much rather trust people who are motivated by greed than ideology."

it's ironic that this statement comes off so ideological. why so adamantly either-or?

i guess it's the old upton sinclair saw about it being difficult to get a man to understand something when his (future) salary depends upon his not understanding it?

no one is saying have a lovefest with the government, that they're your good friends. understand and expect more, yes, blindly trust, no. but the same goes for corporations. tech company good guys vs. evil oppressive government is not only simplistic, but simply wrong.

it's not like racist political donors, ardent law-and-order police supporters, and corporate managers and owners are wholly separate groups of people. moreover, greed isn't some singular and exclusive vice of otherwise saintly corporations controlled by completely egalitarian and nondiscriminatory managers and owners. corporations not only condone discrimination and violence, they're inextricably complicit through direct and indirect influence.

people who believe and do shitty things are spread through all institutions, not just some of them. and the way they effect those beliefs is unlikely to look the same in each institution, especially when they're actively trying to hide their actions.


Racist corporations don't have "coercive" power - including local police with military-grade weapons to shoot unarmed people in the back and choke them for nine minutes.

The government already tried to file charges against an activist for “inciting riots” by speaking out in public. Yet and still the President can say the same thing on a public platform and be protected because corporations fear the government.

There was a story just recently where a Facebook user posted Trump’s words verbatim and the poster was banned for inciting riots.

We also have the case of Tim Cook basically kissing the President’s golden ring and standing beside him for photo op while the President was lying about what Apple was actually doing with regards to manufacturing in the US.


i think the difference is that you see the government as a coherent body acting in unison under the direction of the president, and i see a bunch of thugs using both the government and corporations (and other institutions) as tools to oppress in many different and loosely coordinated ways.

i actually don't think trump is an overt racist per se, only a highly self-centered opportunist who advances the white supremacist agenda because he couldn't care less about anyone but himself. an equal opportunity disregard, if you will. not a defense though, he's more suited to prison than office.


Its not just the President and it’s not just Republicans. It’s also Democrats like Clinton who got “tough on crime” and instituted policies that disproportionately affected minorities and naive Black politicians[1] who were dumb enough to think that the government was the answer to crime that was hurting their communities and that police would be used for something more than “border patrol” to keep minorities out of places where “they didn’t belong”.

[1] For context, before I get downvoted to oblivion and flagged for being a “racist”, please read my previous comments. I am Black.


yes, democrats have been complicit, sometimes overtly, sometimes inadvertently. the ideology around policing and use of force transcends party, institutions, and even race itself. policing needs to be completely reformed and its funding largely funneled to community building and support rather than use of force against minorities.

there was a great story recently on latino usa[0] about josé tomás canales, a texas representative from 100 years ago who fought against the racist impunity of the texas rangers on the texas border. the lawyers for the rangers, knowing that the national media was trained on them in a time with little national media, magnified the threat and danger of the border and mexicans to turn the country toward support of the racist use force by the rangers. a hundred years later, we still curry in that bullshit.

[0] https://www.npr.org/2020/06/16/878316528/the-lone-legislator


Your argument carries an assumption that all possible government action is hopelessly broken. I'm generally libertarian myself, but this viewpoint is ultimately a coping mechanism for a societal death spiral. Why bother working for results, when the results are always bad?

But even accepting the assumption that the government has become a malevolent attacker, then there is still an argument for goading it into hampering data collection - every bit of data collected by private companies is also available to the government to abuse and oppress us! Government power and corporate power are not in opposition, but rather two sides of the same coin of disempowering individuals.


It’s not just broken - it’s discriminatory.


Letting the enemy set the terms of and frame the debate so we're starting on the low ground seems to be par for the course for folks on the left end of the spectrum. Pick your issue: climate change, policing, war on terror, lgbt rights, socialized medicine, on and on. You start to wonder if "our side" is throwing the game.


Agreed, mostly. Throw the game? I'm not sure they even know where the game is, and what's being played. Too often they're at the wrong venue suited up to play checkers. That's no way to win a chess match.

To your point, so often this happens that there's only two possibilities:

- incompetence / negligence

- it’s intentional

Either way, we lose.


Many people are proposing alternative solutions to this problem in this thread so here is my attempt:

I see the driving force behind the data collection as the desire to make more money selling ads, which is done by constantly trying to come up with new kinds of ads to sell and by trying to sell ads as better targeted (or as offering better metrics on whether they worked or not) and therefore worth more money. I propose (1) a way to make increased targeting less valuable and (2) a way to make this data collection offer something slightly closer to the privacy people had before the internet:

1. An entropy-based tax on selling advertisements. The tax should be proportional to minus the logarithm of the proportion of the ad-seller’s audience whom the ad is expected to reach. (Plus some requirement of auditing these numbers or updating them if they turn out to be wrong.) This would mean that, e.g., newspapers would not have a high tax because their ads go out to all their readers (but if they sold ad slots which were region-specific then the tax would be higher), but some firm uploading a (small) list of emails they collected to Google or FB would have to pay a higher tax, as Google/FB have a lot of users not on that list. (A rough numerical sketch of both points follows below.)

2. Behavioural data may not be used or sold when it is more than 30 days old. It may be used (in a non-ad-targeting capacity) after this time when it is part of the official records of the company that has it (so, e.g., your bank can keep your transaction history and let you query it). The same goes for inferences or other derivatives of that behavioural data. This would hopefully add some forgetfulness back into the system.
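A minimal sketch of what these two rules might look like, with a hypothetical tax rate per "bit" of targeting and made-up numbers (nothing below comes from an existing law or API):

    import math
    from datetime import datetime, timedelta, timezone

    def entropy_ad_tax(ad_price, audience_reached, total_audience, rate_per_bit=0.05):
        # Point 1: tax proportional to minus the log of the fraction of the
        # seller's audience the ad is expected to reach. log2(total/reached)
        # is that same quantity expressed as "bits of targeting"; an ad that
        # reaches the whole audience carries 0 bits and pays no tax.
        bits_of_targeting = math.log2(total_audience / audience_reached)
        return ad_price * rate_per_bit * bits_of_targeting

    def usable_for_targeting(recorded_at, now=None):
        # Point 2: behavioural data older than 30 days may no longer be
        # used or sold for ad targeting.
        now = now or datetime.now(timezone.utc)
        return now - recorded_at <= timedelta(days=30)

    # Untargeted newspaper ad reaching the whole readership: no tax.
    print(entropy_ad_tax(1000, 500_000, 500_000))        # 0.0
    # A 10,000-address custom audience on a 2-billion-user platform: ~880 in tax.
    print(entropy_ad_tax(1000, 10_000, 2_000_000_000))   # ~880
    # A click recorded 45 days ago drops out of the targeting pool.
    print(usable_for_targeting(datetime.now(timezone.utc) - timedelta(days=45)))  # False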


When Mark Zuckerberg calls for stronger regulations around privacy, what he’s doing is calling for the government to solidify his monopoly status. By imposing very high compliance costs on younger, more agile competitors, he ensures that only Facebook and a select group of deep-pocketed competitors can compete. And that ensures his dominance for even longer.

When a billionaire openly calls for regulation, you should be extremely suspicious. Regulatory capture is often the intent and goal.


Do we need more young agile privacy invading competitors?

Facebook is in decline; add regulations and let it slowly die.


My (admittedly) conspiracy theory fear is that a regulated facebook is a facebook tightly embedded in government ("It's ok to require a facebook account to sign up for xyz public service, it's regulated by the government!").


Facebook the company is not in decline. It’s still doing great. Allowing it to buy WhatsApp and Instagram seems to have been a terrible idea.


The easy answer? Make some regulations depend on scale. We already do for much of legislation.


That just means a bunch of fly-by-night startups that have even less controls around data protection than Facebook and Google would proliferate.


i'd suggest making punishments scale with size (and as a result, enforcement), rather than making regulations contingent on scale. it's conceptually and regulatorily simpler, and more fair.


While that may be true, that does not mean that the answer is no/weaker regulations.


The answer is scalable punishment. Punish companies based on the number of users whose privacy they end up violating, instead of with a big constant fine. If Facebook violates the privacy of 2 billion users, it should be punished separately for each of them.
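A back-of-the-envelope illustration of how a per-user penalty scales compared with a flat fine (the dollar figure is hypothetical):

    def total_fine(users_affected, penalty_per_user):
        # A flat fine is a fixed cost of doing business; a per-user penalty
        # grows with the size of the violation.
        return users_affected * penalty_per_user

    print(total_fine(2_000_000_000, 10))  # 20,000,000,000 -- at a nominal $10 per user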


I have offered to loan my copy of The Age of Surveillance Capitalism by Shoshana Zuboff to friends and family, and no one has taken me up on it. The most I have been able to do is to get a few of them to use Firefox containers.

I consider this a major problem that needs to be fixed, like the division in US society between left and right, income inequality, and corporate control of both political parties.

Shoshana Zuboff does a brilliant job of documenting the bad effects of collecting all digital information in our lives and the use of that data in ways that are against our interests.

I use Google and Facebook, but only to buy things from them, like Oculus Quest, Play books, music, and movies.

I don’t mind paying for services, and I don’t mind them using data on what I have directly purchased from them. I will fight all other forms of data collection.


I don’t think I understand your reasoning. Your friends and family enjoy free Google search, Google Maps, Facebook, etc. and are ambivalent about privacy concerns. You, personally, don’t mind paying for services or avoiding them entirely. I assume that by “fixed” you mean the government should pass a law that reflects your position on the matter, even though by your own account it is the minority position.


Yes, you understood me correctly: I would like my country to implement European Union type privacy laws.

I am not in a minority position in my family and friends circle; rather, I am the only one willing to put effort into it. Everyone says that they would like privacy protections, but protecting themselves requires more technical effort than they are willing to put in. If we could poll the population of the EU, I would expect the same: they want privacy but don't want to personally jump through tech hoops to get it.


I also recommend Dragnet Nation.

https://www.amazon.com/Dragnet-Nation-Security-Relentless-Su...

I read Chaos Monkeys around the same time. It's not a deep dive on privacy but it gives you a great sense of FB's priorities, culture, and so on.

https://www.amazon.com/Chaos-Monkeys-Obscene-Fortune-Failure...


Chaos Monkeys is a highly skewed take from someone who barely worked at FB. Its primary objective is not to inform with facts.


True. Nonetheless, the section about tracking people across devices is factual.


To be fair, Zuboff's writing is painfully dense academic Theory. I'm very interested in the subject and the book, but unable to read a full chapter in bed without falling asleep.


I fully agree with you. I did finish the book, but it took forever. Some of my friends did not finish the book, but instead read shorter versions by the author, for example: https://www.nytimes.com/2020/01/24/opinion/sunday/surveillan...


Canada banned masks at protests during the Harper years. When I complained, everyone said that “you should have to protest publicly, take a stand,” or, “there is nothing to protest.” It was very frustrating.


I think the real fight should be limiting the surveillance abilities of governments vs. the surveillance abilities of corps. Cell phones are already the ultimate surveillance tool, yet nobody in the media talks about restricting them the way they constantly talk about restricting large tech companies. I think that's because big tech is a competitor that is eating big media's lunch.

If google knows what you do, they will sell you more ads. If governments know what you do, they will throw you in prison for many bigoted reasons.


This is simply a problem that cannot be addressed by law because law can be broken.

Strong encryption can be broken but it is much harder to break than law or policy.

Want privacy? Assume everything you post online is public unless it is encrypted. If you don’t want that information public, don’t post it or use said service.
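A minimal illustration of that stance, assuming the third-party `cryptography` package is installed (the message and key handling are just placeholders): encrypt locally, so only ciphertext ever touches the service.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # keep this secret; share it out-of-band
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"meet at 6, usual place")
    # Only `ciphertext` is ever posted to the service; without the key,
    # the platform (or anyone scraping it) sees opaque bytes.
    print(ciphertext)
    print(cipher.decrypt(ciphertext))  # b'meet at 6, usual place'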


I don't really understand how forbidding Facebook and Google from selling and buying personal information will prevent any other bad actor from collecting and using it.

Even the naked-man example used in this article doesn’t make sense in this context.

The solution is not legal, but rather technological.


All of the technological solutions have failed (it is easier to destroy privacy than to protect it), so the only remaining solutions are legal.

Eliminating the legitimate market for personal information would help a lot.

Sure, there’d still be black hats spying on people, but that’s very different than having most of the biggest companies on earth doing it openly.


While people can and do act illegally, it’s more difficult to base some of the largest companies on earth on a root of completely illegal activity.

Facebook and Google have unrivaled empires of data that hundreds of thousands of employees work every day to perfect and extract conclusions from. Scale matters here, and it’s a lot harder to scale up a completely illegal enterprise.


Whatever constraints on personal data we have for tech companies, can we have at least equally restrictive constraints on the government? The nice thing about Google is that they can’t put me in jail for doing something they don’t like.


It's kind of hypocritical that salon.com closes this article if click 'I decline' when they ask to surveil me.


The war hasn't started yet. We are 10 years away from Tor being mainstream; at that point the war begins.


I don't like it but I think the cat is out of the bag, there's no effective way to roll back the machine.

From that POV the issue is control and communication, who has access to the firehose?


"I want the (free) benefits without any of the downsides", this reads to me. Particularly as it gets into condemning "neoliberalism" in response to being reminded that Google Maps is not compulsory.


I’ve never liked this argument. The Google ecosystem is more or less required these days unless you are a little more tech savvy than the average person, and your job/education doesn’t require it, AND you have the patience to stitch together a bunch of (maybe) trustworthy services.

It’s like saying, “if you don’t like your ISP, then find another one.” Then when they’re all doing the same crap going, “well then don’t use the Internet.”


I don't really buy this. if your school/employer uses google products, then yeah, you're SOL. but otherwise you can get pretty far using apple products and duckduckgo. I guess if you count sending emails to people with gmail accounts as "using the google ecosystem", it's pretty much unavoidable.


Wait till you find out about schools forcing surveillance tech from the big companies on students. You don't have a choice whether you are at home or in school.

During the lockdown, schools decided to force kids to install spyware and malware on their own devices. It's literal hijacking of everything you do on the device, and then they pushed a code of conduct on what you can and cannot do on that device.

In developed countries or rich places, you might be able to afford privacy, but the free services or traps offered by big companies see widespread use in developing countries. The schooling situation there is worse, and schools are profit seekers who will sacrifice privacy to save a few $$$.

Privacy is not something to be bought, it's a right.

To give you one example: Google Analytics and ads. An average person might be able to block them on desktop by simply installing a plugin (one concrete way to do that is sketched at the end of this comment), but on a phone there is no way to install a plugin for your browser. Solutions like Blokada [0] are not well known. Most people don't know how to configure a proxy or change DNS, and they won't install a different browser. And then there are apps, which are way worse than websites. There is a reason why companies push you so hard to use the app. Reddit, Imgur, Twitter, Instagram, Facebook, etc. Why the app?

Also: Windows, Google's captcha, AMP, Search, YouTube, etc. You can't escape them.

[0] https://blokada.org/index.html
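For the desktop case above, a hedged illustration of what that kind of blocking amounts to: entries like these in the hosts file (/etc/hosts on Linux/macOS) point well-known analytics and ad hostnames at a dead address, so the requests never leave the machine. The list is illustrative, not complete, and this is exactly the kind of fiddling most phones won't let you do without root:

    # /etc/hosts -- illustrative entries only; matching is per exact hostname,
    # so real blocklists enumerate many more domains
    0.0.0.0 www.google-analytics.com
    0.0.0.0 ssl.google-analytics.com
    0.0.0.0 pagead2.googlesyndication.com
    0.0.0.0 ad.doubleclick.net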


> there is no way to install a plugin for your browser

Firefox for Android blocks trackers by default and supports installing extensions.


Yes, Kiwi Browser and Yandex do too, despite being Chromium-based. I just mean the majority of browsers that normal users will have. If someone is using Firefox, they are already changing their defaults and are most likely aware of the privacy issues.


> if your school/employer uses google products...you’re SOL

Yes. That is literally what I said. That applies to millions of people.


So many websites require Google scripts to operate, even if you personally use alternatives whenever possible you are stuck in their ecosystem.


> Maps is not compulsory

Great, show me the button on my car that disables its cell modem and location tracking features.

Presumably, it is also legal to remove the license plates, since those are being scanned, located and dumped into commercial databases.

Many parts of the US don’t even have sidewalks or bike accessible roads (and Uber/Lyft track you), so there’s no reliable privacy-preserving alternative.


One of my thoughts: in theory, credit agencies are heavily regulated. That needs to be extended to any business that aggregates data on people as its primary business. And regulators should be able to tell them no. As in: no, you can't collect that information. And no, that individual can't work for you. You need signed authorization to share that data. Each time.


Google Maps may not be compulsory, but people may not be aware of what they're engaging in by using those services.


Nor are they even remotely aware that they are using (and consenting to) Google APIs and analytics baked into their .gov sites.


But Google Location Services is installed on almost every single Android handset. Do you have any clue what this thing does? It inserts itself as the location service provider for the entire device and it absolutely uploads your "anonymized" location information, along with detailed sensor traces, to Google multiple times per hour.

How can you audit or control this data? You can't. How can you audit the software that processes this data? You can't. It's the worst kind of closed source.

That's just mobile. Google and its ad partners track you everywhere you go on the internet.


Why name the problem "surveillance capitalism" instead of just surveillance?

Do fewer problems happen when a government does it? Does it matter if the government is more capitalist, socialist, etc?


The term was mainly introduced by Shoshana Zuboff in her book "The Age of Surveillance Capitalism" [0]. She was describing the way in which organisations harvest personal data to fabricate prediction products that can be traded.

She isn't saying "if you aren't paying for a product then you are the product". She is saying that our clicks are the raw material from which Surveillance Capitalists create a product they can sell. E.g. Facebook use our interactions and meta data to build a rich social graph, and when an advertiser wants a list of people who meet certain criteria, Facebook can then provide them with a list.

This is different to the reasons that governments typically conduct surveillance.

[0] https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capita...


It's not that fewer problems happen, it's that different problems happen. The 'classic' problem of government surveillance is that the governments are able (and too often willing) to resort to force.

A distinct problem of surveillance capitalism is that someone in the business of selling surveillance is incentivized to invent new markets to sell their product to and new reasons for their customers to buy. Furthermore, if they're successful, that gives everyone else an incentive to copy them.

Inside a dystopian totalitarian government, a ministry of surveillance might push to increase its own influence, but it would never tolerate competitors.


Surveillance is a camera on the wall; surveillance capitalism is a privately owned camera on the wall selling information of every passerby.


> selling information of every passerby

Or products based on that information, e.g. a list of people who appear relevant to an ad campaign.


Surveillance capitalism is a superset of regular government surveillance. Most governments have access to the databases the capitalists gathered, even though it would be illegal for the government to create such a database.

“Universal surveillance” is a correct, but less precise, description of the current situation.


It is actually quite easy to make Big Tech implode from within. Declare a moratorium on issuing green cards to all big tech employees for the next 25 years. Sweeten the deal further: expedited green card processing for employees who come forward as whistleblowers.


It's much easier than that, even: make a law that no one can use personal information for commercial reasons any longer.

All that tracking-based web economy nonsense collapses overnight, and you don't need to wait 25 years.


Or Google and Facebook turn off their sites with an explanatory message, starting worldwide riots and revolutions.


If you really think anyone is going to start riots and revolutions because Facebook and Google can't stalk them online, I think you have a skewed perspective.


I said Facebook and Google would turn off their websites - they have zero obligation to keep them running - not that people would start rioting because they want their privacy breached.

I think you don't realize how many people absolutely depend on social media, how many companies are built on income enabled by advertising, etc. Even music events are organized through Facebook around me!

And still, all Facebook and Google need to do is remind the politicians who gets their advertising done.


> zero obligation

Except to those pesky shareholders. Which includes employees. People's wealth is tied up in those stocks.

But regardless... maybe try email or something. The internet existed before Google and Facebook. Plenty of competitors would rise to the top in the extremely unlikely event Google and Facebook shut down.


You don't need to tell me, I am on HN with you. It's the people outside of HN that I am talking about, and they often have no idea that Facebook is in fact not the internet. I am not kidding - I used to be a computer teacher for adults (as in "what is mouse?"). Facebook is trying really hard to make it seem that way. BTW, people today often don't even have email or know how to use it correctly - you can use your phone number for FB/Instagram/WhatsApp, so why bother.

> Except to those pesky shareholders. Which includes employees. Peoples' wealth is tied up in those stocks.

That stock is dead anyway if they forbid ad targeting. And no, the company does not have any such obligation to its shareholders - it only has to make a profit for its shareholders as best it can, which may very well include turning the site off for a few days (the public/government does not need to know how long it will be turned off, or whether it's indefinite).


> That stock is dead anyways if they forbid ad targeting

No, advertising can exist very easily without targeting. It's still advertising, and people still pay for it.

> And no, the company does not have any such obligation to its shareholders

You're ignoring that some of the people with the most stock are those in charge of the company. They'd be destroying their own wealth by "shutting down the site".


Well, it worked out for Google News, so why not for ad targeting?

https://edition.cnn.com/2019/09/25/tech/google-france-copyri...

> No, advertising can exist very easily without targeting. It's still advertising, and people still pay for it.

Have you tried advertising your product on such platforms? It just does not work if you're not a huge brand name with deep pockets.

> You're ignoring that some of the people with the most stock are those in charge of the company. They'd be destroying their own wealth by "shutting down the site".

Actually that's the basis for my argument. That's why they might want to go "on strike" for a few days - if it succeeds, it would massively increase their profits compared to the non-ad-targeting version of the company, while missing only a few days of revenue if it isn't successful.



