I still don’t understand the controversy and pushback around these regulations. The hysteria around GDPR has been silly.
My home country has a Privacy Act; it’s terribly weak and has little enforcement, but the principles in it were drilled into me from the very start of my career. Customer data is really important. It must be kept private and not shared, and we should only hold what we need for whatever we’re doing for the customer. I don’t know why those principles are so hard for people to swallow. Now, in the States, when I hear mobile carriers are selling my location data without my consent, and so on, I get completely irate. It boggles my mind why people think that kind of behavior should be allowed.
> Because Facebook and Google have been guided by the principle: if it's not illegal, it's ethical.
The problem with that philosophy is that if you've normalized any questionable behavior on the basis that it is legal, the natural response will be stronger legislation. The "unstated rules" around privacy and people's personal information have been broken.
Any ire at forthcoming regulation should be directed at the bad actors, like Facebook and Google.
> Any ire at forthcoming regulation should be directed at the bad actors, like Facebook and Google.
Agreed, but the buyers of said information should be scrutinized too. If they are willing to pay money for that information, what are they using it for?
A SaaS company that does nothing fishy with users’ data may still be concerned about the harsh, company-destroying penalties associated with GDPR. Indeed, there have been dozens of discussions on HN about this. It may be FUD, but it’s real.
Other companies are doing genuinely terrible things.
Please don’t assert that the people in the first group are actually in the second.
When it’s a largely opaque foreign regulator deciding where on the scale your fine will land, I’d argue it’s irresponsible to assume anything less than the maximum when doing a cost-benefit analysis.
As I understand it, the law was passed because the representatives of the people of Europe saw a misalignment between "normal" and "good" and decided to enforce the latter.
>I’m still not understanding the controversy and kickback around these regulations. The hysteria around GDPR has been silly.
Because it puts a whole bunch of people out of a job and ruins the user experience for many things. Every news site I visit greets me with a pop-up that won't go away until I interact with it. Apparently similar things are happening with phone apps as well. And while this is happening, I've read about people having their app revenue drop 90% in some cases.
We're not even a full week in yet, so we don't know what the full effects of this are going to be.
>It boggles my mind why people think that kind of behavior should be allowed.
It's not about whether data abuse is okay or not, but rather it's about the consequences of taking such a heavy-handed approach as GDPR did.
> Because it puts a whole bunch of people out of a job
Yup, that's the reason behind the hysteria.
> and ruins the user experience for many things.
Let's put the blame where it belongs. It's not GDPR that is ruining the experience; it's the sites that refuse to take a hint and stop abusing users' data. When you prepare this popup that will "ruin the user experience", you list a bunch of things in it. All of those are things you should strongly consider no longer doing.
> And while this is happening I've read about people having their app revenue drop 90% in some cases.
Honestly, I was strongly hoping this would happen. Good to hear the new law is working.
> It's not about whether data abuse is okay or not, but rather it's about the consequences of taking such a heavy-handed approach as GDPR did.
As others said, the industry had plenty of time to avoid this. For instance, the Cookie Directive is a decade old now, with previous regulations touching this sphere as early as 16 years ago. The industry instead doubled down on user-hostile practices. So now we've got GDPR.
> It's not about whether data abuse is okay or not, but rather it's about the consequences of taking such a heavy-handed approach as GDPR did.
It absolutely starts from "whether data abuse is okay or not". If data abuse is unacceptable, then the question is "what is the law currently doing to address that?" The industry had ample opportunity to self-regulate, but all we've received, breach after breach, are long-delayed, shallow apologies. The running joke is that Zuckerberg and FB have been on a 15-year "apology tour", giving hollow apologies for seemingly grave transgressions that other people warned about [1].
And given how tech unicorns like Uber treat fines [2], a heavy-handed approach seems to be the only way to actually curb the behavior in question.
[1] EFF in 2009 warned that companies could take advantage of the leaky Facebook API, predicting the Cambridge Analytica situation 4 years in advance.
[2] In multiple jurisdictions that do not allow ride sharing, Uber has told drivers that they would pay the fines that drivers incur.
> Because it puts a whole bunch of people out of a job and ruins the user experience for many things.
No, really, it is the advertisers who ruined the experience in the first place. And almost every site tries to make you agree to data collection without giving you another option, or by hiding the other option under several menus. That's why I closed the TC page.
We’re going to end up with some HIPAA like regulations on collecting data and it’s going to be a massive pain in the ass.
The scary part is the big companies will have the money to lobby for regulations to be written in a way that is easy for them to comply with while shutting out smaller companies and startups.
> We’re going to end up with some HIPAA like regulations on collecting data and it’s going to be a massive pain in the ass.
Yes, that's the point: at the moment, vacuuming up every bit of data is essentially free, so companies take a maximalist approach to it, and aren't paying nearly their fair share of the resulting externality costs. Personal data storage needs to become costly enough that companies are forced to decide if it's actually worth it.
I would also add that, just like every other time an industry failed completely at self-regulation and ran wild until public outrage forced the government to step in, this wouldn't be happening if companies had been responsible. But Cambridge Analytica, Experian, and Unroll.me proved that was never a possibility.
Didn't a lot of this privacy stuff start in the EU because some of the politicians didn't like the fact that it's big American companies that are getting all the business, while Europe had basically none of them? You can still see this attitude during Zuckerberg's appearance before the European Parliament when he was asked to name a European competitor to Facebook to show that Facebook is not a monopoly.
How do you think that GDPR is the optimal way to achieve... well, what exactly are you saying the goal is? You're only alluding to it, letting everyone complete your argument with whatever they find most credible.
IMO, it's that a) people in EU care more about privacy and b) the political influence of the data processors in question isn't as strong as in the USA.
b) (with swapped roles) is the reason why the diesel emissions cheating and the FIFA bribery cases were investigated in the USA. They were open secrets, but nobody did anything about them in their countries of origin because of regulatory capture.
You can't really say that people in the EU care more about privacy, because it's the EU that brings this action. In the EU we have very poor participation in EU parliamentary elections and the commission is picked in ways that most people don't seem to understand. The 2014 EU parliamentary election had a voter turnout of 42.54%.
The other thing is that the EU parliament has some internal political shuffling that most people aren't even aware of. I don't know anybody in real life that can name even one of the parties (or groups) in the EU parliament. It's very difficult to say that they're definitely doing what the people want.
>How do you think that GDPR is the optimal way to achieve... Well, what exactly are you saying is the goal? You are only alluding, thus letting everyone complete your argument with whatever they may find most credible.
Simply to make it more difficult for established/outside companies to target the EU market. It's not a conspiracy, so there isn't a step-by-step plan the EU could follow to actually do this. Only some people and groups in the EU seem to be really interested in something like this.
Which could easily happen if the person levying the fine doesn't like you. In some countries the data protection agency has a whopping dozen to two dozen people working at it in total. In other words, if it so happens that you don't get along with them, there's nothing stopping them from being harsher in their enforcement.
You can't ask them for advice either. They'll tell you that they're there for enforcement.
My local government has fewer than two dozen people working in the data protection authority. That is why I'm worried: they are the ones who will be enforcing this whole thing here.
I'm not saying there is no due process, but it is definitely strained in places. In the country I mentioned, according to statements from the Ministry of Justice, the conviction rate is 99%: out of 100 cases that go to trial, only 1 gets a 'not guilty' verdict.
I was reading straight from the document. I never said minimum either.
This is straight from Article 83 of GDPR:
"Infringements of the following provisions shall, in accordance with paragraph 2, be subject to administrative fines up to 10 000 000 EUR, or in the case of an undertaking, up to 2 % of the total worldwide annual turnover of the preceding financial year, whichever is higher"
Moreover, the next fine tier is: "20 000 000 EUR, or in the case of an undertaking, up to 4 % of the total worldwide annual turnover"
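The "whichever is higher" clause is what makes these caps scale with company size. A minimal sketch of the cap calculation (the function name and the example turnover figure are illustrative, not from the regulation):

```python
def gdpr_fine_cap(flat_cap_eur: float, turnover_pct: float,
                  annual_turnover_eur: float) -> float:
    """Maximum administrative fine under GDPR Article 83: the flat cap
    or the given share of worldwide annual turnover, whichever is higher."""
    return max(flat_cap_eur, turnover_pct * annual_turnover_eur)

# Lower tier (EUR 10M or 2% of turnover) for a hypothetical company
# with EUR 2 billion in annual turnover:
print(gdpr_fine_cap(10_000_000, 0.02, 2_000_000_000))  # 40000000.0
```

Note that for any company with turnover under EUR 500M, the 2% share is below the flat cap, so the EUR 10M figure governs at the lower tier.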
This law applies only to data brokers in Vermont. From the article:
Data brokers in Vermont will now have to register as such with the state; they must take standard security measures and notify authorities of security breaches (no, they weren’t before); and using their data for criminal purposes like fraud is now its own actionable offense.
It defines data brokers as businesses in Vermont that buy and sell data on people with whom they have no direct relationship. Seems like a reasonable law - unlike GDPR, they aren’t trying to reach outside of their jurisdiction, they aren’t trying to wipe out entire business models, there are no absurd fines, it’s easy to know if you’re violating it (because you would know that you have set up a data brokerage in Vermont), etc. My guess is there are only a handful of “data brokers” as they define that term within Vermont, so this affects very few people.
Your reading that they define 'data broker' as a business in Vermont is not substantiated by the text of the bill [1].
Some excerpts that define some key terms:
"(2) "Business" means a commercial entity, including a sole proprietorship, partnership, corporation, association, limited liability company, or other group, however organized and whether or not organized to operate at a profit, including a financial institution organized, chartered, or holding a license or authorization certificate under the laws of this State, any other state, the United States, or any other country, or the parent, affiliate, or subsidiary of a financial institution, but does not include the State, a State agency, any political subdivision of the State, or a vendor acting solely on behalf of, and at the direction of, the State.
(3) "Consumer" means an individual residing in this State.
(4)(A) "Data broker" means a business, or unit or units of a business, separately or together, that knowingly collects and sells or licenses to third parties the brokered personal information of a consumer with whom the business does not have a direct relationship."
What you’re not understanding is that this is a state law. They can only regulate the activities of businesses that operate within their own state.
They use the broad definition of a “business” because in the US you can be incorporated in Delaware and actually operate in Vermont. This is actually a common thing - most US corporations are incorporated under the laws of either Delaware or Nevada but operate within their home state. But the state only has the ability to regulate the actual operations of businesses operating within Vermont.
There's lots of ways a Vermont court could apply their laws to a Delaware corporation operating out of Delaware: Renvoi, significant contacts test, "better rule" test, and so on.
This quote only works when you're using other neutral, but unrelated-to-you parties.
Data brokers are scum. They do not produce any value, they do not improve the society, they have no positive effect on anyone. No reasonable person should stand up to protect them from liability.
Agreed. The "didn't stand up for [x]" quote only works when we are passively letting groups be hurt. I would actively help destroy this business model if possible.
That’s a bit extreme. Most of this data gets used for targeted advertising. While that sounds (and sometimes is) shady, it also enables a lot of niche companies that had a much harder time existing before.
For example, what’s the point in showing ads for custom quadcopter parts to an 80 year old grandmother who doesn’t know what a quadcopter is?
Data should be regulated and protected much more than it currently is. But the idea of banning it entirely would have a lot of negative effects. Let’s be smarter than that.
If I need quadcopter parts, I'll search for quadcopter parts. I don't need your ads pushing quadcopter parts on me in the hope that I might be slightly more interested in them than an 80-year-old grandmother is.
I need you to build a good product and display it in a good store.
This really cuts to the core of the issue. Consumers will naturally seek out the things they wish to buy. Nobody needs to be distracted from their activities to be told what to buy.
All advertising should be strictly confined to designated areas specifically intended for consumer searching.
There are a lot of movies I went to see after I saw an ad for them that I never would have searched for on my own (and I can say the same about restaurants and a couple of other products). Consumers aren't going to search for something they don't know exists in the first place.
For ages there used to be dedicated publications for ads -- why is it not possible to push all ads to only be displayed at a dedicated website, journal, newspaper, etc. where the customer(!) (not consumer) can choose to visit if he/she wants to discover something.
All we need is a push towards criminalization of ads in public spaces where they are forced on people not based on their need, but based on their ability to be manipulated into buying shit they don't need.
Yeah, because screw all those free services people can access that are paid for by ad revenue. The usual counter to this is "but I would gladly pay!" The problem isn't you; the problem is people who can't afford to pay or aren't able to for other reasons. You wouldn't have to pay for services such as Google, YouTube, Facebook, Twitter, Reddit, Hacker News, etc. just for yourself; your grandma, your spouse, your kids, etc. would all have to pay for them too.
And suppose that happens. Do you know what the system ends up as? Cable TV. Congratulations, net neutrality was unnecessary after all.
I actually removed my ad blocker recently because I realized most sites I go to either don't have ads or they don't really bother me (hackernews, reddit, facebook, netflix, gmail, stackoverflow, various webcomics, etc...) and they pay for all that free content I consume. My life generally isn't better for the ads themselves but I think it is better for the services that these ad supported sites provide.
I've found many useful products that I enjoy using, such as my ReMarkable tablet, through targeted ads. I'd have never known it existed despite looking/waiting for exactly this type of product for years.
Perhaps this is because I'm not an American, but my life doesn't revolve around "supporting the economy". The economy is supported when I go round living my life, not the other way round.
Exactly. Also: free market works well when the aggregated signals from customers are honest. Then it allocates resources efficiently. Advertising is all about disrupting this mechanism, injecting and generating fake signals that skew the market.
Though sometimes you want to get the word out about a specific thing. But this is where getting people's social circles hearing about something and trying it is important.
I definitely see a few needs for advertisement. But mostly down sides IMO.
I agree!
A lot of what's going on with data is the low-cost ease of intimate access to targeted people. Thus, we get constantly pestered and then it's our job to figure out how to get peace from stuff that we didn't ask for.
I've had to turn off alerts and miss notifications from friends because marketers have run amok. And we go back to the cost: it's been so doggone cheap that so many marketers have gotten to where they just don't care about being annoying.
There should be a higher cost to access me when I'm sitting in my barber's chair. Actually, DON'T contact me when I'm in my barber's chair. Go back to tv commercials, billboards, and snail mail. Put in some effort.
The traditional way to advertise niche products is to advertise in places where the relevant audience is. Maybe put ads on RC-related websites or technology-related TV channels, or buy sponsor spots on technology podcasts or YouTube-channels. You don't _need_ huge amounts of information about every Internet-using individual in order to do targeted advertising.
Podcasts, which (so far) have mostly been free from extreme privacy-violating ad networks, have been doing this for ages; I regularly hear relevant ads for Ting, Linux Academy and Digital Ocean when listening to Linux-podcasts, or for bone conducting headphones, fancy mesh network routers and domain name registrars when listening to general tech podcasts. I also hear ads for PC hardware and VPNs when watching tech YouTube-videos. The ads YouTube itself chooses for me, on the other hand, are generally completely unnecessary and poorly targeted ads for movies I don't want to see or cars or soda.
In my opinion, the advertiser should not know that the current visitor is an 80 year old grandmother in the first place.
Just as with TV, radio, and magazines before them, advertisers can select which advertisements to place in which sections of websites (or, more broadly, decide what types of sections to appear in), and individual ad requests can and should be de-personalized as a privacy measure. Advertisers can still access or infer demographic indicators for places on the web without individual tracking, and measure click-through with A/B testing of different ads in different contexts.
The contents & general audience of a page should determine the ad, not the individual visitor.
I am personally happy to see that this happened in Vermont and not on the federal level. We now have the opportunity to see the impact this has on the citizens of Vermont and whether or not the regulations are effective. Nothing beats experimentation and the scientific method.
Data protection laws existed in Europe long before GDPR. Every country had them. And companies would simply disregard them until they became GDPR with the promise of fines and enforcement.
If insurance companies, employers, and creditors use that information, it becomes subject to the FCRA, as well as to state, local, and federal laws about lending, insurance underwriting, and employment. This information merely existing (for advertising purposes) doesn't suddenly make it legal to use in specifically illegal ways.
For example, it's standard practice to use a person's credit history and credit score during insurance underwriting - data shows people who are responsible with credit file fewer claims, so they get lower premiums. However, say you live in Massachusetts. In Massachusetts it's illegal for insurance companies to take your credit history and credit score into account when determining policy premiums. So insurance companies are not allowed to access your credit if you live in Massachusetts, even though that information is widely available and used in other states.
I'm all for spreading information, but I can't help but cringe at the frowny, holier-than-thou attitude TechCrunch displays here, like they just discovered this stuff is happening. Please. Spare me the tears. You knew about this all along; it just wasn't fashionable to write about before.
For more details, see the Vermont Attorney General press release [1] and the H.764 bill full text [2].
Cynical prediction: Vermont has less than a million residents. It may be both easier (as in less effort) and less expensive, if you're a data broker, to just cease trafficking in the data of anyone tangentially associated [3] with the state of Vermont.
There are some number of companies making the same decision wrt GDPR and the EU, so it wouldn't surprise me. That being said, there will come a tipping point where it _does_ make sense and regulation like this is part of how it gets there.
> That being said, there will come a tipping point where it _does_ make sense and regulation like this is part of how it gets there.
That's assuming the jurisdictions have the same rules as one another. If they have different rules, or conflicting rules, or rules that conflict with some other jurisdiction's data retention requirements, it becomes a huge mess.
And the probability of the US adopting the GDPR verbatim is practically nil.
> It may be both easier (as in less effort) and less expensive, if you're a data broker, to just cease trafficking in the data of anyone tangentially associated [3] with the state of Vermont.
Why would that be? AFAICT, this law merely sets standards for data brokers operating within the state of Vermont. It doesn’t try to dictate what anyone outside the state can do with data from people in Vermont, like GDPR does. Given Vermont’s size, my guess is that there are very few companies in Vermont that this law actually applies to.
You're wrong. Having read the bill, specifically page 57 of the Official 'As Passed by Both House and Senate' PDF and its environs, it's clear that they intend to regulate the brokering of information about 'Consumers', where that word is defined to mean "an individual residing in this State."
Meanwhile, 'data broker' is defined as a 'business' that engages in the brokering of data (obviously paraphrasing), where 'business' isn't bound by being incorporated in Vermont, but applies worldwide.
I am actually not wrong. You have to understand the context. State laws only apply to businesses operating within the state. States in the US cannot regulate the activities of entities that are not within their jurisdiction.
If a state in the US starts saying it has the power to regulate businesses in other states and countries, that would be news.
Also, please read the article we are commenting on:
Data brokers in Vermont will now have to register as such with the state; they must take standard security measures and notify authorities of security breaches (no, they weren’t before); and using their data for criminal purposes like fraud is now its own actionable offense.
That is what this law does. Nothing more, nothing less.
Read the quote I pasted from the TechCrunch article and you’ll understand how this law works. And no, Vermont has no authority to impose regulations on businesses in California for example, nor were they trying to with this law.
This crackdown on data brokering bothers me. I'm writing from the medical data industry.
Data isn't necessarily about unwanted, creepy stuff. In other industries, customer data can be used to recommend the right products, or to make the kinds of movies people want to watch. That stuff enhances people's lives, in individually small but additive ways.
In my industry, data really can save lives. For instance, the prime minister of my country apparently wants to apply AI to people's GP records to spot people who have undiagnosed cancer. This seems pretty viable to me: even in the simplest case you could quickly spot families with high cancer risk, offer the young women in them gene sequencing, and then offer those carrying very-high-risk cancer genes preventative treatment. You'd certainly save lives this way.
There are a lot of very scary ethical questions in my industry. Widespread use of genetics to price insurance. Employers requiring employees give access to medical records before confirming employment. And much more.
But I do worry this backlash will make it much harder to do good.
How many businesses have a point of presence in Vermont? I can't imagine many, and if any do, they'd probably just move rather than comply with a law that would harm their business.
This strikes me as largely symbolic and not likely to have much of an impact unless other states follow through with similar consumer protections. However, if Delaware were to pass legislation like this, that would be a lot more meaningful.
Via the article, I looked at Acxiom, and it seems they have an opt-out form. Who knows if they actually obey it, but they probably have your data anyway, so it's worth a shot: https://isapps.acxiom.com/optout/optout.aspx