So what else do you propose we should do? Demand that the companies self-police out of their deep commitment to ethics and communal wellbeing - and then be shocked and outraged when against all odds they don't?
I don't actually accept "the current state of affairs is bad" as a justification for new regulations.
The entire point of e.g. the cookie example is that often regulation is not only annoying but ineffective. You don't get to say "this is bad, self-enforcement doesn't work, therefore we have an obligation to pass a law that also won't work".
I agree that self-policing won't work, but it's entirely fair to say "all of the options I see are unacceptably bad, we need to talk about finding a new path instead of choosing any of these".
> I don't actually accept "the current state of affairs is bad" as a justification for new regulations.
And I don't accept "a previous regulation turned out to be pointless and annoying" as an argument against trying new regulations to fix "the current bad state of affairs." I don't think there's some magical third way to resolve issues with the failure of self-policing that doesn't involve regulation. Such arguments tend to have little positive outcome and usually just serve to discourage any attempts to actually fix problems.
Law, regulation, and enforcement are the solutions to reconciling the mismatch between selfish bad behavior and the public good. However, these regulations will undoubtedly have to be iterated on until a workable and effective solution is found.
> I don't think there's some magical third way to resolve issues with the failure of self-policing that doesn't involve regulation.
I'm still hopeful about finding a magical third way, but I admit I'm short on answers. "Find a tech solution" is the common one, but it feels like a coinflip at best - with smart people putting their talents on both sides of anonymity, there's no real reason to count on an eventual victory for privacy. (And, troublingly, each retrenchment seems to raise the skill requirement for privacy - it's entirely out of reach of most average web users.)
> Law, regulation, and enforcement are the solutions to reconciling the mismatch between selfish bad behavior and the public good. However, these regulations will undoubtedly have to be iterated on until a workable and effective solution is found.
And here's where we differ, I guess. I don't think that the cookies regulation is a one-off case of "pointless and annoying", nor do I think it will be iterated on until an effective solution is found. I think "iteration until success" is entirely absent from the history of tech regulation.
I think that regulators are currently too slow, uninformed, and biased to effectively address any but the most egregious technological issues - unless the right answer is "outlaw any behavior even vaguely resembling this" the task is simply beyond the bodies involved. The number of well-crafted, impactful, unsubverted laws protecting computer privacy and security is approximately zero. I think that this state of affairs can't be substantially changed with outreach or lobbying, but will remain a basically fundamental aspect of first-world governments for at least a decade.
---
So... I guess my answer for now is hopelessness. I agree that the current state of affairs is bad, self-policing is hopeless, and no third solution is forthcoming. But I also think regulation is generally both destructive and useless, which is marginally worse than nothing at all. I guess that too is "discouraging any attempts to actually fix problems", but I'll be damned if I see a way out.
In the spirit of finding new paths, here's an idea (and I'm not at all sure it's a practical solution):
fail2ban, a tool commonly used on servers to automatically block remote attackers such as spammers, can be configured to automatically send an attack report to some e-mail address when it detects an attack.
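As a rough illustration of that reporting behavior, here is a minimal fail2ban jail fragment using the stock `action_mw` action, which bans the attacker and mails a whois-annotated report. The destination address is a placeholder; exact option names can vary between fail2ban versions.

```ini
# /etc/fail2ban/jail.local (sketch; destemail is a placeholder)
[DEFAULT]
destemail = abuse-reports@example.org

[sshd]
enabled = true
# ban the offending IP *and* send an e-mail report with whois details
action  = %(action_mw)s
```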
A browser extension such as PrivacyBadger could do something similar when it detects a tracker. Let the report be sent to a law enforcement agency that is set up to deal with this kind of problem. If this agency collects enough data, it can do some data mining to find, for example, the most aggressive trackers. If that works, maybe such a tool can be used to give law enforcement some actual power to punish these companies.
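The aggregation step the agency would run could be very simple. Below is a minimal sketch, assuming a hypothetical report format of `(user_id, tracker_domain)` pairs; it ranks tracker domains by how many distinct users reported them, a crude proxy for "most aggressive". The function name and data are illustrative, not part of any real tool.

```python
def rank_trackers(reports):
    """Aggregate (user_id, tracker_domain) reports and rank domains by
    how many distinct users observed them."""
    seen = {}  # domain -> set of reporting users
    for user_id, domain in reports:
        seen.setdefault(domain, set()).add(user_id)
    # most widely observed trackers first
    return sorted(seen.items(), key=lambda kv: len(kv[1]), reverse=True)

# hypothetical reports collected from extension users
reports = [
    ("alice", "tracker.example"),
    ("bob",   "tracker.example"),
    ("alice", "pixel.example"),
]
ranking = rank_trackers(reports)  # tracker.example outranks pixel.example
```

Counting distinct users rather than raw hits matters here: one noisy user shouldn't be able to push a domain to the top of the list.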
I'm not sure how practical the "to law enforcement" step is here, for two reasons. First, lots of these trackers are based in places where their data collection and sales are legal - the EU makes decent efforts to pursue foreign actors, but it's a serious limitation nonetheless. Second, it's not clear how often illegal action can be demonstrated. The authors of the DefCon presentation here named only one of the ten extensions they suspected, because the others couldn't be identified beyond reasonable doubt; with so many actors gathering data, pinning down any one of them is hard.
Even so, producing a bank of highly suspicious extensions could be truly valuable. Law enforcement (sometimes) has the resources to pursue these things beyond a DefCon presentation - feeding fake data to specific trackers (honeypotting) might make it possible to turn suspicion into proof. And publicizing high-likelihood suspicions might generate bad press for bad actors and push better actors toward transparency and guarantees.
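The honeypotting idea amounts to canary tokens: plant a unique fake identifier with each suspected tracker, and if that identifier later surfaces in spam or a data dump, it points straight at the leaker. A minimal sketch, with hypothetical names and a placeholder domain:

```python
import secrets

def make_canaries(suspects):
    """Assign each suspected tracker a unique fake e-mail address.
    A leak of any address identifies exactly one tracker."""
    return {s: f"decoy-{secrets.token_hex(4)}@honeypot.example"
            for s in suspects}

def identify_leaker(canaries, leaked_address):
    """Map a leaked canary address back to the tracker it was planted on."""
    for tracker, address in canaries.items():
        if address == leaked_address:
            return tracker
    return None  # address wasn't one of ours

canaries = make_canaries(["trackerA", "trackerB"])
```

The per-tracker uniqueness is what upgrades suspicion to proof: an address that only one tracker ever saw cannot have leaked from anywhere else.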
Sometimes you have to accept that legislation isn't going to fix a problem. Piracy, spamming, and so on aren't going to go away, even if we catch people periodically.
You can certainly get the largest firms to comply, but the internet is full of people trying to trick you with fraud and viruses. You're not going to make those people care.
> And sometimes you have to accept that there will be zero change until legislation hits the big companies with an even bigger stick.
Were the problem parties in this article the "big companies"? Nope. They're already flouting existing laws. Are new laws going to stop them when the existing ones don't? Obviously not.
Regulations can't defend hordes of gullible suckers.
It's a two-part problem: consumers need to be equipped to defend themselves with basic critical thinking, and regulations need to punish abusive entities.