I've harped on this before, but I think it's relevant here, too-- I'd love some kind of FLOSS sandbox, FLOSS centralized social network, and FLOSS "good-faith" adversary group to help invalidate ineffective and ineffectual software/tech.
If someone starts down the rabbit hole of encrypted email, Tor, key-signing parties, etc., there should be a safe place they can go (physically and/or online) to get absolutely walloped by good-natured hackers who will do everything short of permanently harming the person's hardware/reputation to show them they don't know what the hell they are doing.
Without this, these efforts will fall flat because the users (and even devs in many cases) aren't doing a good job of testing whether their systems work. The only sure way to test them is to get the internet to hate you, and that's too big a price to pay for testing a privacy system.
It's a bit like someone running benchmarks and celebrating their algorithm as the most performant without realizing the compiler simply optimized the algorithm away. Chances are that person is going to have a bad time when the thing goes into production.
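To make that analogy concrete, here's a minimal, hypothetical C++ sketch of the pitfall (the function name, constants, and numbers are mine, not from the comment). Built with optimizations (e.g. g++ -O2), the compiler is free to delete the first call entirely because its result is never used, so the "fast" timing measures nothing; routing the result through a volatile sink keeps the work observable.

    #include <chrono>
    #include <cstdio>

    // Pure function with a loop-carried dependency, so the work cannot be
    // folded into a closed form; names and constants are illustrative.
    static unsigned long long churn(unsigned long long n) {
        unsigned long long h = 1469598103934665603ULL;
        for (unsigned long long i = 0; i < n; ++i) {
            h ^= i;
            h *= 1099511628211ULL;
        }
        return h;
    }

    int main() {
        using Clock = std::chrono::steady_clock;
        const unsigned long long n = 200'000'000ULL;

        // "Benchmark" 1: result discarded. Under -O2 the optimizer may remove
        // the call altogether, so this can report ~0 microseconds.
        auto t0 = Clock::now();
        churn(n);
        auto t1 = Clock::now();

        // "Benchmark" 2: result escapes through a volatile sink, which forces
        // the computation to actually run.
        volatile unsigned long long sink;
        auto t2 = Clock::now();
        sink = churn(n);
        auto t3 = Clock::now();
        (void)sink;

        auto us = [](auto a, auto b) {
            return std::chrono::duration_cast<std::chrono::microseconds>(b - a).count();
        };
        std::printf("result discarded: %lld us\n", (long long)us(t0, t1));
        std::printf("result observed:  %lld us\n", (long long)us(t2, t3));
    }

Dedicated benchmark harnesses handle this more cleanly (e.g. a "do not optimize" marker), but the volatile sink is enough to show the difference between measuring the work and measuring nothing.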
There's also a fringe benefit: users/devs who don't give up and keep getting walloped will quickly realize the value of credential revocation and tell others about it.
Edit: just so it's clear-- the point is for the user/dev to be able to enter the group/place when they wish to be taught, and leave when they are done. That's the current problem with testing by making the internet hate you-- you can't turn it off.
Yeah, I'd like to see something like this too. The TV show Hunted might be a real-life opsec version to some degree, but an online system open to anyone who wants to participate would be a lot better. It could even be tied into a bounty system, where you put money in and ask the site to split it among whoever manages to find your personal information or bypass your systems.
The problem with privacy is that there's no general consensus on what "privacy" entails. People bloviate about it being a "human right", or even a "fundamental" human right, but in reality, what we probably want is "not to be judged unfairly", and privacy is one possible means to that end.
You can't have a "right" to have other people not think about you.
For instance, no one would care about medical privacy if employers and insurance companies didn't take actions on medical conditions.
You can see this nicely illustrated in medical talk radio shows. People will call into these things and go on about all the pus and lesions and other disgusting things they have, because that's considered acceptable in that context. But normally people would be mortified if that information got out.
> For instance, no one would care about medical privacy if employers and insurance companies didn't take actions on medical conditions.
I do, and so do many other people. It's very common, even in situations where employers and insurance companies aren't a consideration, for people to keep medical issues to themselves. It's common for people to present a healthier version of themselves to the world, even when asked directly.
> People will call into these things and go on about all the pus and lesions and other disgusting things they have, because that's considered acceptable in that context. But normally people would be mortified if that information got out.
That sounds like selection bias. What proportion of the population wouldn't phone in?
Encryption deals with secrecy, though. We do talk about "private" and "public" in encryption, but that's an abuse of the terms. "Private" is shorthand meaning inside a secure perimeter, and "public" means outside the secure perimeter.
It's more interesting to differentiate the social meanings of "private" and "secret": When I send a letter in an envelope, it's private. It's not for your eyes. It might not be secret, however, because I might not mind the information in the letter being publicly revealed.
That distinction between "private" and "secret" is something the opponents of encryption forever try to erase, by claiming that things which are not secret should not be private, which makes privacy suspicious by default instead of unremarkable. It's the snail-mail equivalent of trying to shame everyone into sending their mail on postcards, so that someone who sends "a lot" of envelopes becomes a target of suspicion, for some arbitrary definition of "a lot" that changes with context.
It's still private; it is just no longer a secret. It hovers somewhere between being a 'public secret' and 'public knowledge', without the general public being aware of it because it hasn't been published yet. In that sense the real world is very different from the web, where everything that is not explicitly protected is published.
Privacy is a coordinate for a piece of information on two axes: the Action axis and the Actor axis (High means visible/known, Low means hidden). There's the intended coordinate, and there's the actual coordinate. If an Actor performed an Action at an intended coordinate, when can the actual coordinate be allowed to differ?
For example, Action (posting this comment), Actor (me). Intended (Action=High, Actor=Low), Actual (Action=High, Actor=Low).
I intend the Action to be publicly available over the internet for anyone to read, and it is unlikely Dang will delete this comment. I intend the Actor to be hidden, because this account is a throwaway and has no other identifying comments. My VPN's IP is in HN's private logs, which are unlikely to be leaked.
Some scenarios are clear:
- A murderer intending for (Actor=Low) should have (Actor=High) and be caught
- Using the restroom intending for (Action=Low) should have (Action=Low).
Other scenarios are more nuanced:
- Offshore accounts (Actor=Low) leaked by journalists in the Panama Papers (Actor=High)
- Business trade secrets (Action=Low) revealed during product liability trial (Action=High).
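A minimal sketch of that Action/Actor model, assuming a two-level High/Low visibility scale; the type and function names here (Visibility, Coordinate, breached) are mine, not an established vocabulary. A breach on an axis is simply the actual visibility ending up High where the intent was Low:

    #include <cstdio>

    enum class Visibility { Low, High };   // Low = hidden, High = visible/known

    struct Coordinate {
        Visibility action;   // how visible the act itself is
        Visibility actor;    // how visible the identity of the person acting is
    };

    struct Scenario {
        const char* description;
        Coordinate intended;
        Coordinate actual;
    };

    // An axis is breached when the actual visibility exceeds the intended one.
    static bool breached(Visibility intended, Visibility actual) {
        return intended == Visibility::Low && actual == Visibility::High;
    }

    int main() {
        const Scenario scenarios[] = {
            // From the example above: the comment is meant to be public,
            // the commenter is meant to stay hidden, and both hold in practice.
            {"posting this comment",
             {Visibility::High, Visibility::Low},
             {Visibility::High, Visibility::Low}},
            // Panama Papers: the actor was meant to stay hidden but was exposed.
            // (The Action-axis values here are filled in for illustration.)
            {"offshore accounts in the Panama Papers",
             {Visibility::Low, Visibility::Low},
             {Visibility::High, Visibility::High}},
        };

        for (const Scenario& s : scenarios) {
            bool action_breach = breached(s.intended.action, s.actual.action);
            bool actor_breach  = breached(s.intended.actor,  s.actual.actor);
            std::printf("%s: %s\n", s.description,
                        (action_breach || actor_breach)
                            ? "actual visibility exceeds intent"
                            : "actual visibility matches intent");
        }
    }

Whether a given breach is acceptable (the murderer) or not (the throwaway account) is exactly the nuance the examples above are getting at; the model only makes the gap between intended and actual explicit.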