Help make mass surveillance of entire populations uneconomical (prism-break.org)
707 points by doener on May 1, 2023 | 273 comments



In general it's a good thing that more and more people are aware of the necessity for good security practices for all online interactions - but the belief that individual technological efforts can defeat large-scale corporate and nation-state monitoring is pretty silly. At best you'll just have an added layer of security against things like theft of credit card information by criminal gangs.

If you actually want to do something like communicate with a journalist while hiding your own endpoint from exposure, you have to go to fairly ridiculous lengths, such as acquiring a laptop used only for that purpose with no associated identifying information, connecting only through random open Wifi networks, and having a decent understanding of public-key (asymmetric) and symmetric cryptography.

Note that there is simply no way for two known parties on the internet to hide the fact that they are communicating with one another from government-corporate managers of the Internet - although it's possible to keep the content hidden, to some extent, unless your passwords get compromised, which seems fairly easy for such actors to accomplish via keylogger malware installed through backdoor attacks using secret zero-day exploits and so on.

The only real solution is the passage of data privacy laws that provide criminal penalties and which allow class-action lawsuits against corporations and governments that engage in warrantless mass surveillance or the retention and aggregation of customers' personal data in searchable databases.


This site and many guides like it are intended to help people avoid mass surveillance rather than targeted surveillance. Confounding the two threat models seems intended to confuse and exasperate people.


So, let's say the NSA is collecting data on every person on the web, and they're able to see who is using these 'mass surveillance avoidance tools' and who isn't. The former category then actually stands out, and its members become targets of more intensive surveillance because they're using tools that let them hide from surveillance to a limited extent. Using such tools would trip the 'strong-selector' metadata collection system and flag them for further (targeted) examination, i.e.

https://en.wikipedia.org/wiki/Turbulence_(NSA)

This is of course what an outfit like the STASI or Gestapo would do, isn't it? If you're actually trying to hide from surveillance, the best tactic is to hide in plain sight, maintaining a cover story consisting of bland normal online presence that doesn't draw extra attention.

Of course living in an authoritarian panopticon and having to hide in this manner is an undesirable situation, and the solution is not technological, but rather political in nature. One basic issue is transparency, i.e. the public should be able to see what the intelligence agencies and corporations are up to with their surveillance programs. This is why Snowden's exposure of PRISM, XKEYSCORE, TRAFFICTHIEF, etc. was in the public interest, i.e. legitimate whistleblowing.


These are good points, but political solutions (by which I mean political changes within the system) are almost certainly never going to happen. More unrealistic than a technological solution addressing this, even.

Instead, social/cultural solutions might be the key. If only a few people use these mass surveillance avoiding tools, then yes, they become targets. But if almost everyone uses them and they become ubiquitous, the landscape changes some.


> These are good points, but political solutions (by which I mean political changes within the system) are almost certainly never going to happen.

I don't think political solutions are impossible, but if they are then our government is incapable of executing the public will. I think the key to generating this type of change is to tell a very compelling and broad story about why the current situation is unacceptable. Discussing {history lesson} or {personal security risk} doesn't seem to be a strong enough narrative. A very strong narrative can turn public opinion and force action by lawmakers. Over the last 100 years there have been a number of examples of popular opinion becoming so massive that the political system had to do something it clearly did not want to do.

* The draft is now reserved for emergency use only. Previously it was used for Korea and Vietnam, which were more about global power projections than direct threats to the US.

* The role of the US Military is moving away from World Police and limiting itself to more directly protect American interests. Troop deployments are highly scrutinized by the public and impact Presidential approval ratings.

* Cannabis went from the poster child for the war on drugs to essentially unenforced federally and openly cultivated/traded/consumed in large regions of the country. Rules on Magic Mushrooms, MDMA, and Ketamine are beginning to loosen too.

* The end of COVID lockdowns and mask mandates in the US was largely determined by grassroots actions instead of top-down decisions.


>but if they are then our government is incapable of executing the public will.

I don't think it really even tries to.


Quite right. Look no further than Snowden to understand that our legislature is complicit in these deep-state surveillance activities. They don't even want to change, let alone have the ability to.


I think the line between political solutions and technological/cultural solutions is quite blurry.

To get past that "using these tools makes you suspicious" phase, you have to convince everyone both to care and to use the tools.

Once they care that much, the political solution is also much more feasible.


The logic that anybody we can't see should be a suspect would then target our grandmas with landline phones who buy their groceries with cash or live in nursing homes. It would be a colossal waste of resources and detract focus.

The methods of the STASI were extremely crude and different to what is available today. They relied on human informants and collected lots of paper.


Or they can just filter for age, likelihood of technical proficiency (indicated by such things as education, prior employment, family, peer group, etc), and likelihood of “effective political concern” (or whatever we might call a person’s affinity for independence, skepticism, distrust of authority, knowledge of past authoritarian transgressions, knowledge of current authoritarian capabilities, and access to, and willingness to use, the non-technical resources, e.g. time or money, needed to act on their concerns).


Here is where it kind of comes full circle on mass surveillance. If you do not have those data points, you cannot filter on them. Even if someone was once sloppy on real ID social media platforms, old data is less useful.


Good thing I only have a Google account and a bank account and link nothing else to me online. Try as I might, no search engine seems to know I exist.

Sure some genius might think their model has got me nailed but even when I had a Twitter a decade ago and asked for my data Twitter had my age, gender (details I had in my profile btw) and number of kids I have wrong.

The problem with tech companies is they’re run by people and people are pudding brains who believe in magic. The problem with AI, as Chomsky said, is people will believe it. It’s true because they’ve been believing these other terribly inept systems they built, whether technological or social.


Agreed, but they're not mutually exclusive.

The political solution requires a political consensus, and that's unreliable.

Defending your liberty with tools you own requires a social consensus, and that's unreliable.

Both are subject to change over time, thus both need to be worked on and managed.


That's why those of us "with nothing to hide" should use these tools and others like Tor. Drown any potential signals with plenty of noise.


Your point is valid, but in this context, going to the next level of targeting would probably require them to burn 0days to achieve it. If that's the case, not even the NSA can afford to do that en masse. And if they did for some reason decide to make that policy, it would be a gold mine for foreign governments to set up honeypots and collect every 0day in the NSA's arsenal like Pokemon.


I think the lesson to be learned from e.g. the Cambridge Analytica scandal is that given enough computational power, mass surveillance is indistinguishable from targeted surveillance.

I feel like I should be putting on my tinfoil hat saying all of that, but the reality is that these systems are less and less throttled by the availability of human brains to process the data the automated surveillance systems collect. We made a mistake in thinking that labor costs could ever be an effective guard rail for these tools.


> This site and many guides like it are intended to help people avoid mass surveillance rather than targeted surveillance. Confounding the two threat models seems intended to confuse and exasperate people.

The important thing to keep in mind - and one that I find most nontechnical people don't realize - is that over time mass surveillance and targeted surveillance trend to being the same thing.

Centuries, or even decades, ago mass surveillance was a dictator's dream but merely a fantasy. There was no way to do it, not enough people or time, so it was impossible. You could only do targeted surveillance against selected groups or people.

With current technology these are starting to merge. You can actually spy on everyone all the time and store it for later perusal whenever you need to perform a dragnet search in the future.

We're not there 100% quite yet but every year more activities are online and more bandwidth and storage capacity makes it more and more viable to monitor everything and everyone all the time.

The legal and cultural framework to deal with this does not exist. Laws and mindsets are still focused only on targeted surveillance and cover things like search warrants.


Or at least intended to piggy back their own cause onto a superficially related effort.


> Note that there is simply no way for two known parties on the internet to hide the fact that they are communicating with one another from government-corporate managers of the Internet

Not entirely true. I could post a message on a popular forum like HN, where the message contains a hidden message.


Steganography is a real thing. I've often wondered about those meme powerhouses, like on Facebook.

I used to collect thousands of memes and just blast them to my mother indiscriminately. Then I wondered whether silly-looking memes could be carrying secret messages, or just nasty hidden stuff. I decided to stop helping traffic in that stuff.

Has anyone read/seen Mother Night? That's a real good example of how secret communication can hide in plain sight.


Steganalysis is the process of trying to separate cat pictures from cat pictures with steganographic information added. It's quite possible to do this well under ideal conditions -- where you know the proportion of the two a priori -- but much harder in real life.


> Has anyone read/seen Mother Night? That's a real good example of how secret communication can hide in plain sight.

Are there any confirmed examples from non-fiction?


In World War II, German spies used a technique called the "microdot" to embed secret messages within seemingly innocuous documents. The microdot technique involved shrinking the text of a message to the size of a small dot (about 1 millimeter in diameter) and then placing it within the text or image of a cover document, such as a letter or newspaper article. The recipient would need a microscope to read the tiny message.

The Least Significant Bit method is frequently used for the legitimate purpose of watermarking image, video, and audio IP. It is a simple technique that embeds the watermark data into the least significant bit (the rightmost bit) of some pixels of the cover image.

It is also very common for malware to hide its configuration data or payload within image files. (ZeusVM, Zberp, NetTraveler, Shamoon, Zero.T)
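
For a sense of how simple the LSB technique described above is, here's a minimal Python sketch (assuming the Pillow library; the filenames and message are made up, and the output must be a lossless format like PNG or the low bits get destroyed):

    from PIL import Image

    def embed(cover_path, out_path, message):
        """Hide `message` in the least significant bit of each pixel's red channel."""
        img = Image.open(cover_path).convert("RGB")
        bits = "".join(f"{b:08b}" for b in message.encode()) + "00000000"  # NUL terminator
        pixels = list(img.getdata())
        if len(bits) > len(pixels):
            raise ValueError("cover image too small for this message")
        out = []
        for i, (r, g, b) in enumerate(pixels):
            if i < len(bits):
                r = (r & ~1) | int(bits[i])  # overwrite only the lowest red bit
            out.append((r, g, b))
        stego = Image.new("RGB", img.size)
        stego.putdata(out)
        stego.save(out_path)  # must be lossless, e.g. "stego.png"

    def extract(stego_path):
        """Read red-channel LSBs back until a NUL byte."""
        img = Image.open(stego_path).convert("RGB")
        bits = "".join(str(r & 1) for r, g, b in img.getdata())
        data = bytearray()
        for i in range(0, len(bits), 8):
            byte = int(bits[i:i + 8], 2)
            if byte == 0:
                break
            data.append(byte)
        return data.decode(errors="replace")

    # embed("cat.png", "cat_stego.png", "meet at the usual place")
    # print(extract("cat_stego.png"))

The cover and stego images are visually identical, which is exactly why steganalysis has to fall back on statistics.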


In one of those historic ironic twists, the technology the Germans used to make microdots was created by a Jewish inventor, Emanuel Goldberg.


In 2010, the Colombian government commissioned a pop song called "Better Days" that received nationwide airplay. Hidden within the song was a Morse code message for FARC hostages (some of whom were soldiers and trained in Morse) that help was on the way.

https://www.bbc.com/news/world-latin-america-63995293


https://www.justice.gov/opa/pr/new-york-man-charged-theft-tr...

> The criminal complaint alleges that on or about July 5, Zheng, an engineer employed by General Electric, used an elaborate and sophisticated means to remove electronic files containing GE’s trade secrets involving its turbine technologies. Specifically, Zheng is alleged to have used steganography to hide data files belonging to GE into an innocuous looking digital picture of a sunset, and then to have e-mailed the digital picture, which contained the stolen GE data files, to Zheng’s e-mail account.


On the other hand, he did get caught...


If they didn't we would never know, would we? We must do something like the Border Patrol does when they seize like 1kg of drugs and use that to assume that 1,000kg got through.


Yes.


Yeah the parent's assertion seems incorrect. It's totally possible to hide such messages on the internet.


Basically a digital dead drop.


Not to mention literal dead drops with encrypted messages on thumb drives. Etc.


Maybe that explains some 'word-salad' speeches by our VP. She really is sending a hidden message to somebody who has the secret decoder ring. Then again, maybe not...


Most media outlets now have SecureDrop and guides on usage.

In the UK at least, such as this BBC page [0], as do the Guardian, Bloomberg, and many more, I'm sure.

I appreciate that it is an involved process, as you say, but it doesn't seem excessive, especially now that Tor Browser is on Android and iOS and you can use your smartphone.

[0] https://www.bbc.co.uk/news/uk-60972903.amp



This might hinder small-time criminals and companies at best from finding out who snitched on them. But in an authoritarian regime with state-level resources, or just a sufficient level of corruption, or even just a media corp run by boomers that is vulnerable to phishing, you can't count on discretion for these things. Secure tunnels and end-to-end encryption are worthless if the endpoints are easy to compromise. The above comment is right that at the very least you should use bespoke hardware that was never associated with you or anyone you know in any shape or form (in addition to the things mentioned on that site). And even then you'd have to make sure that the info you leak can't be traced back to you, at which point it becomes a game of intelligence and counter-intelligence. For example, if an organisation suspects their people are leaking info to the press, it could begin to place targeted (mis)information among employees to uncover them. This was done at Tesla last year to track and eventually bust leakers.


It's dangerous to assume phishing vulnerability is solely a Boomer thing. Tech literacy is unevenly distributed even among younger generations, and the upcoming generations that grew up on Chromebooks and tablet computing aren't that much more tech literate than old folks on the aspects of OpSec that matter. "Kids these days" don't even really understand how file systems work.


The laws don't really stop it either. The 4th amendment in the United States hasn't prevented huge dragnet style data collection and partnerships with private entities to provide access to whatever data the government wants.


Laws like the Bank Secrecy Act were passed specifically to state that 4A doesn't apply if the data is collected from a third party and not the individual directly. Nonsense, but we're here.


BSA was an outgrowth of Third Party Doctrine was it not?

They already had law enforcement integration and judicial precedent dismantling the 4th Amendment.

...Now that you have me thinking about it...sigh... Yeah. I think I see where you are coming from. Funny how you can take an Act name, and map the actual outcome to the exact opposite of what the name would imply in layman's terms.


they wrap themselves in the flag while pissing on the constitution.


The only real solution is the passage of data privacy laws

AFAIK governments empower specific agencies and groups with qualified immunity. How would such laws be enforced if an agency has immunity?


The expression of power is in who gets to decide the exception to the rules. Real power is rarely beholden to rules. That's why whistleblowers who call out illegal programs are treated like the criminal, because the laws essentially don't matter when dealing with things at that high a level.

Powerful people can lie, cheat, and steal and face zero repercussions. They hold institutional power so groups like the police will protect them regardless of laws being broken. It's not illegal for a corporation to either literally or metaphorically kill someone, because there is no body that will hold them accountable, but it is illegal to assassinate a CEO and systems will pull all stops to hold the assassin accountable.

It's the real reason why Western-style democracy ends up being a busybox for people who like rules. The people who can grant endless exceptions have addresses and beds where they rest their heads, but people without power cannot decide on an exception to the rules, regardless of how dangerous and damaging that person is.


In the U.S. qualified immunity is a creation of the judicial system, and those decisions could presumably be reversed by statute if the political will comes to exist.


No need to invoke qualified immunity; the data privacy laws that have been passed (e.g. the GDPR) make explicit carveouts for government surveillance. Yes, the carveouts are for the jurisdiction's own government only, but that's the one you should be most worried about mass surveillance from in most cases.


Arguably, government surveillance is one of the main points of regulations like GDPR. Data residency requirements make it a lot easier to do that.


>High ranking member of political party 1 does something illegal.

>Huge stink and nationwide conversation ensues.

>High ranking member of political party 2 does the same damn thing.

>Crickets.

You can even reverse the order of the events or parties. It happens a lot. Such laws, unfortunately, simply become political tools.


This depends on your filter bubble.


> but the belief that individual technological efforts can defeat large-scale corporate and nation-state monitoring is pretty silly.

Nation-states may have a lot of budget, but they still have a budget. Mass surveillance needs to have a low per-user cost to succeed. It is entirely reasonable to assume small changes, if widely adopted, could make mass surveillance economically unfeasible.


The only real solution is the passage of data privacy laws

Even your own example — a whistleblower talking to a journalist — illustrates that the fear is not of people who abide by laws, but people and organizations that don't care about the laws.

I'm not saying that there shouldn't be laws. But like almost everything involving human beings, the solution is not an if-then binary choice.

You have laws, but you also have mitigations.


You mean you want the same government that is interested in putting back doors in phones and other surveillance techniques to pass laws that keep them from doing so?


Much of the progress that has been achieved by governance was opposed by those with power.

Your viewpoint sounds like we should give up on any form of legislation the rich and powerful would not like.


What progress in the modern era with respect to privacy has the government made?


Every organisation I've worked in has had comprehensive processes for deleting personal data and only storing what is necessary due to e.g. https://www.gov.uk/data-protection

Without such laws, every business would store as much data as they could indefinitely, because $$.

All of this is beside the point that was made; your fatalistic viewpoint suggests we should make no attempt at effective governance because there are powers that oppose us. There are countries which are more democratic than others, and the populations of such countries tend to have a better quality of life and more rights.


Yes, our government is not a single homogenous entity. We can theoretically (and sometimes actually) use our legislative representatives to change the behavior of other parts.


You realize the legislators are the ones asking for backdoors because - “terrorism“ and “think about the children”.

When has the government ever wanted less surveillance power or less control over the internet?


The clipper chip idea flopped, right? So have a few other stupid, draconian, privacy-defeating bills since.

But yeah, it may not be realistic to think that we can stop the expansion of surveillance powers for TPTB and erosion of rights for the average citizen, given the consistency and persistence of the proponents of such crap. When I look at trends of the past 20 years, it seems like wherever the law has fallen short of placing everyone under a microscope, private industry has conveniently stepped in to become the 1984-telescreen service providers instead of the government.


Corporations are the government's private enforcement contractors. They'll control the internet and we'll work for them, and they'll work for government.

See! Government involved in your life!

Nice job building a firewall between society and government control.

At the end of the day it's all people. The semantic bubbles do nothing to change that it's all just people looking to externalize effort so they can live more simply themselves.


"the government" is not one person with a concrete ideology, it is an amalgamation of hundreds of people who all want different things and are theoretically beholden to their voter base.


"The government", like any other large corporation, is an entity in and of itself, and not a mere aggregation of the people who run it. It has institutional culture etc, and given what it is, said institutional culture tends to lean heavily towards a highly centralized, hierarchical approach to anything it perceives as a problem; and tends to perceive anything that threatens the applicability of this approach as a problem in its own right.


The government in the US is not beholden to “the people”.

Because of the setup of the electoral college, 2 senators per state (where RI has the same number of Senators as California), and gerrymandering, it is very much about the will of the minority.

That’s not to mention all of the things that get done by unelected officials and judges with lifetime tenure.


Ah, the optimism of youth!


We should push for laws and resist new acts that curtail our rights of privacy and free expression, but that is not a solution. We are generally on our own in making our choices of technology to use. If you go on using proprietary services and networks hoping that someday laws will suddenly fix all the problems, you are seriously deluded or naïve.


You mean... like some sort of a... a Constitution...

One with lines like:

"The Congress shall make no law..."

"The People shall..."

"...shall not be infringed."

"All powers not mentioned here are reserved for the States, or ultimately, the People."

We're beyond the point where good faith can be assumed, and we're down to brass tacks. Our judiciary has shown they are more than willing to creatively reinterpret precedent as they see fit. Our Executive is acting more and more like a dictator, with an entire corpus of executive-based lawmaking at his disposal (Administrative Law). The Legislature has abdicated responsibility for reining in excesses in the interest of the little people rather than established incorporated interests and high-value donors.

And the People are left with a choice. Amongst all this dysfunction, what should they do?


The problem you describe is far more pervasive than that: https://magarshak.com/blog/?p=362


Why isn't full transparency and an end to criminality a viable solution?


It might be, if it also applies to everyone in the government. Then all of us can keep them accountable.


> The only real solution is the passage of data privacy laws that...

Even the GDPR with its huge impact and global implications does not apply to law enforcement agencies. So I wonder who would make such a law?


The main problem I see is that people are completely distracted by privacy from corporations - when what we really need to be worried about is privacy from our own governments.

So much ink is spilled talking about cookies, ads tracking, etc. But really what's the worst a corporation is going to do? Try to sell you something?

Meanwhile, we continue to allow our governments to regulate and legislate ever more intrusive invasions of our privacy. And they can put us in jail, or worse.

This also gets blurry as governments take increasing control of companies, to the point that some are just about arms of the government, surveilling us in ways that the government can't (yet) do on their own - and being forced to pass that data to the government under penalty of law themselves.


Until these companies turn around and sell that data to the government, which doesn't require a warrant since the company is volunteering to provide it, and if they don't want to sell it, the government will happily use one of its loopholes around warrants to demand it anyway. The government does this constantly with location data[1], browsing history[2], license plate scanners[3], and more.

We should be pushing to close these warrantless search loopholes, but in the meanwhile the only pragmatic way for an individual to maintain privacy is to prevent any and all third parties from collecting the data to begin with. After it has been collected, you have no control and no reasonable expectations of how it will be used.

[1]https://www.eff.org/deeplinks/2022/06/how-federal-government...

[2]https://www.nbcnews.com/tech/security/can-government-look-yo...

[3]https://arstechnica.com/tech-policy/2020/07/cbp-does-end-run...


> prevent any and all third parties from collecting the data to begin with.

This is what I have been doing. What can you add to this?

On phone disable Bluetooth, disable precise location, disable location and infrequently turn that on for something like photo geotag, use carrier phone number for nothing and use phone numbers from googlevoice or others. Put cars in LLCs or Trusts with address at POBox or UPS. Never use home address for mail unless it is from family. Put utilities in name of LLC or Trust.


> Until these companies turn around and sell that data to the government...

Yes, the larger issue is that the government has just outsourced a lot of the work to corporations to get around the Constitution. If there were any integrity left in the US government, there would be a reckoning about this. We worry about regulatory capture, but the bigger problem is deep state/military-industrial complex capture.


I disagree.

Companies are building surveillance infrastructure that is:

* way ahead of governments in terms of technical capability (NSA and top-level intelligence agencies are outliers, but your average government IT departments are too incompetent to be of any threat)

* widely accepted and not regarded as malicious - not even the NSA can get people to voluntarily include some malicious Javascript on the vast majority of public-facing webpages, yet Google Analytics managed exactly that

* profitable and self-sustaining - the government doesn't have to spend money on building and maintaining it, nor needs to justify its budget/spending

Those companies however are still at the mercy of governments, either via violence/coercion (in the US, they have to obey a national security letter by law, or armed goons will show up) or mutually-beneficial relationship (a lot of companies either outright sell this surveillance data to the highest bidder, or don't outright sell it but will be happy to let the government in on it in exchange for a good relationship and favors in the future).


I'm completely uninterested in the distinction you draw here.

Actually, several distinctions. What do you mean 'our OWN governments'? This is a world where hostile foreign governments can wreak absolute havoc… including by popularizing arguments literally the same as the one you're making, for the purpose of undermining that government and fomenting revolution for their own selfish, imperialist purposes.

I can think of two great powers (okay, one formerly great) actively doing this within my lifetime, and the formerly great one was doing it as hard as it possibly could, within the last ten years, and is still doing it.

I don't trust your argument at all. You're leaving out significant things, conveniently.


The difference between my own government and a foreign government is twofold: 1. It has always been illegal for a foreign state actor to surveil me, and in any case has no authority over me and can't put me in jail (as long as I'm not in their country). 2. My own government is legally entitled to surveil me and collect my personal data, and can indeed put me in jail.


It is not illegal for a foreign state actor to surveil you. In fact, governments sign agreements with other governments for them to surveil you while we surveil their citizens, and trade the information. This gets around the illegal act of a government spying on its own citizens.

Your government mass surveils foreign citizens. But it can't legally mass surveil its own citizens.


It has always been illegal for a foreign state actor to surveil me

This elicited a chuckle, though probably not for the reasons you intended.


>So much ink is spilled talking about cookies, ads tracking, etc. But really what's the worst a corporation is going to do? Try to sell you something?

Cooperate with domestic and remote governments, work with the deep state, influence elections and work with candidate teams, and so on. There are also companies with more reach and resources than entire countries.

Plus, corporations have been known to downright spy, threaten, beat up, and murder people when multi-billion interests are threatened (e.g. by local populations wanting clean water or better working conditions).


You write this as if corporations don't lobby on behalf of their owners, and as if 'trying to sell you something' couldn't be any more dystopian than someone at a store counter greeting a prospective customer.

Corporations lobby governments to provide security, in part because some corporations want to sell security products and governments (at multiple different scales) are the biggest customers for that. Those who do not themselves sell security sometimes demand security not so much for the safety of customers and staff as for the customers that they would like to have. Security is big business. Before dismissing this as some marginal phenomenon, you might want to reflect on the proportion of the economy that revolves around security. Though a few years old now, this article raises a number of surprising questions that I've yet to see effectively answered: https://www.brown.edu/Departments/Economics/Faculty/Glenn_Lo...


What sort of argument is this? I prefer a corporatocracy to a democracy? You elect officials for your government, you have no say in what Google does.


While corporations aren't as powerful as the government, they use data for more than "trying to sell you something."

Network effects cause society to coalesce around the same large corporations for social media, online shopping, payment processing, etc to the point that it can be hard to function in society without their services. Once their services are used by virtually everyone, their governance becomes governmental in its impact. On a weekly basis we see programs like the app stores, ad markets, search algorithms, and payment processors enforcing opaque policies that close businesses and end livelihoods, all based on an automated interpretation of the data we share with them.


> The main problem I see is that people are completely distracted by privacy from corporations - when what we really need to be worried about is privacy from our own governments

Governments have grown to rely on corporations to spy on their own citizens, so being worried about corporate surveillance is being worried about government surveillance.

However, between the two (for the vast majority of people), corporations pose a more realistic threat than governments do.


You speak as if corporations are separate from governments.


I did allude to the gap there closing. I still see them as distinct in most countries, including my own. But I fear we're allowing the gap to close further.

As an aside, I think a lot of people want this gap to close, but for entirely unrelated reasons more related to political and economic goals, with the loss of privacy and individual autonomy being an unconsidered consequence of this.


True. For all their flaws, Google doesn't have the ability to send men with guns to my house to abduct me if I don't pay them 50% of my income.


But Google built a surveillance machine much more advanced than the gov can even dream of, so the guys with guns just have to go to Google first to get your data and then they can go to your house.


But if your phone OS / cellular firmware is compromised then e2e or even at-rest encryption won't matter. Anything you can see on your phone can be seen.

I think a more rational alternative is to consider that everything except your unexpressed thoughts and emotions is already logged. At some point, this will become true (if it ain't already), so....then you at least will be ahead of that curve.

So if everything you do is monitored, how do you achieve privacy in such a world? That is the question, I think.

In fact, it's similar to how a corporation or nation needs to think about protecting their own secrets. They have to assume compromise (of people, systems, etc)...how do you confuse and compartmentalize what you want to protect?


Don't let perfect be the enemy of good. The likelihood and prevalence of deeply low-level monitoring is orders of magnitude less than with modern apps and SaaS, where it is virtually guaranteed. It's an additive game and you can dramatically reduce invasions, even if you can't eliminate them.


See https://news.ycombinator.com/item?id=35698547

Even at the hardware level we have real examples of exfiltration.


While perhaps true in many cases, this example was untrue: https://blog.brixit.nl/nitrokey-dissapoints-me/


Nitrokey article improvement: https://twitter.com/grapheneos/status/1651601840520278018

> Per our request, NitroKey has fixed one of the main issues in nitrokey.com/news/2023/smar…. XTRA downloads are done by xtra-daemon in the OS, not firmware. It also does use HTTPS by default, but the OS can override the default URLs via gps.conf and some OSes do override to HTTP URLs ... NitroKey is correct that xtra-daemon has support for sending information on the device including device model, serial number, etc. They're also correct that the user is never asked about it. It's less of an issue than SUPL which sends nearby cell towers, phone number and IMSI.


> likelihood lower

Not if we take the lore around mass surveillance into account (Snowden etc)


My guess is that US surveillance and intelligence agencies operate like many other large government bureaucracies: with high levels of administrative incompetence, mismanagement of resources, unscrupulous decisions in the service of individual risk-avoidance/status-seeking, etc.

I suspect Snowden does them a service by making us all feel like we're under a big, scary, watchful eye.

In actuality, they probably do collect unholy amounts of data, perhaps even illegally... But they probably don't know how to process it effectively and if they do, they probably struggle to turn those insights into action.

It's easy for them to collect Facebook friends and Google searches, not Signal messages and DuckDuckGo queries made in an incognito window. It's easy because they can demand tech companies "hand it over" without actually hiring and assigning the skilled and creative people they'd need to figure out how to overcome the modest obstacles privacy-conscious people erect.

If you're a decision maker in one of these orgs, you'll be recognized and praised for cheaply trawling for privacy illiterate pedophiles and easy-to-identify, state-affiliated Twitter bots.

OTOH, you might not get promoted for spending hundreds of millions of dollars breaking into end-to-end encrypted convos only to find out that 99.95% of the data is benign and the .05% that's substantively suspicious requires a warrant and an FBI surveillance team to gather enough evidence to actually go to trial.

tl;dr: cheap measures to protect your privacy probably go much further than you'd think, because while they can be overcome individually, they're not worth overcoming at scale.


The idea that you can assume privacy because whatever country or corporation is surveilling you is incompetent or bureaucratic sounds reasonable, but what's the probability of that? It's not 100%.

If privacy is important, assume compromise and work around it, sounds like the more secure strategy.


> It's easy for them to collect Facebook friends and Google searches, not Signal messages and DuckDuckGo queries made in an incognito window.

It's entirely unclear that it's difficult for them to collect duckduckgo searches (Why assume they wouldn't collect the info from duckduckgo?). Specific signal messages might be harder, but even signal collects and stores sensitive info (like a list of your contacts) forever in the cloud and in ways that could be easy for the government to get their hands on. Cell phones are basically designed to leak your info like a sieve and allow google/apple to do whatever they want (or are ordered to do) at any time without notice to users anyway, so anything done on them is probably capable of being observed and/or recorded.

You're right that most of the time it's probably not worth the effort to go after everyone for everything, but as long as they're collecting and keeping the data they can always decide to spend a little more effort if you become a problem for them.

I agree, people should take whatever steps they can to protect themselves whenever they can, since it can only help, but man, "Take care to never become inconvenient to those in power and hope they are incompetent when they finally come for you" is sure no comfort.


1. It's completely possible to treat your phone as an insecure device. Maybe I'm naive, but I think it's possible to run a daily Linux system with a reasonable assumption of privacy.

2. When you act as if you are being monitored and judged for your words/actions, you begin to self govern them to be more acceptable to the presumed omnipresent agent. Sometimes the fear of being surveilled is as powerful as actual surveillance.


> 1. It's completely possible to treat your phone as an insecure device. Maybe I'm naive, but I think it's possible to run a daily Linux system with a reasonable assumption of privacy.

Your computer is running several operating systems under ring 0 that Linux has no idea about, same goes with many components and peripherals. Those operating systems have direct memory access.


But not if we assume compromise.

How would you hide in plain sight? That is the question.

Bruce Lee said: be water. But maybe you need to: Be Hamlet


Stab the guy behind the curtain ?


Hahah, that's an unexpected connection that's funny!

No, I was thinking more, be a dissembler.


Talk to skulls.


So if everything you do is monitored, how do you achieve privacy in such a world?

I might put a physical paper notebook in a reporter's pocket, then meet with them and buy them a coffee or tea. Or I might give them a USB drive with a self-decrypting file and instructions for how to use it securely.

Or if I am feeling silly I might borrow a few hundred digital billboards and just broadcast the data to everyone and let the public sort it out. FoghornBlowing?


This. Given that these NSA programs have had 10 years to evolve and expand, and that the NSA can easily get access to effectively the entire planet's mobile devices by showing up at just two American companies' HQs with guns and gag orders, it seems almost a certainty that they'll have OS-level access. So I'd highly doubt any standard mobile device is NSA-safe.

In terms of dimensionality, I actually do not think it would physically be possible for the NSA to warehouse all the raw data they could Hoover (haha get it) up, so that might be a bit comforting. And certainly whatever data they do Hoover up will mostly never be directly seen by a human due to physical constraints on eyeball time available to spy vs produce content. That yields one answer to your question, which is to just not attract enough attention that they decide to turn on full logging and comb through your life.


AI can probably drastically reduce the time it requires to go through a massive trove of data.


> So if everything you do is monitored, how do you achieve privacy in such a world? That is the question, I think.

Proposals that suggests users get better at managing their own security are doomed. Most people don't understand the absolute insecurity of their door locks, let alone the state of their digital devices.

One possible answer is noise. There should be "digital noise generators" that create fake digital fingerprints for you everywhere. The goal isn't to make the landscape more pristine, it's to make it so "dirty" that it has no value to anyone anymore.
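
A toy sketch of that noise idea in Python, using only the standard library; the query list and the DuckDuckGo HTML endpoint are just placeholders, and real chaff would need far more realistic timing, headers, and browsing patterns to blend in:

    import random
    import time
    import urllib.parse
    import urllib.request

    # Placeholder decoy queries -- a real generator would draw from something
    # that statistically resembles the user's actual interests and schedule.
    DECOY_QUERIES = [
        "weather tomorrow", "pasta recipes", "used bikes near me",
        "how tall is the eiffel tower", "best hiking boots",
    ]

    def send_decoy(query):
        """Fetch a search results page and throw it away, like a browser would."""
        url = "https://duckduckgo.com/html/?q=" + urllib.parse.quote(query)
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            resp.read()

    if __name__ == "__main__":
        while True:
            send_decoy(random.choice(DECOY_QUERIES))
            time.sleep(random.uniform(60, 600))  # irregular gaps look less robotic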


You are talking about targeted surveillance rather than mass surveillance. That is not the purpose of this guide and similar others. There are advanced guides for people in that threat model.


That's a good point, but I think the trend over time is, what was a targeted tool, is now mass.


An important part of the problem is that super complex software and hardware stacks are required today for even basic tasks. This limits customers' choice, essentially forcing customers to use these bloated, insecure, obscure products.

Even browsing the plain text Hacker News forum requires a web browser, so complex that only few companies in the world can produce it. And it runs on a super complex OS.

I wish we had something like a "basic computing / communication device" specification. Simple, limited, and transparent, so that everyone can produce it. With small software that would allow exchanging messages and browsing information online. Not all data formats, but a limited set of formats, good enough for basic communications.

Better a frozen spec, not a moving target. (Or a very careful evolution, with very rare release of new versions)

Good publishers, web sites, etc, could test their systems against the "basic comp / comm device".


> Even browsing the plain text Hacker News forum requires a web browser, so complex that only few companies in the world can produce it.

This doesn't take anything away from your point, but HN can be browsed with simpler browsers like lynx, w3m, Ladybird or NetSurf, which are all written by a small set of people.

(they do rely on quite complex operating systems though)


Maybe the main lock-in is in drivers for proprietary hardware. Platform software like bare bones Linux / Android and Firefox is probably not the root of the lock-in.

If so, maybe it's better to think of two devices: trusted device, and fancy device.

The trusted device is a simple, low power and cheap device, maybe without even a camera. Just touchscreen, wi-fi and mobile internet. Fully open drivers and hardware spec. Running bare bones Linux / Android, Firefox. Easy to root and reinstall the full software stack. There can be an open specification for it, as a "basic comp / comm device": 1 GB memory, CPU of certain performance, etc.

The fancy device is a cutting edge proprietary flagship device. Great power, but risk of tracking.

Every user can have both types of devices.


Is it legal to have private conversations discussing actual plans for acts of terrorism?

I assume you have to pierce some veil of reality, make a purchase, buy a ticket, etc. before it becomes a crime.

My point is if we can make surveillance costly by filling the airwaves with false positives that are just a group of bots plotting a terrorist act? I assume that is legal to do.

Edit - ok, so it definitely seems like this is not clever at all and almost certainly a crime. Don't do this!


No, that would likely constitute criminal conspiracy, even if you have no intent to commit it.

https://leginfo.legislature.ca.gov/faces/codes_displaySectio...


> (2) Falsely and maliciously to indict another for any crime, or to procure another to be charged or arrested for any crime.

So this means police informants in connection to the police are also committing a crime? Far too often people with recorded criminal activities are baited into getting another person caught for a more severe crime like terrorism, in exchange for being let go.


>So this means police informants in connection to the police are also committing a crime?

It says "falsely and maliciously".

So, if what they say is true, it's not a crime.

If what they say is false but they believe it to be true, it's not done "maliciously", so it's not a crime.

If what they say is false and they know it, yeah, it is a crime.

But if the police and court believe them, or if it's the police itself that pressured them to point their fingers at some person they wanted to get, then it doesn't matter whether it's a crime or not, as it won't be prosecuted, and the police not only doesn't care, but explicitly wants the false testimony.


Ok. So it is in regard to a false prosecution or framing someone else for a crime. That is not the same as baiting someone to commit a crime.


> Is it legal to have private conversations discussing actual plans for acts of terrorism?

Not in most countries, no.


Not legal.

The problem with conspiratorial talk is that while one person may fully not intend to act, it could inspire and/or manipulate others into committing acts. The blame is shared by all for conspiring and creating the environment where such acts can emerge.


What if an unhinged language model generates all this noise talking to other language models, with no humans involved at all? The only human involvement would be an instruction to start spouting some believable bullshit on controversial topics, plus granting access to some private messaging tools and providing a contact list of other language models to talk to.


If the human did this for "plausible deniability" to avoid being prosecuted, they shouldn't bet on it.

If they can get them, they will. The law is more of a technicality for such cases.


> spouting some believable bullshit on controversial topics

So 80% of reddit and twitter is illegal conspiracy plotting/planning?


We have a wealth of FPS games [aka combat sims]; it would be easy to include terroristic operations in this type of product.


There's great sketch by the Whitest Kids U'Know on the legality of such statements. https://youtu.be/gmiKenqLVAU


Bach's Brandenburg Concerto #3 is always a nice choice. One of my faves


In general, you can discuss such ideas but agreeing to perform them brings you up against the line of committing crime. In the US, you have to commit some material act in furtherance of the conspiracy, but that can be almost anything.

For example, if you and your friends agree to rob a bank and decide you'll need gloves, ski masks, and some kind of weapon, buying 6 pairs of rubber gloves at the store the next day would qualify as a material act. You don't need to acquire all the expected tools or go anywhere near the bank you discussed robbing.


I've read news stories of people caught, charged and everything, just for discussing those things.

So, no actual act is necessary.


Only if you don't get caught; plenty of schools have been evacuated because people mentioned committing a crime without actually intending to execute it.

That said, flooding the systems with false positives is definitely possible, but it would be used as a cover for actual terrorist attacks.


>Only if you don't get caught;

Well, that's true for any crime tho, so doesn't answer the parent's question.


you seem to be describing "swatting"


Sometimes it’s “bomb threat”. I got a free day my last year of high school because someone claimed to have set them up the bomb.



I'm pretty sure it's illegal unless you're an intelligence operator trying to entrap people, in which case it's your job


It quickly gets less fun when all your family is woken up in the middle of the night by a SWAT team, your kids are yelled at, your equipment is seized, and your partner asks for a divorce.


Was feeling concerned when I read this:

> Canonical’s Ubuntu is not recommended by PRISM Break because it contains Amazon ads and data leaks by default

https://www.eff.org/deeplinks/2012/10/privacy-ubuntu-1210-am...

I clicked the link and it was from Ubuntu 12.10. That was more than 10 years ago, and that release has been unmaintained for years. It becomes hard to take this seriously when they make it seem like it is an ongoing issue (or do not link to a newer article if it is).


You are free to step in and help :)


I was surprised to find Authy in the "Avoid" column of the 2FA apps in Android. Anybody knows why? I prefer something open source like Aegis that I can backup myself but didn't hear anything bad about Authy in particular.



That makes it clear. Authy is an unacceptable piece of software.


Dammit, I've got a lot to migrate then. Another time-consuming project for the sake of my damn'd ideals.

Authy was my escape from Google Auth.


Relatedly, does anyone know about 2fas.com ? They have a very nice app and a lot of installs but it's unclear how well vetted the oss is.


Yeah, kind of surprising. I guess it's a private company that manages your 2FA code backups, and they could theoretically lock you out.

I avoid Authy for a different reason: after upgrading phones, my backup password (which is 100% correct, trust me) is not unlocking my archive. I switched over to iCloud Keychain and will never look back.


I use google authenticator. I backup the QR code in the TrueCrypt vault when I add a new account to the google auth. I am not sure how secure it is, but I am very scared of losing access to google authenticator.
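
For what it's worth, the QR code is just an otpauth:// URI wrapping a base32 secret, so backing up that secret (or the QR image) really is enough to regenerate codes in any TOTP app. A small sketch, assuming the third-party pyotp library; the secret below is made up:

    import pyotp

    # The string after "secret=" in the otpauth:// URI inside the backed-up QR code.
    secret = "JBSWY3DPEHPK3PXP"  # made-up example secret

    totp = pyotp.TOTP(secret)
    code = totp.now()
    print(code)               # the same 6-digit code the authenticator app shows
    print(totp.verify(code))  # True: any TOTP app seeded with this secret agrees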


This seems great until you dig into some of the recommendations. A tool to save webpages is not an alternative to a news reader. A dynamic DNS service is not an alternative to Google public DNS, etc, etc

I can't see this making any kind of dent on the average person with these kinds of recommendations.


Word, I was a bit surprised by the "email" section as well. As a better alternative to Gmail, I would have expected to see e.g. protonmail or fastmail, but instead saw... thunderbird, an email client? Which doesn't make a lot of sense


It makes plenty of sense.

On https://prism-break.org/en/all/#email, they state "For more email providers, take a look at Privacy-Conscious Email Services. Please decide for yourself whether if you trust them with your data. For more discussion about safe email providers, please see issue #461.".

They even state that Thunderbird is a "Extensible, cross-platform email client.". The implied idea being to use Thunderbird to access a "Privacy-Conscious Email Service".

I use Gmail as an email client more than I use it as an email provider because it has an External Accounts function. I apply Google's "Apps Script" system to my email to do things that you could do in Outlook's full-fat client or maybe in Thunderbird with some extensions.


If it is actually a malicious site trying to herd people toward exploitable behaviors it'd be following the Nigerian Scammer tactic of pre-screening by allowing simple errors to scare off more savvy inquiries.

This would go along with the rather crude emotional appeal.

That said, it hardly seems an efficient way to exploit people… though there are useful points. If you can get somebody credulous to use something that's compromised, and you're acting like a baleen whale and accumulating whole populations of credulous government-suspicious folks whom you've steered towards some mechanism where YOU can surveil them, that's got to have some usefulness.

People absolutely don't take into account the effectiveness of loosely manipulating entire populations in selective ways. You never need to select an individual and 'make' them take any action at all. You only have to cultivate the conditions for the outcome you want. Facebook might have discovered this first, but the idea sure caught on quick.


Maybe you should raise these concerns on https://gitlab.com/prism-break/prism-break/-/issues ?


I started going through the list and found several "wait, why is this to be avoided?" mentions. I started looking around for an explanation on their site and can't find anything.

There doesn't appear to be any clear explanation or rationale. There is, however, the ever unhelpful libertarian mantra "... do your own research ...". Whenever I hear those words uttered, I immediately question the legitimacy of the source.

Hiding your research (or lack of) and telling people to do their own is a manipulation. It's telling people to either take you at your word or invest a lot of time and energy into research which might yield a similar conclusion.

Research is meaningless unless it's documented and shared so others can evaluate it.


> Hiding your research (or lack of) and telling people to do their own is a manipulation

Yep. And even worse, since those people are also telling you what conclusion they want you to reach, they're encouraging people to engage in the illusion of research (starting with a conclusion and looking for confirming data points) rather than real research.


Getting regular folk, myself included, off of these popular platforms - especially Gmail - isn't economical for most people. Just changing your email address and migrating to another provider isn't an easy sell.

Whilst the motivation of this project is commendable it's not going to reach the volume of folks needed to make a difference.


Why? Signing up for FastMail or Migadu and redirecting your mail is very easy. If you use a custom domain, you don’t even have to change your address when you migrate to a new provider.


> Signing up for FastMail or Migadu and redirecting your mail is very easy...

> If you use a custom domain...

I'd suggest trying to talk the average American into doing that. You'd have to be quite out of touch with everyday people to think this is a battle you can win.


I spent several days evaluating mail providers and migrating. It is work.


> by encrypting your communications and ending your reliance on proprietary services.

Well, unfortunately, you can't encrypt your location and pretty much every mobile phone is sending detailed GPS, accelerometer, barometer, WiFi, and other sensor data back to the mothership multiple times an hour.


You fundamentally cannot have location privacy with cell service. A cell tower will provide service to a limited physical region. When someone dials your number, the telecomm company needs to know which tower to route your call to. If telecomms didn't know where you were then cell service would not work. Sure, WiFi/GPS can provide more detailed location information and are commonly sent into the cloud and this is a problem too.


This is technically easy to fix on any Android or Linux phone. Willingness by people to make the changes is the challenge.


Google Play Services is not easy to rip out, and it installs itself as a "better" location service provider for the device, meaning that app requests for location go through it. And it can and does upload "anonymized"[1] data of all these types constantly as part of its normal operations. You can withhold consent to uploading "anonymized" data by paying careful attention to click-through agreements[2] and explicitly turning off "high location accuracy"[3] in your Android settings.

[1] Neither the technical details of how this data is anonymized, nor how it is analyzed and used to "improve products", are public.

[2] The implications of each click-through agreement are, as usual, non-obvious.

[3] The name of this mechanism keeps changing and it is harder and harder to find and disable.


Within a few minutes I can deactivate Google Play Services on an Android phone using ADB. The Universal Android Debloater, available on GitHub, makes it easier. The better solution is a custom OS forked from Android, but what is possible depends on the device.
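
For anyone curious, a minimal sketch of the ADB approach (assumes adb is installed and USB debugging is enabled; the second package here is only an example, not a vetted debloat list, and disabling these will break Google-dependent apps):

    # Rough sketch: disable selected packages for the main user via ADB.
    import subprocess

    PACKAGES = [
        "com.google.android.gms",  # Google Play Services
        "com.android.vending",     # Play Store (example; breaks app installs/updates)
    ]

    def disable(package: str) -> None:
        # "pm disable-user --user 0" disables the package for the main user
        # without root; it can be re-enabled again later with "pm enable".
        subprocess.run(
            ["adb", "shell", "pm", "disable-user", "--user", "0", package],
            check=True,
        )

    if __name__ == "__main__":
        for pkg in PACKAGES:
            disable(pkg)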


Disabling Google Play Services isn't the problem. The problem is that a huge number of apps will not run without Google Play Services, because Google puts all their fun/useful APIs and toolkits in it so they can be closed-source.


Yes, if you are dependent on Google or Google-integrated apps, simply deactivating it on a standard OS may not be a viable option for you. GrapheneOS has built shims to make most apps work, but you might have a problem with another OS. I deleted my Google account many years ago and have not had Google Play on my devices since then. I use very few proprietary apps anyway.


GrapheneOS has no Google Play Services. OpenStreetMap works great without it. Google Maps in the mobile Tor browser works well too.



Threat models vary wildly. Someone's location may not be a consideration.


Thanks to this article I learned about Nextcloud[0], which at first glance looks like a really nice self-hostable alternative to the Google Suite.

[0]https://nextcloud.com


And Dropbox.


Should've added (2021) at the end because the site itself hasn't been updated in years.


That would imply it's out of date, which it apparently isn't. The website itself doesn't need updating if it stays relevant.


Plenty is out of date on it.

They also still link to https://prxbx.com/email/ from https://prism-break.org/en/all/#email, which doesn't consider https://techcrunch.com/2021/09/06/protonmail-logged-ip-addre...


The only meaningful use case for privacy tools is to use them to organize to create enough influence that you can reduce the need for privacy tools. If you aren't doing that, the tools are just tolerated by govts because they neutralize your resistance and explicitly enable mass surveillance, imo.

How many Signal users are there, and why aren't there enough of us to drive the political agenda? One big problem with most privacy tools is they don't name their threat actor (it's your own governments), and using the tools doesn't translate to a vote for anyone who will do something about the problem. On top of it all, installing these tools acts as a reliable political metric for popular intelligence community approval.


I like to use Thunderbird not just for e-mails, but also for RSS. When using the global inbox, you can have emails and RSS in the same list without noticing the difference.

I don't have many RSS sources (11) and some have news just every other month. That way I don't have to open a separate program and when something comes up, I see it right when I read my emails.

Did you know that you can use RSS to get notified when someone writes a reply to your HN comment? I love it: https://hnrss.github.io/#reply-feeds


It has been clearly reported, but in case folks are not aware: PRISM is not a system for deep persistent access into tech platforms, it is the internal NSA designation (code name) for data sourced via FISA court orders. The FBI actually secures the court order and then requests the data.

https://en.m.wikipedia.org/wiki/PRISM

If a company stores data about you, it is possible it could be subject to a FISA request. Data which is end-to-end encrypted would still be provided if it satisfies the criteria in the FISA court order. But it would be up to the NSA to try to break the encryption. Metadata might or might not be encrypted.


It is important to mention that FISA requests have something like a 98% approval rate and have a history of being overly broad, allowing for dragnet data collection. So PRISM does still allow for very wide-sweeping data access from tech companies. But yes, end-to-end encryption is one of the few things that can help with this.


That's... not as bad as originally advertised, is it?


I've been a proponent of fuzzing - having systems that do noisy, inauthentic engagement that is statistically indistinguishable from the real thing.

Essentially, it's to give this scraping an intolerably low SNR, so that they have to discard their metrics as useless.


For a better and more up-to-date list of alternatives, see https://www.privacyguides.org/en/


While I'd definitely want a huge win for privacy, the current suite of tools (the ones we need to avoid) is extremely convenient (especially the collab/social ones) and benefits from network effects.

We should be aiming for a solution that is private while also being as convenient as the centralized ones. Otherwise, even if we (the HN audience) switch, many others won't, and only a niche set of users will be using the private technologies and services.


> Otherwise even if we (HN audience) switch

This is a problem. Even the HN audience seems to struggle greatly in choosing non-proprietary and privacy-friendly solutions. While the number of privacy advocates is certainly greater here than in many other places, the general sentiment I get from reading a lot of these threads is that "If you have nothing to fear, you have nothing to hide".

Why do you think that is? Certainly a community like this shouldn't be bothered by the slight obstacles you would be faced with.


> the general sentiment I get from reading a lot of these threads is that "If you have nothing to fear, you have nothing to hide".

> Why do you think that is?

I think it's because a lot of people think that there's nothing that can be done to change the situation, and so they adopt that mental stance in order to be OK with it. Whether or not that stance is correct isn't important. It's an emotional "safe space".


I wonder how useful even this list actually is. Famously, Tucker Carlson was being spied on by the NSA through his Signal app, and while I don't trust them to be able to figure out exactly what the point of entry was, it does imply that without regular whistleblowers from throughout the NSA and similar agencies, we won't know exactly what their capabilities are, and I'm not sure how meaningful a list like this can be.


Famously? I've never heard of that. Is that even true?


He meant to say "Famously Tucker Carlson claimed that the NSA spied on him". Given that the man is a bastion of journalistic and personal credibility I can't see any reason not to believe him /s


Tucker made the claim that an internal USG whistleblower told him that his communications were being monitored by the NSA. Tucker stated that he sought the counsel of a Senator, and that the Senator told him that he should go public with the information. Tucker then discussed the claims on his television show. The NSA issued a statement saying that Tucker was not a surveillance target. Later, there was information that Tucker had been setting up an interview with Putin, and that those communications were intercepted, and Tucker's name was unmasked (when a U.S. citizen has communications picked up by the NSA, their name is redacted as part of the normal intelligence reporting, and it requires a high level official to request to see the actual name of the U.S. person). Subsequently, the NSA's internal watchdog began an investigation into whether Tucker was improperly targeted for surveillance. After investigating itself, the NSA cleared itself of any wrongdoing.


That sounds like a far cry from Carlson being spied on. More like Putin was being spied on, and Carlson walked into Putin's surveillance aura.


"If you're on Windows click here". OS > Avoid: Windows.

I get it, but not very helpful. The premise of making it "uneconomical" for a nation-state to perform mass surveillance is a bit naive; at best we can make it more expensive for our own governments to perform, which is backwards, IMO. We should make it cheap, efficient, and easy for them to collect too much garbage data.


Privacy Guides (https://www.privacyguides.org/en/) is also a great site for this. Prism-break updates fairly slowly, unlike Privacy Guides. However, PG leans heavily into usability and commercial software, recommending questionable software, including 1Password. Also, PG prefers rapidly updating systems (so mostly non-Ubuntu based), including openSUSE and Arch, which carry questionable risk from rapid upstream updates. Overall, both sites are great info. Also, they promote Brave way too much; stick to Firefox/Librewolf/Mull. That's a bad rec.


What's wrong with 1Password? I've seen several famous cyber security pros recommend 1password (like Troy Hunt).


I think most arguments against online password companies are that they get hacked so often. The most practical problem is having to switch all your passwords around after such a leak occurs, which seems to be a more and more permanent state of affairs these days. Contrary to popular belief, the best reason to store passwords offline is actually convenience, so that you don't have to change them so often (your single password dump is not a target, but all people's data is).


1Password doesn't have my passwords; they exist in an encrypted vault on Dropbox, which is itself encrypted. The whole thing is extremely secure.


Which version are you currently using? I used to have the same setup back in the days when 1Password was only a standalone app, before it became a SaaS as it is now.

Can you please explain how you have organized your current setup?


Hmm, seems that version 8 and above discontinued Dropbox sync. I haven't updated my app in ages so I still have the dropbox setup.


My guess would be that it is because it is not open source, but I am surprised that there is no mention of BitWarden.


"Avoid PayPal, use Bitcoin." Wish it were that simple.


> Avoid PayPal, use Bitcoin.

So instead of the government being able to spy on you, you want the government and anyone else capable of monitoring the blockchain to spy on you? That seems worse on all fronts.


It still seems a lot harder to track if you don't reuse addresses. The same entry did also recommend Monero. Meanwhile I purchased brandy with a credit card for the first time, in-person at a small liquor store, and immediately started seeing web ads for more brandy.


I used to semi-joke that the real way to break these systems is to have multiple AIs on everyone's phones constantly talking to each other. You do this over encrypted chat and tag whether it's real or not. Only display the real ones to the users.

Then when you send a message it can be hidden. And it becomes too expensive to review.


I think pen-and-paper one-time pads are an underestimated tool for private communication. Granted, they are cumbersome and limited, but they provide almost perfect secrecy and completely bypass the issue of compromised computers. And with some basic steganography (section h in the guide below has a good example), you can pretty easily hide when, and to whom, you're sending a message. 'The Complete Guide to Secure Communications with the One Time Pad Cipher' is a really good resource: https://www.amrron.com/wp-content/uploads/2015/05/one_time_p...
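
For illustration, here are the mechanics of a one-time pad in code form (a pen-and-paper pad typically uses modular addition over letters; XOR over bytes is the computerized equivalent). The pad must be truly random, at least as long as the message, kept secret, and never reused:

    import secrets

    def make_pad(length: int) -> bytes:
        # truly random pad, generated from the OS CSPRNG
        return secrets.token_bytes(length)

    def xor(message: bytes, pad: bytes) -> bytes:
        if len(pad) < len(message):
            raise ValueError("pad must be at least as long as the message")
        return bytes(m ^ p for m, p in zip(message, pad))

    plaintext = b"MEET AT DAWN"
    pad = make_pad(len(plaintext))
    ciphertext = xor(plaintext, pad)          # encrypt
    assert xor(ciphertext, pad) == plaintext  # decrypt with the same pad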


This has big "Society getting you down? Just go live in a cave on a mountaintop" energy.

It's certainly a choice an individual can make. But it will have about as much societal impact as domestic recycling has on global warming. Especially since we're talking about internet communications technologies here... the alternative to using Discord, for most people these days, is not corresponding with the people they need to correspond with.


Sure thing, next time I'll use Thunderbird instead of Gmail, and pay with Monero instead of Paypal. I will also use Riseup instead of Google Docs.


Who is Peng Zhong, and why should I trust their curated list of 0days err Safe bets?

Also, btw, we should put 2021 in the title because it hasn't been updated since.


The suggestions here are not great. For example file syncing has no mention of syncthing and recommends something I’ve never heard of.


> For example file syncing has no mention of syncthing and recommends something I’ve never heard of.

It does mention Syncthing.

From the site:

> File Storage & Sync

> Prefer

> EteSync

> Encrypted calendar, contacts and tasks sync.

> Syncthing

> Direct file sync between devices.


The site hasn't been updated since 2021-08-02 per the footer, and probably sparsely before that. I think this site was most popular around when Snowden did the leaks and hasn't had as much hype since then.

A more modern alternative is https://www.privacytools.io/ but I haven't checked it in a while and can't vouch for the current contents.


privacytools had a hostile takeover by its long-absent domain owner and now pushes several crypto services.

The previous maintainers created and moved to: https://www.privacyguides.org/en/


Saw wallabag in there. Unfortunately, it's a paid service unless you self-host. I just finished hosting my own wallabag instance today!

read.fahads.net

Feel free to register. You won't get an activation mail, but I would be happy to activate your accounts manually. You probably shouldn't use it for anything too serious, though, since I'm not an expert sysadmin.


I wish it would expand on some of its recommendations. It says to avoid Authy, but doesn't give any reasons. Is this just a FOSS absolutist site? That doesn't really seem to mesh with the title for me, I was expecting to see some actual info about curbing data collection.


Same, I want to understand why I'm avoiding any of these.


Wouldn't it be more viable to just curate multiple personal identities? It's not illegal and as long as they don't need your SSN they won't care.

I've thought about doing this to have a pen name with a pseudonymous identity, but I also have burner emails to avoid spam.


My concern is leaking through browser sessions. If browser adware finds traces of two identities in a session, your secret is out. If browser fingerprinting works (it does), your secret is out.


Back in my old network engineering job at MCI, there was a secret Prism project. Rumors claimed it was to optically split the fiber without introducing delay for eavesdropping.

I'm sure the naming is related, and I'm glad to see the initiative.


It would be helpful to have some description regarding why some entries made the naughty list. For example, is there evidence that iOS sends data to PRISM? Has any analysis shown that Safari leaks any more data than Firefox?


It probably goes along the lines of:

"It is impossible to download and examine iOS's source code, which means that it is impossible to prove that iOS is not spyware. Any program which does not make its source code available is potential spyware."

Which I agree with. I'm not going to trust and put as much personal data as a smartphone usually contains into a proprietary black box.


Yeah, this bit:

> "Apple iOS devices are affected by PRISM. Even using the software tools we recommend here, your privacy may be compromised by iOS itself. The operating system of any device can unfortunately lever out any privacy protection that a program tries to offer you."

...made me conclude that these people are idiots. You don't need to activate iCloud on an iPhone and you can use standard stuff like IMAP and WebDAV to sync contacts and calendars etc. There's also a huge list of telemetry controls you can shut off in the OS.

Not to mention they have the best physical and OS security of any mobile device.

Suggesting that a small homebrew Android ROM, maintained by anonymous individuals, which hasn't seen any security updates in almost a year, is comparable in terms of end-user privacy is ludicrous.


I didn't use iCloud yet Apple Customer Support was able to directly send a "remote access request" to my iPhone, years ago, which I simply had to press "accept" and they were able to remote-control my phone and see everything on screen. There's no reason the OS can't allow that exact same comprehensive remote access, without asking my permission. There's also no reason Apple can't surreptitiously introduce new back doors with any given iOS update -- especially considering the new "urgent update" functionality they recently introduced.


I finally realize that all those weird face tattoos in futuristic scenarios were to throw off facial recognition. Who here is starting an AI defeating temporary face tattoo business?


tl;dr: This is just your typical list of "privacy focused" and "self hosted" alternatives (e.g. use Signal not Facebook Messenger), with some attention-grabbing framing.

Some of the recommendations are pretty suspect, too: how is using Thunderbird for email supposed to "opt you out of PRISM and XKeyscore"?


Do you realize that page was established in 2013?

If the reference is keeping all your messages, and potentially your PGP keys, in "cloud" storage at a PRISM provider it's not particularly hard to understand some ways in which using Thunderbird instead is supposed to help. It's a fair point it's not a particularly satisfying mitigation though.


> Do you realize that page was established in 2013?

No, but that makes sense. The framing would have been much more apt back then than it is now, with the Snowden stuff being fresh.

> If the reference is keeping all your messages, and potentially your PGP keys, in "cloud" storage at a PRISM provider it's not particularly hard to understand some ways in which using Thunderbird instead is supposed to help. It's a fair point it's not a particularly satisfying mitigation though.

The reference is just "instead of Gmail, use Thunderbird" (e.g. https://prism-break.org/en/subcategories/macos-email/). They don't mention PGP in that section at all, though there's a later one about "Email Addons", which does, and which is easy to miss (e.g. skipping it because you don't already use addons).

Their (broken HTML) recommendation to run your own email server is also suspect, because it's a bad tradeoff. Unless you want a second, unpaid job as email server administrator (with a pager!), you're "protecting" yourself against a rare hypothetical threat (government surveillance) by making yourself vulnerable to a much more common one (run-of-the-mill hackers).

Realistically, they probably should have just said something along the lines of "email surveillance is practically unavoidable," so don't use it for anything you don't want monitored. PGP failed because it's too hard to use, so no one uses it, and any reasonable use of email will mainly involve exchanging messages with some "monitored provider's" servers.


I guess using Thunderbird would get many people away from relying exclusively on the web interface of gmail. Then the next step would be to make an e-mail account at another e-mail provider. Later maybe switch away from gmail entirely.


> how is using Thunderbird for email supposed to "opt you out of PRISM and XKeyscore"?

The mail client may help improve privacy if you configure it to erase data on the server as it is downloaded to the client (POP), instead of letting it stay on the server for an indefinite amount of time (IMAP). If people are going to break into your provider, an empty mailbox would limit the compromise.
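
If it helps make the idea concrete, here is a rough sketch of "download, then erase on the server" over POP3 (the host and credentials are placeholders; in practice most mail clients expose this as a "delete messages from server after downloading" setting):

    import poplib

    conn = poplib.POP3_SSL("pop.example.com")   # placeholder host
    conn.user("me@example.com")                 # placeholder credentials
    conn.pass_("app-specific-password")

    count, _ = conn.stat()
    for i in range(1, count + 1):
        _, lines, _ = conn.retr(i)      # download the message
        raw = b"\r\n".join(lines)
        # ... store `raw` locally, e.g. append it to an mbox file ...
        conn.dele(i)                    # mark it for deletion on the server
    conn.quit()                         # deletions are committed at QUIT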


Almost nothing on this list is actually positive for security, and most of the applications provided are not actually substitutes. Good luck replacing Discord with Signal.


Agreed with the second part, but what do you mean by "Almost nothing on this list is actually positive for security"?


Yeah an actual substitute to Discord would be matrix.org, not Signal.


Profiling people requires you to provide clean data.

Just do not provide clean data. Search for random shit occasionally so that the entire profile gets poisoned with fake data points.
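
A toy sketch of the idea, just to illustrate (the search endpoint, wordlist path, and timing are all assumptions, and how well this actually confuses real profiling systems is debatable):

    import random
    import time
    import urllib.parse
    import urllib.request

    # assumes a Unix system wordlist exists at this path
    WORDS = open("/usr/share/dict/words").read().split()

    def random_query(n_terms: int = 2) -> str:
        return " ".join(random.choice(WORDS) for _ in range(n_terms))

    def search(query: str) -> None:
        # decoy search against an HTML search endpoint (example endpoint)
        url = "https://duckduckgo.com/html/?q=" + urllib.parse.quote(query)
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        urllib.request.urlopen(req, timeout=10).read()

    while True:
        search(random_query())
        time.sleep(random.uniform(600, 3600))   # a few decoy searches per hour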


We need more awareness. Normies should understand that we can sometimes trade a slight inconvenience and fewer features for a middle finger to mass surveillance.


Ok, so according to this, one should use the EtherCalc web service for productivity. Why? What are the guarantees here that no surveillance is taking place?


Recommending people to use Monero instead of Paypal? That is ridiculous for several reasons and makes me question their other recommendations.


what are the reasons?


Email addons: Enigmail is no longer an add-on for Thunderbird; it's built-in (and has been for years).


I think OSS developers should adopt ethical licenses. Licenses that specify you can't use the software for a variety of use cases, such as violating human rights, or mass surveillance.

Oppressors will still buy or make software for those purposes, but we don't have to hand them the tools they use to oppress us.


That is against the spirit of open source. People and corporations will be wary of using such software, since someone someday may define their use as "unethical". And the "oppressors" don't care about following your license anyway.


My goodness! We mustn't make it hard for corporations to use the software! And if criminals will get guns one way or another, we might as well hand them out like candy, right?

The purpose of an ethical source license is not to be a fool-proof fix for all unethical behavior. The purpose is to send a message that human rights violations are not okay, and to make it harder for unethical corporations/users to defend their actions.

If the reason someone doesn't want to use ethical software is "someone might find my actions unethical", either they're morons or they're unethical. Ethics aren't t-shirt slogans. Human rights don't constantly change with the wind. Either you are abusing people or you aren't. If you are abusing people, then you can, you know, stop doing that. Or go find different software.

Finally, licenses are contracts which are legally enforceable. If someone breaks the license, that carries the force of law. You can absolutely stop someone from using the software if they break its terms, even internationally.


Right, because people who violate human rights are going to adhere to a software license term. I can see it now, while beating the crap out of someone the panic that they'll experience when they realize the software they are trying to use has an ethical license.


Even mass surveillance of people who are Dangerous To Our Democracy™??


Too late. Not to be a downer, but this ship has sailed. No matter the cost, governments and big business will unite in using mass surveillance to monitor and control the population. In a sense it's a new feudal system enforced with complete surveillance.


> No matter the cost governments and big business will unite in using mass surveillance to monitor and control the population.

The cost has almost always been ~0 for gov officials performing unnecessary surveillance.

I trace this low cost back to news orgs. Most editors & journalists opt out of honoring their extra constitutional protections because they don't serve as an adversary to the powerful. Instead they favor publishing sportsball or celebs or parroting gov/corp/leo pr without any analysis, etc.

We don't know how officials will behave if they have to pay a persistent, meaningful cost for surveilling us. We've never tried it.


> We don't know how officials will behave if they have to pay a persistent, meaningful cost for surveilling us. We've never tried it.

To try that, step one would be making people aware of what's even happening. As you say, news orgs are failing us all.

Assange and Snowden took Step One toward that end... And were made an international example of. The institutions and news orgs who ought to have been their main support failed, and even turned on them in most cases.


What, Slack just gets a pass?


so they can just raise taxes?


[flagged]


Is this AI generated? The first paragraph sounds sort of Chat-GPT-esque to me.


Considering it sounds nothing like their past comments, I'm guessing they're asking ChatGPT to rephrase their words.


Nice catch, the switch in comment style is very noticeable


The classifier considers it unclear whether the text is AI-generated. Try it for yourself at: https://platform.openai.com/ai-text-classifier


same. turing test failed. it's over-wordified, too formal, message-light. how did it get to the top?


Wouldn't be surprised if GPT-4 learned its style from higher-quality HN comments.


Please don't post shallow dismissals. It ruins what this site is for.

Interesting that the message content is not what's being adjudicated by downvotes here. If a mod says it: all good. If a co-commenter says it: very bad.


Can we please lifetime ban users posting AI drivel?

It's 100% noise, and it's going to steal our time and isolate us from everyone.


I suspect most of us would like that, but it doesn't seem feasible. Detection of AI text is incredibly difficult, and false positives would be a huge stain on the user base. Can you imagine posting a thoughtful comment, then having your user banned for a false positive calling you out as an AI? I would find that quite offensive and I don't know if anything could be done to reverse the negative effect it would have on me in regards to how I view HN.


Posting 100% AI generated content should be against the rules. (Outside of exceptions where it is relevant.)

But where should the line be drawn when a user collaborates with an AI on a comment? As an English-as-a-second-language speaker, I've been using tools like Grammarly or Hemingway for years to improve my writing. I will gladly use a GPT-based proofreader/editor browser plugin eventually, why not?


I agree, but the alternative is the end of HN plus the end of the rest of the open internet in a year or five.

When you soon will only meet bots that are trying to manipulate you or sell you something - the value for everyone goes to zero pretty quickly.

I'm not sure how this will be solved besides most people ditching the open internet and 100% engaging in tiny groups of people they already know the mental capacities of.

Christ, this really is the end of the "social internet" where you could find inspiration and new perspectives isn't it?


It makes me want to revisit The Web Of Trust[1], and apps like Keybase where users have a cryptographically verified social graph comprised entirely of people who were verified by another human that knows them. That whole idea goes directly against anonymity though, so maybe that will become a more pronounced way to split the internet: verifiable human identities, and anonymous bots and humans.

1. https://en.wikipedia.org/wiki/Web_of_trust


This is a solution against botnets, but not against humans who use AI to enhance/write their comments for them, like the ancestral poster was accused of doing.


Might well be. It's also an opportunity to study (meta-study?) the behavior of populations under these changes. It's a lot like an A-life experiment writ large, and played out in real life.


[Spider Crab] Silence, GPT!


I was under the impression that someone did the math a few years back on the US government making long-term/indefinitely-kept recordings of every phone call. Not every phone call for a calendar date, or for a city... but all of them, going forward, forever.

It was deemed expensive, but feasible given current pricing and technology. Especially when the cost would be amortized out over the next 15 or 20 years... it might even fit in a black ops slush fund budget.

Maybe I misunderstand, but the technical battle has been lost. Only legislative obstacles are now possible, supposing they ever were.
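
For what it's worth, a rough back-of-envelope version of that math (every figure below is an assumption, just to show the order of magnitude):

    call_minutes_per_day = 5e9         # assumed US daily call volume
    codec_bytes_per_min  = 60_000      # ~8 kbps compressed voice
    bytes_per_year = call_minutes_per_day * codec_bytes_per_min * 365

    petabytes   = bytes_per_year / 1e15
    cost_per_tb = 15                   # assumed raw disk cost, USD
    disk_cost   = (bytes_per_year / 1e12) * cost_per_tb

    print(f"{petabytes:.0f} PB/year, ~${disk_cost / 1e6:.1f}M/year in raw disk")
    # roughly 110 PB and a couple million dollars of raw disk per year:
    # expensive, but nowhere near infeasible for an intelligence-agency budget.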


And they recommend using a Google-free Android phone to prevent surveillance, ignoring the fact that Qualcomm-based phones will still leak data.

https://www.nitrokey.com/news/2023/smartphones-popular-qualc...


Did you understand the article you just linked or do you just wanna throw shit on a bonfire? They pull GPS data and nothing else.


Yes because worrying about your location being tracked is silly and never used by government - especially in the case where they want to arrest everyone who was protesting in a given area.


Ah yes... the dystopian sci-fi argument without merit.


They are doing that today. The FBI asked cell phone providers for everyone who was around the Capitol on January 6th.

https://freebeacon.com/latest-news/google-gave-fbi-location-...


Mass surveillance helped UK Counter Terrorism Police identify Ruslan Boshirov and Alexander Petrov, the Russian spies in the Salisbury investigation who were trying to kill a family of dissidents.

Granted, the website is dedicated to mass surveillance in IT. But then think, generally speaking: is mass surveillance at some reasonable level really so bad? It helps identify Russian soldiers who are committing war crimes and atrocities in Ukraine. It helps preserve a free and democratic society rather than creating a road to dystopia. Of course, I'm speaking of some reasonable level, not of something like real-time client device scanning. That doesn't make any sense and would simply not work.


It's a dangerous train of thought. You will end up like China where it's acceptable for the "greater good". And nobody will ever give the power back once they have it.


I agree it's dangerous but what are governments supposed to do?

In a world where individuals or very small groups of people are increasingly gaining the power to do potentially catastrophic damage using increasingly powerful technology, what are the actual alternatives? Trust?

How can society function going forward without at least some oversight?

Don't get me wrong, I don't want society to go this way, but I'm starting to see fewer and fewer options presented to Governments. I can see both sides of the story.

I wouldn't pretend I know the right answer, but I think we have to admit the world has changed quite a bit recently.


Yes, it really is so bad. People are unbelievably ignorant of history.

What happens when the state intelligence apparatus has the ability to perfectly surveil the population? Look no further than what the Stasi accomplished - it becomes trivially easy to discredit political opposition, journalists, business leaders, or any other person or group standing in the way of the powerful.

People split hairs about this kind of oversight, or that kind of oversight, but when the powerful are overseeing their own surveillance apparatus - using secret courts, secret warrants, and all manner of other methods for hiding the true scope of the surveillance - I do not believe it can be contained.


But the Stasi was a secret police in a totalitarian state, not a liberal democracy. Apples to oranges. Good faith actors in government, with intense oversight from elected officials, makes your concerns null and void.


Because government agencies have such a good track record of ensuring they only ever contain good faith actors and would never hide things from oversight, and such oversight would never be done by people willing to look away if it "hits the right people" or some nonsense like that.


A liberal democracy has the ever-present risk of devolving into a totalitarian state (just look at the Weimar Republic or Argentina in the 70s). We must hedge our risks and fight tooth and nail so it doesn't happen, but if it eventually does, better not to leave a well-oiled, powerful machine ready for the totalitarians to crush us.


It is ok as long as you are not the target :)

It is basically only ok if you can guarantee its users always use it in good faith, which is virtually impossible.


Police being able to walk into anyone's home at will to search it would definitely lead to catching some criminal activity. The issue isn't "would this thing lead to catching some criminals". The issue is abuses, government overreach, and innocent civilians being targeted.

“It is better, so the Fourth Amendment teaches, that the guilty sometimes go free than the citizens be subject to easy arrest.” - William Douglas, Associate Justice of the Supreme Court. The argument to give up your rights is always initially used to target the worst of the worst. It's always terrorists, spies, child murderers, etc. Of course we shouldn't be slowed down by due process when it's for this child murderer. Yet it is then used against those least likely to be able to defend themselves, for easy wins. Russian spies first, then it's minor crimes committed by immigrants.


Would it have been impossible to achieve the same goal with targeted surveillance instead?


You know what they say about the end justifying the means.


I take it you’re not a Florida woman with a miscarriage.


Nice straw man


Completely agree. When used properly and ethically in a democratic society, surveillance is an absolute net positive for society.


> When used properly and ethically in a democratic society

So, for the first five minutes.



