Google ordered to identify who watched certain YouTube videos (forbes.com/sites/thomasbrewster)
552 points by wut42 4 months ago | 370 comments



There are different incidents here.

The first one where the police uploaded videos and wanted viewer information is absolutely egregious and makes me wonder how a court could authorize that.

The next one, which I didn’t fully understand, but appeared to be in response to a swatting incident where the culprit is believed to have watched a specific camera livestream and the police provided a lot of narrowing details (time period, certain other characteristics, etc) appears far more legitimate.


I don't understand how either of these is remotely constitutional. They certainly aren't in keeping with its spirit.

They asked for information about a video watched 30k times. Even supposing every person watched that video 10 times AND supposing the target was one of the viewers (it really isn't clear that this is true), that's 2,999 people who had their rights violated to search for one. Blackstone, who heavily influenced the founding fathers, has something to say about this[0]. That's literally ~300x Blackstone's ratio.

I don't think any of this appears legitimate.

Edit: Oops. [0] https://en.wikipedia.org/wiki/Blackstone%27s_ratio
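As a quick sanity check of that arithmetic (taking the 30k views and 10-views-per-person assumption from above at face value):

```python
# Back-of-the-envelope check, using the assumptions stated above.
views = 30_000
views_per_person = 10                  # generous assumption from the comment
viewers = views // views_per_person    # 3000 distinct viewers
innocents = viewers - 1                # everyone but the one suspect
blackstone_ratio = 10                  # "better that ten guilty escape..."
multiple = innocents / blackstone_ratio
print(viewers, innocents, multiple)    # 3000 2999 299.9
```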


Cell phone tower data has been used for a decade now in pretty much the same way.

Did you happen to pass by a cell tower in a major city around the time a crime was committed? We all have.

Well, your IMEI was included in a cell tower dump. Probably dozens of times.

Did you happen to drive your car over any bridge in the Bay Area lately? Did a municipal vehicle pass you and catch your license plate with their ALPR camera?

Guess what? Your name went through the database in an LEO search if they wanted to find a perp for that time/location.

Privacy has been dead for a long time. The worst part is people don’t care.

The Snowden files changed nothing. If there was ever a point in history where people would have given up their cell phones for their civil liberties, that would have been the time to do it.


> Cell phone tower data has been used for a decade now in pretty much the same way.

I was mad then. I'm more mad now. Stop these arguments; it isn't like one implies the other. And who the fuck cares if someone wasn't mad before but is now? What's the argument, that you're a hipster? That's not solving problems. I don't want to gatekeep people from joining the movement to protect rights. I don't care if they joined as a tin-foil-hatter years ago or just yesterday, after having literally been complicit in these atrocities. If you're here now, that's what matters.

> Privacy has been dead for a long time. The worst part is people don’t care.

Bull, and bull.

There are plenty of people fighting back. I'm pretty sure that getting ads in languages I don't speak is at least some sign of progress. Maybe I can't beat the NSA, sure, but can I beat mass surveillance? Can I beat 10%? 50%? 80%? 1% is better than 0%, and privacy will die when we decide everything is binary.

People care. People are tired. People feel defeated. These are different things. If people didn't care, Apple (and even Google) wouldn't advertise themselves as privacy-conscious. Signal wouldn't exist and wouldn't have 50 million users. It's not time to lie down and give up.


> The Snowden files changed nothing.

They didn't change enough, but that isn't nothing.


>> The Snowden files changed nothing.

> They didn't change enough, but that isn't nothing.

The biggest change IMHO was the entire industry got off their collective assets to finally move to HTTPS.


I wonder who's going to have to end up hiding out in a US-hostile part of the world for us to read this part of the cloudflare FAQ: https://developers.cloudflare.com/ssl/troubleshooting/faq/#w...


The world’s largest MITM


Lol, I'm a bit slow ... some USA TLA runs Cloudflare, right?


tin foil hat time, but who do you think the MITM is for?


Tech bros love it. And Tailscale. And SaaS as a whole. Data sovereignty means you can't be mined by the adtech industry, so it's not cool.


Calling out tailscale here is odd considering it's peer-to-peer and encrypted.


With keys controlled by a central entity


do you have a source for that?


Tailscale [0] says the private keys never leave the device.

“Security

Tailscale and WireGuard offer identical point-to-point traffic encryption.

Using Tailscale introduces a dependency on Tailscale’s security. Using WireGuard directly does not. It is important to note that a device’s private key never leaves the device and thus Tailscale cannot decrypt network traffic. Our client code is open source, so you can confirm that yourself.”

0. https://tailscale.com/compare/wireguard


That is true as far as it goes, but how does your node learn the public keys of the other nodes in your tailnet? My understanding is that they are provided by the coordination server, so you have to trust that the public key the coordination server gives you is actually the one for your peer device.

Tailnet lock helps mitigate this by requiring that node public keys are signed by a trusted signing node, but it isn't bulletproof.


Public key cryptography doesn’t work like that. If you were given wrong public keys you wouldn’t be able to connect to start with.


> Public key cryptography doesn’t work like that

Like what? I'm saying both sides of the connection would be given the wrong public keys by the coordination server. The private keys of which would be held by a MITM.
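To make the concern concrete, here is a minimal sketch of that key-substitution attack using toy finite-field Diffie-Hellman (stdlib only, and deliberately insecure parameters; real Tailscale/WireGuard uses Curve25519, but the substitution logic is the same): a server that distributes public keys can hand each peer the attacker's key instead of the real one.

```python
import secrets

# Toy Diffie-Hellman over a small prime field -- illustration only.
P = 2**127 - 1   # a Mersenne prime; far too small for real use
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared(priv, peer_pub):
    return pow(peer_pub, priv, P)

a_priv, a_pub = keypair()   # Alice's private key never leaves her device
b_priv, b_pub = keypair()   # Bob's private key never leaves his device
m_priv, m_pub = keypair()   # attacker running the coordination server

# Honest coordination server: peers learn each other's real public keys.
assert shared(a_priv, b_pub) == shared(b_priv, a_pub)

# Malicious server: each peer is handed the attacker's public key instead.
leg_a = shared(a_priv, m_pub)            # Alice thinks this is "Alice<->Bob"
leg_b = shared(b_priv, m_pub)            # so does Bob
assert leg_a == shared(m_priv, a_pub)    # attacker can decrypt Alice's leg,
assert leg_b == shared(m_priv, b_pub)    # re-encrypt, and forward to Bob
```

Both peers' private keys stayed on-device, yet the attacker sits in the middle. This is exactly the gap Tailnet lock is meant to close: node public keys must additionally be signed by a trusted signing node, so the coordination server can't silently substitute its own.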


To add to that, they also provide Tailnet lock [0], which protects against the main way the coordination server can mess with a tailnet: connecting unauthorized nodes.

[0] https://tailscale.com/kb/1226/tailnet-lock


Not sure what the issue is with Tailscale, especially since you can self-host Headscale server locally to get the same effect.


Headscale is fine. With tailscale they control the deployment of public keys to devices, and can thus deploy anything they want to.


Good to know.

Have they ever deployed anything they want to devices?


The direction you're heading in sounds very similar to the arguments that may have been made pre-Snowden about mass-surveillance.



A single layer of encryption is for the stone age.

If [peccadillo] must remain secret when your neighbour is investigated for [crime?], then encrypt at least twice, and obfuscate the original message.


> The biggest change IMHO was the entire industry got off their collective assets to finally move to HTTPS.

And then promptly moved most things behind cloudflare, which is MITMing everything, undoing the benefit of HTTPS.

Remember "SSL added and removed here!"? Now it happens at cloudflare.


I thought this was driven by ISPs inserting their own ads in normal HTTP.


…no, it was definitely “HTTPS added/removed here”


Had nothing to do with Snowden, but with Google ranking algo changes. Google has a commercial interest in hindering competitors in the ad brokering market from observing info on the wire.


It had everything to do with Snowden. Source: I was at Google at the time he started leaking.

Before Snowden encryption was something that was mostly seen as a way to protect login forms. People knew it'd be nice to use it for everything but there were difficult technical and capacity/budget problems in the way because SSL was slow.

After Snowden two things happened:

1. Encryption of everything became the company's top priority. Budget became unlimited, other projects were shelved, and whole teams were staffed to solve the latency problems. Not only for Google's own public-facing web servers but for all internal traffic, and they began working out explicitly what it'd take to get the entire internet encrypted.

2. End-to-end encryption of messengers (a misnomer IMHO but that's what they call it) went from an obscure feature for privacy and crypto nerds to a top priority project for every consumer facing app that took itself seriously.

The result was a massive increase in the amount of traffic that was encrypted. Maybe that would have eventually happened anyway, but it would have been far, far slower without Edward.


You were at Google at the time, but your memory of the ordering of events is off. Google used HTTPS everywhere before Snowden.[1][2] HTTPS on just the login form protects the password, preventing a MITM from collecting it and using it on other websites, but it doesn't prevent someone from simply taking the logged-in cookie and reusing it on the same website. That was a known issue before Snowden, and Google had already addressed it.

Many other websites, including Yahoo, didn't start using HTTPS everywhere until after Snowden.[3] I know because this was something I was interested in when using the public WiFi points that were popping up at the time. I also remember when Facebook moved their homepage to HTTPS.[4] Previously, only the login form POSTed to an HTTPS endpoint, but that doesn't protect against the login form being modified by a MITM to have a different action, letting the MITM get your password and rendering the whole thing useless.

What changed after Snowden was how Google encrypts traffic on its network, according to an article quoting you at the time.[5]

[1]https://gmail.googleblog.com/2010/01/default-https-access-fo...

[2]https://googleblog.blogspot.com/2011/10/making-search-more-s...

[3]https://www.zdnet.com/article/yahoo-finally-enables-https-en...

[4]https://techcrunch.com/2012/11/18/facebook-https/

[5]https://arstechnica.com/information-technology/2013/11/googl...


An important clarification is that the leaks about NSA snooping on Google motivated end-to-end encryption between all pairs of Google internal services. It was a technical marvel, every Stubby connection had mutual TLS without any extra code or configuration required. Non-Stubby traffic needed special security review because it had to reinvent much of the same.

People even got internal schwag shirts made of the iconic "SSL added and removed here" note [1]. It became part of the culture.

Over a decade later I still see most environments incur a lot of dev & ops overhead to get anywhere close to what Google got working completely transparently. The leak might have motivated the work, but the insight that it had to be automatic, foolproof, and universal is what made it so effective.

[1] https://blog.encrypt.me/2013/11/05/ssl-added-and-removed-her...


A minor quibble; iirc it was only connections that crossed datacenters that were encrypted. RPC connections within a cluster didn't need it, as the fiber taps were all done on the long distance fibers or at telco switching centers.

But otherwise you're totally right. I suspect the NSA got a nasty shock when the internal RPCs started becoming encrypted nearly overnight, just weeks after the "added and removed here" presentation. The fact that Google could roll out a change of that magnitude and at that speed, across the entire organization, would have been quite astonishing to them. And to think... all that work reverse engineering the internal protocols, burned in a matter of weeks.


According to the reporting at the time, the NSA had shut down the email metadata collection program, which was the only leaked NSA program that parsed data on those taps, back in 2011; so the reverse engineering work was burned by an interagency review two years prior to Google's cross-datacenter encryption work.


They were tapping replication traffic on a database that included login IP addresses. I remember it well because it was a database my team had put there.


I missed that leak. Any chance you have a link for me to fill in my gap?


Slide 5 (Serendipity - New protocols) in this presentation:

https://github.com/iamcryptoki/snowden-archive/blob/master/d...

It's heavily redacted but the parts that are visible show they were targeting BigTable replication traffic (BTI_TabletServer RPCs) for "kansas-gaia" (Gaia is their account system), specifically the gaia_permission_whitelist table which was one of the tables used for the login risk analysis. You can see the string "last_logins" in the dump.

Note that the NSA didn't fully understand what they were looking at. They thought it was some sort of authentication or authorization RPC, but it wasn't.

In order to detect suspicious logins, e.g. from a new country or from an IP that's unlikely to be logging in to accounts, the datacenters processing logins needed to have a history of recent logins for every account. Before around 2011 they didn't have this - such data existed but only in logs processing clusters. To do real time analytics required the data to be replicated with low latency between clusters. The NSA were delighted by this because real-time IP address info tied to account names is exactly what they wanted. They didn't have it previously because a login was processed within a cluster, and user-to-cluster traffic was protected by SSL. After the authentication was done inter-cluster traffic related to a user was done using opaque IDs and tokens. I know all about this because I initiated and ran the anti-hijacking project there in about 2010.

The pie chart on slide 6 shows how valuable this traffic was to them. "Google Authorization, Security Question" and "gaia // permission_whitelist" (which are references to the same system) are their top target by far, followed by "no content" (presumably that means failed captures or something). The rest is some junk like indexing traffic that wouldn't have been useful to them.

Fortunately the BT replication traffic was easy to encrypt, as all the infrastructure was there already. It just needed a massive devops and capacity planning effort to get it turned on for everything.


The first two links are about Gmail and personalized results in web search specifically. Even as late as 2011 SSL being activated for a product was treated as unusual enough to write blog posts about, and it was up to individual projects whether or not to activate it and how to trade off the latency costs.

You're right that I might be mis-remembering the ordering of things, but I'm pretty sure by the time Snowden came around the vast majority of traffic was still unencrypted. Bear in mind that a lot of Google's traffic was stuff you wouldn't necessarily think of, like YouTube thumbnails, map tiles, and Omaha pings (for software updates). Web search and Gmail by that point made up a relatively small amount of it, albeit a valuable one. Look at how the Chrome updater does update checks and you'll discover it uses a weird custom protocol which exists purely because, at the time it was designed, Google was in a massive LB CPU capacity crunch caused by turning on SSL for as many services as possible. Omaha controlled the client, so it had the flexibility to do cryptographic offload and was pushed to do so, to free up capacity for other services.

> What changed after Snowden was how Google encrypts traffic on its network, according to an article quoting you at the time.[5]

That also changed and did so at enormous speed, but I'm pretty sure by June 2013 most external traffic still didn't have TLS applied. It looks like Facebook started going all-SSL just 8 months before Snowden.


I had completely forgotten about YouTube. I think it switched to https video serving post-Snowden, but I can't find the announcement.

Edit: Here it is. Only 25% of YouTube's traffic was encrypted at the start of 2014. https://web.archive.org/web/20160802000052/https://youtube-e...


Right, I remember (as an outsider to google) the push for https coming after Firesheep [1] and the google research on the actual CPU cost of https [2], both in 2010. Snowden's revelations came in 2013.

[1] https://en.m.wikipedia.org/wiki/Firesheep [2] https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...


That's nice and all, but the "why" is more important than the "what".

Google was driven not by some panicked rush to protect user privacy, but by the need to protect Google's collection and storage of user data.

Google has 10+ years of my email. It doesn't treat that like Fort Knox because it gives a shit about my privacy; it treats it like Fort Knox because it wants to use that for itself and provide services to others based off it.

You do know that Google was heavily seed-funded by the NSA, right?


Google might just be the biggest advocate of https out there, certainly (from my recollection) post Snowden. There has been a lot of progress made over the years.

https://transparencyreport.google.com/https/overview

https://transparencyreport.google.com/safer-email/overview - transmitting email with some form of encryption is probably a bigger and completely unseen problem that is similar


There was literally a PowerPoint slide in the released docs implying they had backdoored Google's internal servers.


>Stop these arguments because it isn't like one implies the other. And who the fuck cares if someone wasn't but is now. What's the argument, that you're a hipster?

That we are nothing in the ocean of people who don't care. Someone upended their entire life to blow the whistle on the government, with hard proof, and no one cares (from a statistical POV, not in a "literally 100% of the population" way).

They cared more about the Boston bombing the month prior, which, while tragic, is a statistical molecule compared to the impact of what Snowden revealed.

>There are plenty of people fighting back.

This can be a game of numbers, but it isn't. This can be a game of power, but it isn't. Not enough people are fighting back and not enough powerful people are fighting back.

>People care. People are tired. People feel defeated. These are different things

Well, it sounds like they gave up. Different words, same results.


The concept introduced by the Supreme Court regarding Pen register is consistent with all the examples you have given.

Anytime you willingly share data with a third party, the law assumes you aren't keeping it private.

https://en.wikipedia.org/wiki/Pen_register

If you want to keep something private don't share it outside of your house.


Except that existing in modern society requires giving immense amounts of personal information for even basic transactions.


It's beyond absurd and desperately needs to be addressed. Too bad both the government and corporations stand to lose so much that I doubt it will be treated seriously.


I personally think that the Apple antitrust case is being pushed due to their privacy stance.

Apple looked at the pen register cases and realized the best position to be in as a third party is to not possess usable data.

The US case, from my point of view, is trying to force Apple to share user data with third parties.


How would a successful antitrust verdict against Apple further the NSA's implicit dogma of "insecure by default"? Especially if it winds up breaking up Apple into many pieces. It's far easier for a centralized tech industry to bend the knee to the NSA than a distributed one.


>It's far easier for a centralized tech industry to bend the knee to the NSA than a distributed one.

I don't agree. The NSA can hack/pressure smaller companies much more easily than a giant like Apple.


Easier, but you get less data. There are thousands of small knees to bend, and more points of failure for public outings. Centralizing it to one company makes everyone's lives easier.


Furthermore the NSA/FBI/CIA want all their spying behavior to be secret. If you have to bend a lot of small knees then someone's going to fib before they get the data they want. And moving off a small company that's bent the knee is way easier than moving off FAANG, which can keep secrets[0] and has your balls locked in a vise.

[0] Because, among other things, the whole "Surprise and Delight" doctrine demands internal controls and secret-keeping discipline not that far off from an actual intelligence agency


Forcing Apple to hand over data to a third party for commercial reasons (not needing a warrant) is much simpler than whatever scenario you have worked out.


We all have choices to make. I avoid all sorts of things people consider indispensable.

2 examples are not having an amazon prime account and running my own mail server.


Given recent events, I don't think Amazon Prime is that necessary anymore.

Mail servers, sure. The big issue there is another annoying pseudo-monopoly issue where so many major email servers assume anything not from [major email server] is spam, so you may not even get to communicate properly. More sticks for the fire.


I'm anything but a major mail provider and I don't have any issues. I did have some hiccups around 2008 and had to implement DKIM/DMARC. I use strict delivery, so my mail server must deliver all mail directly.

Occasionally people have a vanity domain email that bounces back to me. I have to search the headers for the actual email address and re-send.
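For anyone attempting the same, most of the deliverability battle is publishing the right DNS records. An illustrative set for a hypothetical domain example.com (the selector name `mail` and the policy values are placeholders, not the parent's actual setup):

```
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

A matching PTR (reverse DNS) record for the sending IP also matters a lot to the big providers.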


IMHO the problem here is really transparency. There can be situations in which this could be reasonable, but the concrete cases are questionable, as we are probably not talking about a capital crime.

In Berlin there used to be a notification system that told you if you had been subjected to cell surveillance. It was recently stopped [0]. IMHO we need the same for all IP-assignment or account lookups. The problem is that we individually, and particularly vulnerable groups like journalists and activists, might be subject to far more of such activities than we know.

[0] https://netzpolitik.org/2024/rolle-rueckwaerts-berlin-beende...


> notification system

More generally, imagine if every citizen were entitled to a yearly report on how many times law enforcement received records containing their name or personally identifying information, except in cases that are formally unsolved and in progress.

So a line item might be something like:

    {Ref ID}, {Date}, "All Youtube accounts that watched {Video Title}"


I don't think anyone is saying that rights can't be infringed upon for any reason. The issue is that there needs to be sufficient reason. Is this sufficient reason? I think the action is sufficient reason were it specifically targeted at the individual under suspicion. But a dragnet is not. Those innocent people were not under suspicion and were not doing anything wrong or illegal.


There is a distinction I tend to make here.

If some person was able to pick me out from a lineup because they physically saw me then that wasn't private and privacy laws don't apply.

So for instance, capturing my face on CCTV in a public place isn't a privacy violation; same with my license plate in a public place.

However what happens on my private property is a privacy violation if it is recorded without consent.

Certain information isn't private, and that being stored is fine. Where the line gets drawn is what's up for debate.

I surely would want my contact details and name saved by a company that I intend to do business with in either direction. However, if they spam me with information I should be able to lodge a harassment claim against them. It's not a privacy issue but a decency issue.


> However what happens on my private property is a privacy violation if it is recorded without consent.

And the biggest enablers of violation are things like ring doorbells and dashcams. There is no comeback in my country, don’t know about the US.

Governmental and commercial cctv has checks and balances. Domestic just goes onto planet wide databases with no control.


That notion isn't universal. In Germany, for instance, I can't install a camera pointing to the street.


I understand that completely. Just wanted to give a different viewpoint on that.

I'm all for finding a balance; it's just that many times people are against surveillance that actually improves security or enforcement but only mildly infringes on their "rights", when in reality they never had privacy in that situation to start with and the use of technology didn't substantially change that.

YouTube being forced to give up personal information based on who viewed a video is something I don't see as an issue. How is this any different from any other website getting the exact same order?

If you are doing something shady you know how to obfuscate that information. If you aren't, sure, your "privacy" was "violated", but it was violated in a way that was legally allowed, and by law enforcement at that.

Living in a surveillance state where I have no choice but to let the government track every single financial transaction I make and link my cell number, among other details, directly to me, I feel like trying to fight that would only cause me undue anxiety, and I've got enough legitimate reasons to be anxious.


>Youtube being forced to give up personal information based on who viewed a video is something I don't see as an issue. How is this any different from any other website getting the exact same order?

Scale. This isn't "subpoena to get all of Bob's info"; it's "subpoena to get the info of everyone tangentially related to Bob". Imagine if this were as tangential as "who watched this video with 10M views"? Is the YouTube history of 10 million people worth it? Is it even useful?

The issue comes down to whether or not YouTube is a public place. All indications point to "no", hence this story.

> your "privacy" was "violated" for sure but it was violated in a way that was legally allowed and by law enforcement at that.

That isn't how court orders work. They cannot issue a single order to search an entire neighborhood's worth of houses because of drugs or whatever. That'd be N orders, which may or may not go through based on the arguments made.


> and the use of technology didn't substantially change that.

This is complete BS. Technology made it scalable to track where everyone is and query it historically. This used to require tailing someone so it couldn’t be done at scale.


That same technology has also dramatically increased the cost of doing that.

Data isn't free and processing big data isn't cheap. As much as Google has the data, that means they need to store that data.

You know what used to happen before and still happens now; an example: I live in a restricted-access area, restricted in the sense that to get in, some guy needs to take your name and license plate.

For many, many business parks in my country that is still the de facto standard. There isn't really a camera watching other than general CCTV that probably doesn't have the resolution to pick up text on license plates. It's cheaper for them to literally pay a guy to stand at a boom and collect that information than to install the technology required to track it automatically.


> It's cheaper for them to literally pay a guy to stand at a boom and get that information than to install the technology required to track that automatically.

It depends on the local cost of labor, and the technology is easier to scale. Imagine New York City having employees at the bridges writing down all the entering license plates, and then searching through those records for how many times a certain plate entered the city in a given time frame! To me, the problem with the technology is that it gets used for lazy policing, just to inflate the numbers of solved cases. There were cases of cops feeding hand-drawn sketches of suspects to face recognition software. Every case becomes a "throw something at the wall and see what sticks".
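The scaling asymmetry is easy to make concrete: once plate reads land in a database, the "how many times did this plate enter the city in that window" question is a single query over millions of rows. A minimal sqlite sketch (table, plates, and camera names are all made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE plate_reads (plate TEXT, ts TEXT, camera TEXT)")
con.executemany(
    "INSERT INTO plate_reads VALUES (?, ?, ?)",
    [
        ("ABC123", "2024-03-01T08:15", "brooklyn-bridge"),
        ("ABC123", "2024-03-02T08:17", "brooklyn-bridge"),
        ("XYZ789", "2024-03-01T09:40", "manhattan-bridge"),
        ("ABC123", "2024-03-09T18:02", "queensboro"),
    ],
)

# What once meant clerks leafing through logbooks is now one query:
count = con.execute(
    "SELECT COUNT(*) FROM plate_reads WHERE plate = ? AND ts BETWEEN ? AND ?",
    ("ABC123", "2024-03-01", "2024-03-08"),
).fetchone()[0]
print(count)  # 2 reads of that plate in the first week of March
```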


Your complaint seems more like a failing legal system than unnecessary surveillance.

Legitimately, if an investigator put a hand-drawn sketch through facial recognition and that was even remotely allowed into evidence by the court, then the sketch-based evidence wasn't the real issue.


I don't recall the actual case, but what I'm trying to point out is that these technologies are used as dragnets to "fish for anything", be it facial recognition, cell tower logs, or license plate reads. I'm all in favor of using any tool to catch criminals, but not to manufacture them, especially when the only goal is revenue for the agency du jour.


> Data isn't free

The adtech industry made data and its processing not just free (as in more than covered by the ad revenue) but outright profitable.

This is frankly a once-in-a-lifetime gift to the government, because we've not only built an unaccountable industrial-grade spying machine, but the government doesn't even have to pay for it: it pays for itself and incentivizes its own expansion.


"Hunters don't kill the innocent animals - they look for the shifty-eyed ones that are probably the criminal element of their species!

If they're not guilty, why are they running?"


I never said any of that.

What I said is that, on this specific point, a smart criminal won't get caught, and you too can very easily obfuscate that very same data.


Thank you for so eloquently explaining the bootlicking, privacy-indifferent mindset I've never understood. Also, sorry that I can't come up with a nicer way to say that.


Unless you're wealthy and powerful.

I guarantee the very wealthy or politically powerful have plenty of very-well-hidden cameras surrounding their properties.

Those rules are to keep you from catching and proving the powerful doing something they shouldn't.


I... well, I will be honest, this is the first time I've heard someone arguing that street facing CCTV was meant to catch _that_ kind of wrongdoing.

For the German context, and for the kind of CCTV I'm talking about, it makes no sense though.


Are businesses allowed to install cameras facing the street?


> If some person was able to pick me out from a lineup because they physically saw me then that wasn't private and privacy laws don't apply

It’s not an invasion of privacy. But it is a problem for other reasons

https://nobaproject.com/modules/eyewitness-testimony-and-mem....


So, when you're on your own property, should the cellular tower not be allowed to let your mobile phone register? Because it will record your IMEI while you are on your private property.


Yeah, but the electromagnetic spectrum is a limited public good. You don't own your broadcast radio waves the same way you own your house. Your cellphone is a pollutant.


Both the radio waves his cell phone emits and the signals used to communicate over the Internet (voltage changes on an ADSL line, photons moving in an optic fiber) actually leave his home, and are then recorded. So I think it's the same in nature as sending a letter. So let's consider, symmetrically, that you send a letter, and the police/agency asks the post office to record each letter's metadata (from, to, weight, stamp...) and match it against their database. If that happens for all letters going through a given sorting room, I can understand how that's an abuse.


> Privacy has been dead for a long time. The worst part is people don’t care.

I would argue “people don’t care” because… there isn’t a high enough number of people who suffer negative consequences from “their privacy being invaded”.


Maybe people would care more if there were more than two viable political parties to choose from?

Getting rid of First Past The Post voting in favor of something like Ranked Choice voting would allow people to vote 3rd party with no chance of a spoiler effect. This would introduce competition into the electoral process, improving the quality of candidates available to choose from. Even from within the current two mainstream political parties.


You play right into their hands by being demoralised (and trying to spread that to others)


Most people don't know. Or if they know, they don't understand the implications. As computer scientists, part of our whole shtick is to try to spread that knowledge far and wide. Most, I hazard, spend precious little time on that particular responsibility.


Most people think Facebook secretly records all their conversations because their ad tracking is just that good. They don't know the root cause but they absolutely do understand the implications.


"People do not care" - please stop repeating this false statement. When you repeat it, you give it legitimacy and take up time in which other statements could be made.

Most people are helpless to make change. More than one million adults serve in uniformed services of some kind, where they literally must comply. The ad budgets and the massive, overflowing volumes of money generated by "surveillance capitalism" buy the consent of the mercenary finance occupations. None of this means "nobody cares".


>Please stop repeating this false statement.

Society, please stop making it true.

>Most people are helpless to make change.

If you get even 10,000 people to petition the government for something, you can get things rolling. This relatively moderate post probably had 10,000 views. You don't need to do much, but you've got to get enough people to care enough to spend 10 minutes making a request. If they can't even do that much... well, they don't care.

This is the issue with an individualistic mindset, you hyperfocus on what immediately benefits you. Not the wider community around you which is needed for such petitioning.


[flagged]


You can have privacy and many of these things. You may be interested in homomorphic encryption, or the weaker version, differential privacy. There are also such things as zero-knowledge proofs.

But I think it is far easier to have these technologies without doing the encrypted aspects or protecting privacy. Then it is more a "never let a tragedy go to waste" situation. Paved with good intentions, right? This will always happen though as we rush and are unable to do things the right way. Often the right way can take the same amount of time but generally appears to take longer and that is enough, even if the wrong way actually takes longer. Because it also matters at what resolution we're concerned with.

So I'm saying it is mostly stupidity, not evil. Though evil loves and leverages stupidity.
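To make the differential privacy idea concrete, here's a rough illustrative sketch (my own toy example, not from any real deployment, and the function names are mine) of the classic randomized-response mechanism, the simplest form of local differential privacy: each individual answer is plausibly deniable, but the aggregate rate is still recoverable.

```python
import random

def randomized_response(truth: bool, p_honest: float = 0.75) -> bool:
    """Report the true bit with probability p_honest; otherwise report a fair coin flip.

    Any single report is deniable: a 'True' answer might just be noise.
    """
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports, p_honest: float = 0.75) -> float:
    """Invert the noise: E[reported] = p_honest * true_rate + (1 - p_honest) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_honest) * 0.5) / p_honest

random.seed(0)
true_answers = [True] * 300 + [False] * 700  # the real rate is 30%
reports = [randomized_response(a) for a in true_answers]
print(round(estimate_true_rate(reports), 2))  # should land near 0.3
```

The point being: a service could in principle answer aggregate questions ("how many people watched X?") without keeping an identifiable per-user log that can later be subpoenaed.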


You honestly think there is some masterplan? I love a good conspiracy theory, but that is nuts.

Nobody has made a plan, so it hasn't been long planned. Shit is just happening, and we as a society, a culture, and a race are adapting to it like we've always done.


Not a masterplan, but definitely guides. I mean, there are all sorts of meetings where the rich and powerful meet and arrange this or that - e.g. Bilderberg, WEF, Trilateral Commission, UN, WHO, etc. These aren't elected bodies, but somehow all the governments act in tandem according to their pronouncements. It's as if voting doesn't matter, as if it's merely a pressure release valve.

Elsewhere I posted about Technocracy Inc - which Elon's grandfather was involved with. (https://newsinteractives.cbc.ca/longform/technocracy-incorpo...). Just two generations later, and you have this guy apparently putting out electric cars, space ships, neural laces, etc. It's more coherent than you think. Bill Gates's father headed Planned Parenthood.

It seems to me that there is a group of very rich individuals who try to shape policy and direct this or that. It's really a very natural state, I think - we see this everywhere - e.g. a headmaster will direct a school towards this or that, a CEO does the same in their company. This is the same principle, except at a far higher level. I don't think it's even debatable that this is the case, tbh.


Well, I guess we're just naturally evolving and adapting into enslaving ourselves, just following nature's course... There doesn't need to be a "master plan", but clearly there are some working themes


Yep. Humans are lazy, greedy, and gullible. Give us a device which feeds our brains constant little dopamine hits and we will sacrifice anything for it.

Remember the experiments with rats and pigeons that could administer hits of pleasure-giving drugs? They all hit that button till they died. We are no different.


Well, humans do teach and practice self-control and restraint, so I wouldn't say we're all the same as rats. We just need more clever devices. The variety of content on the internet, constantly updating, means it can tailor to almost any taste.

Even then, sure, there have been extreme examples of such things, like a few Korean individuals dying in their rooms playing MMOs, not even taking enough care to feed themselves.


We know of the ruling class regularly getting together to plan, and surely they meet and collude far more often than we know about. Of course they are making plans.


This sounds scary, and yet I seem to be unharmed.


Shall we wait on the laws until you personally come to some harm?


No, but the argument did use "you" to imply that the reader was harmed. I consider that an illegitimate scare tactic. It would be better to talk about how someone else might be harmed.


If you weren't one of the 30k watching the video, you are the "someone else".


One case is criminal gangs buying data from brokers to scam the elderly.


Then they came for me. And there was no one left to speak out for me


Blackstone: "It is better that ten guilty persons escape than that one innocent suffer"

So not sure where you got the impression he's okay with up to 100 people being disturbed so we can catch one bad guy.

But then, he wasn't really talking about that was he? Better the guilty go free than the innocent suffer what? He was, essentially, talking about the principle of innocent until proven guilty; that innocent people shouldn't suffer by being punished for a crime unjustly.

2999 innocent people, in your formulation, though, are not being punished for a crime. They're not even being accused of a crime.


> innocent people, in your formulation, though, are not being punished for a crime. They're not even being accused of a crime.

They are, however, being harmed.

It's easier to use historical examples because they're not afflicted with modern politics.

The FBI was known to investigate and harass civil rights leaders during the civil rights movement. Suppose they want to do that today.

Step one, come up with some pretext for why they should get a list of all the people who watched some video. It only has to be strong enough to get the list, not result in a conviction, because the point is only to get the list. Meanwhile the system is designed to punish them for a thin pretext by excluding the evidence when they go to charge someone and their lawyer provides context and an adversarial voice, but since their goal here isn't to gather evidence for a particular investigation, that's no deterrent.

Step two, now that they have the list of people interested in this type of content they can go on a fishing expedition looking for ways to harass them or charge them with unrelated crimes. This harms them, they're innocent people, therefore this should be prevented. Ideally by never recording this type of information to begin with.

There is a reason good librarians are wary about keeping a history of who borrowed a particular book.


Surely you are not contending that Blackstone was of the position that no innocent person should be investigated, however briefly, unless it results in at least 10 convictions.

I very much agree that (some, probably minimal) harm is being done to these people. Pretending that they "suffer" in the sense Blackstone was using the word is disingenuous.


Being investigated is a red herring. The problem is from the other end. Your premise is that a person being investigated when they're innocent of the original crime is basically harmless because the investigation will come to naught. The actual issue is that if they can find a pretext to get a list of all of the people who viewed some content they don't approve of, now they have a list of targets with which to play "bring me the man and I'll find you the crime" and that is a harm in need of preventing.


> Your premise is that a person being investigated when they're innocent of the original crime is basically harmless because the investigation will come to naught.

Not at all. I would say that it's usually (not always!) small in the particulars but adds up in aggregate, and that we should be a lot more careful with how much surveillance we allow.

I just would also say that the kinds or amounts of harm being done there are manifestly not what Blackstone was talking about in his "formulation" as it leads immediately to absurd conclusions that go very well past the present case.

I will note here that "there is a concern here analogous to Blackstone's ratio" is a different thing than, paraphrasing what was up thread, "this is substantially more extreme than Blackstone's ratio should forbid".

And in case I haven't said it in thread anywhere, I share concerns about surveillance. I just think if we are enlisting support from historical figures, we should find a quote where they're talking about the question or acknowledge the distance, rather than pretending the quote means something it didn't - that will only turn off those who might be persuaded.


> "there is a concern here analogous to Blackstone's ratio" is a different thing than, paraphrasing what was up thread, "this is substantially more extreme than Blackstone's ratio should forbid".

I agree with this. What's happening here is different than the scenario in the original ratio, even though it's a similar concern.

> I just would also say that the kinds or amounts of harm being done there are manifestly not what Blackstone was talking about in his "formulation" as it leads immediately to absurd conclusions that go very well past the present case.

If we direct ourselves to the case at hand, I'm not sure that a general rule that the government can't compel innocent bystanders to assist an investigation against their will would even be a net negative, much less cause serious problems. When a crime is committed people will generally be inclined to help bring the perpetrators to justice, because who wants thieves and murderers and so on going unpunished? Whereas if someone is disinclined to help, we might consider that they could have a reason, e.g. because the law being enforced is unjust or they believe the investigation is not being conducted in good faith, or they simply don't trust the government with the information, at which point the ability to refuse acts as a reasonable check on government power.

> I just think if we are enlisting support from historical figures, we should find a quote where they're talking about the question or acknowledge the distance, rather than pretending the quote means something it didn't - that will only turn off those who might be persuaded.

I feel like historical quotes tend to detract from discussions in general, because they're effectively an appeal to authority and then the discussion turns to exactly where we are now, debating whether the current situation can be distinguished from the original, which is a separate matter from whether what's happening in the modern case is reasonable or satisfactory in its own right.


> What's happening here is different than the scenario in the original ratio, even though it's a similar concern.

Correct me if I'm wrong, but I'm pretty sure Blackstone wrote about negative or natural rights.

In fact, let me pull out more context around the exact quote. He specifically addresses direct punishment, but immediately after addresses the nature of the duty to defend one's innocence. Which is exactly the case here.

  Fourthly, all presumptive evidence of felony should be admitted cautiously, for the law holds that ***it is better that ten guilty persons escape than that one innocent suffer.*** And Sir Matthew Hale in particular lays down two rules most prudent and necessary to be observed: 1. Never to convict a man for stealing the goods of a person unknown, merely because he will give no account how he came by them, unless an actual felony be proved of such goods; and, 2. Never to convict any person of murder or manslaughter till at least the body be found dead; on account of two instances he mentions where persons were executed for the murder of others who were then alive but missing.

  Lastly, it was an antient and commonly-received practice that as counsel was not allowed to any prisoner accused of a capital crime, so neither should he be suffered to exculpate himself by the testimony of any witnesses.

I would not be surprised if Blackstone found the act of investigation without the qualification of sufficient suspicion a gross injustice, and directly relevant to his intent. This is a less inconvenient version of locking everyone in a room and interviewing them, checking their pockets for stolen goods before they leave. The negative or God-given right of innocence is innate. The punishment is the accusation and search, which is an explicit infringement on the natural right. Yes, rights can be infringed upon, but not without due cause, and not simply because one is in a position of authority.

I know that this is a point of contention in this (these) discussions, but I stand by the claim that a right is being violated and harm is being done by the simple act of investigation. Mass surveillance (which is mass investigation) is an infringement on our God-given rights. The point is to have friction for the infringement of rights. All rights can be violated, but there must be sufficient reason. It does not matter whether these rights seem inconsequential or not, because at the end of the day that is a matter of opinion and perspective. Blackstone was writing about authoritarian governments, and the birth of America was similarly founded on the idea of treating government as an adversary. These were all part of the same conversation, and they were happening at the same time.

I do not think I am taking the historical quote out of context. I think it is more in context than most realize. But I'm neither a historian nor a lawyer, so maybe there is additional context I am missing. But as far as I can tell, this is all related and we should not be distinguishing investigation (or from the other side of the same coin, exculpation) from punishment as these are in the same concept of reducing one's rights. They are just a matter of degree.

https://oll.libertyfund.org/titles/sharswood-commentaries-on...


> He specifically addresses direct punishment but immediately after is the nature of having the duty to defend one's innocence.

The issue is that the ratio can't mean much outside the realm of a criminal conviction when any of the rest of it would need a different standard.

Suppose we want to evaluate if it's reasonable for the police to search your residence for a murder weapon. Should we let 100 guilty people go free to avoid one search of an innocent person? That's probably not right, a search is enough of an imposition to require probable cause, but if you had to prove the crime to the same level as would be necessary for a conviction in order to get a warrant then searches would always be superfluous because they could only happen in cases where guilt is already fully established without the results of the search.

Conversely, with this YouTube kind of situation where the police want data on large numbers of people, the majority of whom are fully expected to be innocent, they're not even reaching probable cause for those people. Which is a lesser standard for justifiable reasons but it's still not one which is being met for those people. And so it's still a problem, but it's a different problem with a different standard.


I find this interpretation odd. I do not see the numbers as meaningful in a literal sense but rather in a means of making a point and a grounding for the surrounding abstraction. I think the point is to explicitly discuss these bounds and view them as a spectrum. To think of them in the abstract but to push back against authority.

Certainly Blackstone was not saying that infringement of rights (punishment) should not happen under any circumstance. Rather that there should be significant friction and that we should take great care to ensure that this is not eroded.


Another reason a lot of people got their hackles up is that you also had the ratio backwards:

> Supposing every person watched that video 10 times AND supposing the target was one of the viewers (it really isn't clear that this is true), that's 2999 people who have had their rights violated to search for one. I believe Blackstone has something to say about this[0]. Literally 30x Blackstone's ratio

"3000 innocent people for every one possibly guilty" isn't 30x Blackstone's ratio, it's 300,000x and worse, because the ratio is "100 actually guilty people for every one innocent". Of course, this actually helps your argument -- violating the rights of thousands of innocent people is unjustifiable -- but once you've given everyone cause to pause and work out what's wrong, they're going to reply with whatever they can find.


So what? Are we going to derail an entire point because a gaffe was made, even though everyone still understands the argument?

It's worth pointing out, but not worth derailing an entire conversation over. It generates noise that prevents us from actually discussing the issues at hand. We're people, not computers. We can handle mistakes (and look how much work we put into computers to make them do this). And this thread blew up; you aren't the first to point it out. So forgive me if I'm a bit exhausted.[0]

I've made several mistakes (including the first Blackstone link not pasting, and pasting the whole comment instead of the specific part I was responding to (thanks, Firefox)), and so have you, and others. But let's not make the conversation about that. We'll never get shit done. We can take an aside to resolve any confusion, but it is an aside. Clearly, by your explanation here, you understood the point. And clearly we know that the number itself is arbitrary. Are we gonna shit-talk everything Franklin said because he used 100 instead of Blackstone's original 10? No, because the number isn't what's consequential.

[0] We do meet each other here a lot and I have respect for you. It's why I'll take the time to respond to you. But I also know you to be better than this. I think you can also understand why it can be exhausting to be overloaded with responses and with a large number of people trying to tear down my argument by things that are not actually important to the argument. Specifically when the complaints make it clear that the correct interpretation was actually found. I'm happy to correct and appreciate mistakes being pointed out, but too many internet conversations just get derailed this way. The distinction of correcting vs derailing is critical, and the subsequent emotional response is clearly different in the two cases.

I'm happy to continue the conversation w.r.t. the actual topic (even where we disagree), but it seems like wasted time to argue over a gaffe that we both know was made, when we understand what was said despite it.


This is true of literally any investigation though


It isn't. If the police are investigating John Smith and they get a warrant for the files of John Smith then they don't also get the files of anybody else along with them.


And importantly, there has to be sufficient reason for investigating John Smith. It can't be arbitrary (he looks funny, has a limp, is black, is gay, plays Doom, is a Muslim, etc). Rights can be infringed, but they need reason. And they need good reason.


How do police find out that John Smith is the person whose files they want to get a warrant for?

Maybe because John Smith was one of only eleven people who signed in to a building on the day a crime took place, and he signed out right after the crime happened.

But should the police not look at the sign-in sheet at the building because that will infringe the privacy of ten innocent people?


> How do police find out that John Smith is the person whose files they want to get a warrant for?

The victims go to the police and tell them that John Smith stole from them, so the police go and seize his files to confirm that the victims are telling the truth.

> But should the police not look at the sign-in sheet at the building because that will infringe the privacy of ten innocent people?

Asking for the sign-in sheet and seizing the sign-in sheet by force against the wishes of its owner are two different things.


I would contend that Blackstone was of the position that no innocent person who was not of sufficient suspicion of committing a crime should be investigated.

These are innocent bystanders. There is nothing suspicious about their activities other than that they did something a suspected criminal also did. A perfectly legal activity. To take this to the ridiculous extreme: are we going to investigate everyone who took a poop at a specific time because a criminal did?

https://www.youtube.com/watch?v=DJklHwoYgBQ


Wait. Aren't "innocent bystanders" literally the first people the police wants to get ahold of, to interview and yes, potentially investigate if there's something off? People don't spontaneously become suspects, as if by radioactive decay; some degree of investigation comes first and is what turns "innocent rando" into a suspect.


> Aren't "innocent bystanders" literally the first people the police wants to get ahold of, to interview

Yes. But that's not the same.

> potentially investigate if there's something off?

If you're asking for information from people who witnessed a crime *and who are volunteering it* (which is not investigating those people and not accusing them of a crime; nor is declining to volunteer information itself suspicious), and they then produce suspicious evidence, then yes, that opens the door to investigation. It is true that things are not static, time exists, and entropy marches on.

That's the difference. There is nothing that these people did that warrants suspicion. These people are not being asked or questioned. This was not done voluntarily. They didn't even know this was happening to them. This was a thing imposed upon them, full stop.

I want to give a scenario to help make things clear. Suppose I send nudes to my partner. The government intercepts these without my knowledge, looks at them, deletes them, and literally nothing else happens. Is this okay? I did not know this happened to me. No "harm" has befallen me. And as far as I know, nothing has changed in my life. But then later I find out this happened. Let's say 20 years later. I feel upset. Do you not think I am justified in being upset? I think I am. My rights were violated. It is worse that it was done in secrecy, because it is difficult for me to seek justice. This is because I have the right to privacy. It is a natural, de facto, negative right, a God-given one. They put my information at risk by simply intercepting it and making a copy. It was unnecessary and unjustified.


> Blackstone was of the position that no innocent person who was not of sufficient suspicion of committing a crime should be investigated

I expect so. But pretending that's what he was talking about in the quote you were referencing is going to undermine your (our, probably) position with those not already convinced.


I'm not convinced I am taking him out of context[0]. Was Blackstone not also discussing natural rights? I see him as viewing punishments as infringements on one's rights. As a spectrum. And those rights include even the simple presumption of innocence. My best understanding is that so much of this is literally about the mundane and simple. Because natural rights are... well... natural. They are things we have until we don't. That's why they are called negative rights: they need to be removed, not given. Punishments (infringements) can range from extremely minor to major. But they are still one and the same, because it is about the concept in the abstract. Or rather, in generalized form.

As far as I can tell, this is explicitly within the context of the quote.

That said, I do see your point and appreciate your feedback. Maybe this can be an opportunity to turn this into a lesson? It seems too cumbersome to bring up from the get-go and similarly backfire. But discussing in the abstract is a difficult task when it is neither a natural way of thinking nor is it a common way that is taught. But I still think it is an important tool and something that does distinguish humanity. I am open to suggestions (I think HN of all places is more likely to be successful when discussing things in the abstract, but it is still general public).

[0] https://news.ycombinator.com/item?id=39798280


> They are, however, being harmed.

No they're not. Which ones will live a day less of their lives?

If I'm on surveillance footage near a crime scene, police have the right to look for me and question me. This isn't any different. It's just different sets of photons and electrons.

I respect the rights to privacy, but a crime happened, and the police have the tools to investigate. It's barely an inconvenience.

The burden of proof will still be on the investigators and prosecution to find out and show beyond a shadow of a doubt who performed the swatting.


> Which ones will live a day less of their lives?

The ones who, having had their political inclinations revealed to adversarial law enforcement, then become subject to harassment for those views which should have been private.

> If I'm on surveillance footage near a crime scene, police have the right to look for me and question me.

The question is whether they should have the right to seize the surveillance footage by force if the proprietors would rather protect the privacy of their users. The third party doctrine is wrongful.

And given that it exists, so is keeping records like this that can then be seized using it.

> The burden of proof will still be on the investigators and prosecution to find out and show beyond a shadow of a doubt who performed the swatting.

This is assuming they're trying to prosecute a particular crime rather than using a crime as a pretext to get a list of names.

And it's about the principle, not the particular case. Suppose a protester commits a crime and now they want a list of all the protesters. Any possibility for harm there?


> The question is whether they should have the right to seize the surveillance footage by force if the proprietors would rather protect the privacy of their users.

If there was a crime committed outside your home and you have surveillance footage that has captured passers by, you would not offer it to the police because you would rather protect the privacy of the all the anonymous passers by when one of them is likely the culprit?

That strikes me as highly unlikely. And if you wouldn’t, I am willing to bet that most people would. Why care about the privacy of anonymous passers by when you can help catch the perpetrator and increase safety around your home?


>Why care about the privacy of anonymous passers by when you can help catch the perpetrator and increase safety around your home?

Well, if we're resorting to hyperbole comparing a murder to "watching a YouTube video": say you knew there were multiple whistleblowers passing by in your footage, and they are all wanted by the government. Turning over the footage puts those whistleblowers, whose only "crime" is revealing government corruption, in danger. Is catching one crook worth endangering multiple good people?


> If there was a crime committed outside your home and you have surveillance footage that has captured passers by, you would not offer it to the police?

These are not the same. You might think the difference is subtle, but I'll tell you that that subtlety matters. And matters a lot.

And tbh, these two scenarios are quite different.


I think the analogy is rather strong. Where does it differ?


In one case the homeowner has surveillance footage and freely offers it to the police because they want to assist the investigation. In the other case the police seize the footage by force even though the homeowner is totally innocent and might not trust the government with a record of all of their own comings and goings and associates etc.


Just confirming that this too is the distinction I see. I'll expand since there is so much confusion around this:

The difference is how information was gathered.

People volunteering information to an authority? Perfectly fine (especially in cases when information was not requested).

People being compelled to provide information? Needs friction (checks and balances).

People being compelled to provide information about others, who are then unknowingly investigated? Needs even more friction.

It's also important to note that in the hypothetical, random passersby are not being investigated either. A specific type of behavior is being sought: either the explicit act of committing the crime, or a STRONG correlation with another piece of evidence (such as already knowing what the criminal looks like and trying to find a better view). Random people are not considered suspects.

In the article's case all viewers were considered suspect.


> No they're not. Which ones will live a day less of their lives?

There are cases like the bombing in Madrid where US agencies cast a wide net over possible suspects using data about people who converted to Islam, and then used a bad fingerprint match (which everyone told them was garbage) to terrorize one suspect for weeks. They had no evidence that the guy was involved, they had no evidence that any of their suspects was involved, but they had a narrative and were happy about every bit of data that supported it. Meanwhile Spain convicted the actual bombers.


> There are cases like the bombing in Madrid where US agencies cast a wide net over possible suspects using data about people who converted to Islam, and then used a bad fingerprint match (which everyone told them was garbage) to terrorize one suspect for weeks.

Some hyperbole in your telling of the story and failure to mention that he was awarded restitution. According to Wikipedia:

Brandon Mayfield (born July 15, 1966) is a Muslim-American convert in Washington County, Oregon, who was wrongfully detained in connection with the 2004 Madrid train bombings on the basis of a faulty fingerprint match. On May 6, 2004, the FBI arrested Mayfield as a material witness in connection with the Madrid attacks, and held him for two weeks, before releasing him with a public apology following Spanish authorities identifying another suspect.[1] A United States DOJ internal review later acknowledged serious errors in the FBI investigation. Ensuing lawsuits resulted in a $2 million settlement.

https://en.wikipedia.org/wiki/Brandon_Mayfield

What point are you trying to make with this example?


That it should never have happened?


And? As someone who was harassed and stalked by police officers (many years ago now), I assure you, restitution after the fact fixes nothing. I'd rather not be harassed and stalked in the first place, especially when such activities have led to a lifelong distrust of the people who, in theory, I should be able to turn to in trust. Today I wouldn't engage with a police officer regardless of circumstance. They are just another armed gang to me.


What's your point? Someone wrongly being detained/prosecuted/charged/hanged/etc in the past does not make it right now. It doesn't make it any less wrong then. Nor are the people of a government homogeneous, especially in America. The great American pastime is shitting on America. But if you're going to dismiss a wrong because wrong was done in the past (or even let it slide or be apathetic), that is enabling. Being upset and angry is very different from apathy.


My point was that it was wrong. I do not agree with casting wide nets in the hope that someone might fit whatever profile the police or other agencies have pulled out of their asses.


Your point came off as whataboutism. This may not be what was intended, but this is how I interpreted it and it appears that others did as well. Thank you for clarifying and I'm sorry we miscommunicated.


> Which ones will live a day less of their lives?

Your liberties encompass so much more than this, and a government that treads on them recklessly does far more damage than simply wasting an individual's time.

> It's barely an inconvenience.

You assume it's not. How would you verify this? Why should you have to?


> Your liberties encompass so much more than this

You don't have a liberty from being investigated if they have evidence. They're not snooping around in your home without cause. The swatter was watching the live stream, and the timestamped IP logs can corroborate.

Just because it was an IP address and not a face or license plate on camera doesn't make it any different. You can't hide behind a chosen technology stack as a shield when the fundamentals of the case are the same.


> You don't have a liberty from being investigated if they have evidence.

The evidence has to be specific to an individual. In these cases they're obviously not.

> The swatter was watching the live stream, and the timestamped IP logs can corroborate.

He wasn't the only one doing so.

> Just because it was an IP address and not a face or license plate on camera doesn't make it any different.

Yes it does. There are wildly different expectations of privacy between these two scenarios, this is immediately apparent, and easily demonstrable.

I feel like you're just trying to win an argument and not actually thinking this through. I'm not saying you're fundamentally wrong on the facts; it's just that following your conclusions blindly does in fact violate individual rights, and those rights are superior to the government's "right" to investigate crime.

> You can't hide behind a chosen technology stack as a shield when the fundamentals of the case are the same.

You can't hide behind weak evidence to violate the privacy of groups of individuals. The crime has already occurred. The damage is done. You can't solve that problem by causing _more_ damage.


> This isn't any different. It's just different sets of photons and electrons.

And a dictator is just another set of cells and organic compounds? You can't break things down like this, because then literally everything is the same. Literally everything you see is just a different set of photons and electrons. But those things have real effects. They aren't fungible. I don't care that my partner sees pictures of me naked, but I sure do care if cops or "the government" do, despite it being "just a different set of photons and electrons."

> The burden of proof will still be on the investigators and prosecution to find out and show beyond a shadow of a doubt who performed the swatting.

The burden of proof is step by step. I don't think I should have to cite the 4th Amendment but

  The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and ***no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.***
The setup was to treat the government as an adversary. Needing to understand positive rights vs negative rights[0]. Obviously rights are not infinite, but there should be friction. Doesn't matter if the thing is seemingly innocent or inconsequential, what matters is power. Perception shifts and creeps so this is why people take a stand at what might seem trivial. [1]

[0] https://en.wikipedia.org/wiki/Negative_and_positive_rights

[1] https://encyclopedia.ushmm.org/content/en/article/martin-nie...


> The setup was to treat the government as an adversary. Needing to understand positive rights vs negative rights

+1 on citing the constitution's wisdom of treating the government itself as adversarial, due to the enormous power it has.

+1 on pointing at the difference between positive and negative rights in this context.


There was a swatting incident and they have a time window where they'd like to corroborate IP logs.

An IP address is no different from a license plate on camera. It's a lead and the evidence was gathered at a crime scene. Nobody's home is being entered into. Nobody's iCloud account is being unlocked and ransacked. Gathering these logs alone won't lead to those things happening either.

I'm all for limits on power, but this seems to be entirely reasonable. This isn't a fishing expedition. IANAL, but I don't see how the 4th would be violated with either a court order or willing third party handing over the logs.

If the investigators get the IP logs, they shouldn't then be able to take those logs and ask the ISP for everything that those people were doing. The burden will be on the investigators to find more evidence linking one of those IPs to the call.

More crime will happen digitally year by year. Swatting has already entered the public consciousness. Just wait until people start strapping bombs to FPV drones or calling grandma with your voice.

We shouldn't stop at the software stack as some kind of impenetrable legal barrier that shrouds investigation. We should respect and enhance limits on power, but we also need to modernize the judicial tools to tackle the new reality.

The framers couldn't have imagined "swatting". The law needs to understand this. It should provide scoped-down investigatory tools that simultaneously guard and respect our constitutional rights and privacy. Access to anything beyond the scope of an actual crime that took place should be restricted.


In your story, the injustices are 1) the police going on a fishing expedition, and 2) the police using the data gained through an investigation to unjustly harass people. Those are bad things and we should have laws to prevent that and punish people who do so.

I agree it would be bad if they were making the request in furtherance of a conspiracy to do either of those things.

But the police asking Google for a list of people who viewed a video is, in itself, not one of those things. It's similar to them asking a business owner whose camera overlooks a street near a crime scene to hand over surveillance footage (which will include innocent passers-by), or asking a business that sells a product known to be used by a criminal to provide a list of purchasers of that product (which will include innocent purchasers).

Many such businesses will voluntarily hand over such information to assist with an enquiry. Some businesses might refuse, or might choose not to have such information.

And this is why judges are involved in the process of issuing warrants and grand juries in the process of issuing subpoenas when the police or a prosecutor want to compel the production of evidence of that sort.

But it just seems inevitable that, at the beginning of an investigation into a crime where the perpetrator is unknown, the first step is to identify possible suspects; by definition not all of the people so identified will end up being investigated. How are the police to do that if they can’t ask anyone for information that might bring innocent people’s names to their attention?

I appreciate it seems idealistic maybe, but it feels to me that we need rules that ensure ‘coming to the attention of the police in the course of an investigation’ is genuinely harmless; not rules that assume it automatically exposes you to harm.


>And this is why judges are involved in the process of issuing warrants and grand juries in the process of issuing subpoenas when the police or a prosecutor want to compel the production of evidence of that sort.

That's the issue: court orders aren't free to make, and factors like "it is filming a public street" are taken into account. There isn't anything "public" about viewing a video stored on a server of a large private website. And therein lies the rub.

Also, the story here isn't just "get me a list of 30k people who watched a video", which may be reasonable:

>The court orders show the government telling Google to provide the names, addresses, telephone numbers and user activity for all Google account users who accessed the YouTube videos between January 1 and January 8, 2023. The government also wanted the IP addresses of non-Google account owners who viewed the videos.

They want ALL your Google activity for a week, because you watched a video that may or may not have been recommended to you by Google itself. That can include schedules, emails, financial transactions, Maps inquiries, chat records, etc. It depends on how much you use Google, but Google can power a lot of aspects of life these days.

Even if you aren't on Google you have your IP revealed for simply viewing a video. That feels like an overreach.

------

The second factor is that they barely have a specific suspect. They just think "they saw this video -> they may be money laundering":

> Google to hand over the information as part of an investigation into someone who uses the name "elonmuskwhm" online.

I can't believe that passed a court order. Some random handle is selling bitcoin and may have watched this video, so let's get all the data of everyone who watched this tutorial at this time.

>How are the police to do that if they can’t ask anyone for information that might bring innocent people’s names to their attention?

By narrowing it down to fewer than 30k people. That can be an entire town in some smaller areas.

>but it feels to me that we need rules that ensure ‘coming to the attention of the police in the course of an investigation’ is genuinely harmless; not rules that assume it automatically exposes you to harm.

In my mind this is the more idealistic scenario. They've had decades to espouse this sentiment and they aren't even close to doing so.

Also, the issue is that it's not like the government deletes this data after they are done. Quite the contrary. Maybe the US government needs its own GDPR protocol so this won't be pulled up on record down the line.


Well I forgot to link but from the wiki

  Other commentators have echoed the principle. Benjamin Franklin stated it as: "it is better 100 guilty Persons should escape than that one innocent Person should suffer"
I went with Franklin because we are specifically talking America but let's be honest, the number doesn't matter and it seems you agree. Let's focus on that. Because I'm 100% with you, this isn't even people who have been accused. Which even those accused have rights.

https://en.wikipedia.org/wiki/Blackstone%27s_ratio


You've still got the ratio backwards. Franklin says if you know 101 people watched a video, and 100 of them are guilty of a crime, you can't just round up all 101 and throw them in jail. I.e., if you have a standard of punishment that would convict even one innocent person for every 100 guilty people it catches, it's not a good standard.

Which I think we can all agree with.

But that's not what's happening here, is it?


> you can't just round up all 101 and throw them in jail

Yes. "100 people" (or whatever) had their rights violated. Sure, not as bad as jail, but it is still in the spirit. I'm not sure why you think I have it backwards, I think we're just using different perspectives.

But I'm not into being pedantic if we understand one another.


so it is considered punishment to have the watch information revealed to the state?


I agree that it's egregious, but Sir Blackstone was talking about punishment, especially relating to execution, and I think perhaps the ratio can be adjusted significantly downward when the cost to the innocent is much lower. Otherwise, the only reasonable search would be when a government official is already certain of your guilt.


I'd consider your rights being violated "punishment."

Blackstone was talking in the abstract. Clearly Franklin was too considering many of the other things he's known for saying.


> I'd consider your rights being violated "punishment."

You are wrong. Punishment is when you impose a penalty as retribution for an offense.


You can call me wrong, but maybe first ensure you understand negative rights.

I understand why you think I'm wrong, but I hope you understand why I think that way. We can disagree, and that is fine, but let's not act as if there are objective answers in social constructs.

Because I do think punishment is an imposed penalty; in this case, on your rights. Rights are abstract, and they are neither binary nor clearly discrete. House arrest is not jail, nor are fines. But as communicated to you elsewhere, the 4th Amendment is about ensuring friction before removing someone's negative rights.

But I disagree that punishment is only a penalty imposed as retribution for an offense. You imply that this requires an actual offense to have been committed. I assure you that punishment can be imposed for any arbitrary reason. I can also assure you that punishment is a spectrum, from extremely minor (as I think we'd agree it is in this case) to extremely harsh.


You two are having a debate over semantics. Most dictionaries would probably agree that punishment is, definitionally, retribution for an offense. You cannot punish without a reason, because that's not punishment, it's just hurting people. Your definition may be different, and that's fine, but you can't logic your way out of a difference of opinion on the definition of a word.


>Most dictionaries would probably agree that punishment is, definitionally, retribution for an offense.

If your entire family is "interrogated" over some crime you did not commit, is that "punishment"? Maybe not legally, but it's a stressful time that may strain the relationship between you, your spouse, and your kids. And it's not like this is an investigator coming in for tea and asking a few questions, either.

Morally, it would indeed be a punishment. There is now a bunch of nervous sentiment in your family for no reason, all because you were at the wrong place at the wrong time. Or, to be frank, because you looked the wrong race at the wrong place at the wrong time.


Yes, but the semantics here matter.

We're following the ideas of Blackstone and subsequently those who founded the country in question. The US was founded on the idea that natural (or negative) rights were exceptionally important[0] AND that the government should be treated as an adversary (since this is the main body that could impinge upon natural rights). The idea isn't that natural rights can't be violated for any reason, but rather that there needs to be friction at every step, even the smallest. The reasoning being that the founders were intimately familiar with power creep.

So yeah, semantics, but literally the semantics that were the topic of overthrowing an entire government over. ¯\_(ツ)_/¯

[0] So much so that they appear in the second paragraph of the Declaration of Independence[1] (which goes on essentially ranting about this topic)

  We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.--That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, --That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.
As well as in the preamble of the Constitution; they are the subject of the 1st, 4th, 5th, 6th (arguably the 8th), and 9th Amendments (not to mention those that came later, like the 13th).

[1] https://www.archives.gov/founding-docs/declaration-transcrip...


> Supposing every person watched that video 10 times AND supposing the target was one of the viewers (it really isn't clear that this is true), that's 2999 people who have had their rights violated to search for one

I think whether their rights are violated depends entirely on what sort of information is handed over. Consider acquiring surveillance footage that has plenty of foot traffic, but a suspect is known to have passed by. The police are typically permitted to review that footage even though plenty of innocent people were captured on that video.


>depends entirely on what sort of information is handed over.

Apparently:

>The court orders show the government telling Google to provide the names, addresses, telephone numbers and user activity for all Google account users who accessed the YouTube videos between January 1 and January 8, 2023. The government also wanted the IP addresses of non-Google account owners who viewed the videos.

That definitely seems like an overreach.


Any kind of search can be deemed constitutional if it goes through a warrant process, which is the point of warrants. This story is less about the how the information was taken and more about whether or not the warrant process and 4th Amendment rights were properly followed.

This would then be mixed in with the question of whether or not new forms of data (like video views) would equate to previous forms of similar data searches that police have obtained warrants for (like reviewing CCTV).


If it is an order by a court, then I think it is OK. Then it is not mass surveillance, and it is useful for solving a crime.

I wonder what kind of video it is. Maybe a shared link, so only people who secretly knew about it could watch it, and they have become suspects. Is it mentioned in the Forbes article?

And I wonder if people abuse videos on YouTube by encrypting the content with a key that is then shared separately.
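That scheme would just be standard symmetric encryption with out-of-band key sharing. Here is a toy sketch of the idea (a throwaway XOR-stream construction for illustration only, not real cryptography):

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes by hashing key+counter (toy construction)
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)            # shared privately, out of band
payload = b"content hidden in plain sight"
blob = xor_crypt(payload, key)           # what gets published openly
assert xor_crypt(blob, key) == payload   # only key-holders recover it
```

The point is only that the published blob is useless without the separately shared key; anyone serious would use a vetted library and an authenticated cipher rather than this sketch.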


> If it is an order by a court, then I think it is OK. Then it is not mass surveillance, and it is useful for solving a crime.

Why can't a court order be mass surveillance? In these cases, the videos were viewed 30,000 times and more than 130,000 times (if I understand the latter correctly). How is that not mass? Nobody suggests that more than a few of those people are suspects.


My understanding of mass surveillance is that masses are surveilled. :) But here it's a court that allows extracting log data for a specific case, and it happened after the fact.

I draw a distinction between leaving loggable traces of living (which we leave all the time, no matter what) and occasionally filtering those traces to recapture the past.


That's right: even if the sample size is N=30,000, it is still a one-time point event controlled/approved by the proper legal authority. There will be an audit trail of said approval and the process will be documented.

In contrast mass surveillance is just "oh, we have a BIG database, and we query whenever for whatever purpose, and nobody knows who searched for what and when and why, and nobody EXTERNAL TO THE AGENCY needs to approve it (lack of control). And today, Bob, who works for the police, background-searched his new girlfriend as well."


Emphasis on "specific". This doesn't feel specific at all. Mass surveillance or not, this feels like a failure of the warrant process itself.


OP explicitly agrees with you that the 30k is illegitimate, but that's the only one you address. What's your take on the one where the police became aware that they were being watched on a YouTube livestream while responding to a bomb threat and obtained a court order asking for information about who was viewing a set of local livestreams at the specific times where they were searching for the bomb?

What makes that one different than a court order demanding that a business release security footage that covers the scene of a crime for the time window in which the crime occurred? Or would you consider such a court order to also be illegitimate?


Better one person not be crushed to death than two thousand nine hundred ninety nine suffer essentially no harm whatsoever.

No, that doesn’t seem very familiar.


If we're being honest here: it's money laundering. It's not an inherently harmful crime except to the IRS. So apparently it's the most heinous thing imaginable (meanwhile, every corporation...).

No one is in danger, this is an absurd request for an absurd result of... catching a dude using cryptocurrency the way the government doesn't like. Screw the government.


> I don't think any of this appears legitimate.

It isn't.

Democracy is fake.

Our justice system is fake.

Everything is fake.

(Where "fake" = "not what they are advertised or perceived to be.)

If all of the same things were occurring in another country, or even better: in a video game, you would have little difficulty and zero aversion to accepting these facts.

However: put a person into these things, and the brain malfunctions.

If you are a reasonably normal person, your mind will now be filled with objections to this proposition, reasons why I am incorrect. But if you were to state those objections, I can punch holes in every single one of them without even breaking a sweat.

We live in a literal simulation, but not the kind that everyone has been hypnotized to believe is the only kind possible - have you ever noticed that when the notion of simulation theory comes up, it is always The simulation theory (Nick Bostrom's)?

This is a pretty neat trick eh? And there seems to be nothing that can be done about it, because people will fight tooth and nail (using Meme Magic, aka "The Facts", "The Reality", etc) against being extracted.

Thankfully, it is simultaneously hilarious. Well, except the part where millions of children are dying, but nobody cares....but oh boy when a big scary "pandemic" comes along, pull out all the stops.

I hope that there is a Hell, because I would like to see every single member of this despicable 21st century society end up there some day. Seeing justice finally being served for once would be worth suffering for eternity.


>Well, except the part where millions of children are dying, but nobody cares....but oh boy when a big scary "pandemic" comes along, pull out all the stops.

You MAY have had a point somewhere in those ramblings, but this right here just kind of undoes any credibility.


A rather bold (yet oh so common) metaphysical & cognitive claim.

I think it's fun to imagine that pandemics are some sort of supernatural punishment for human hubris in Western nations.


The second one seems a lot more narrow and more legitimate


I remember Larry Ellison (Oracle) in the news saying that Privacy Is Dead a quarter century ago, long enough ago that the article below, written in 2018, merely referred to the remark as famous (no date needed). And that article is itself six years old now...

https://securitycurrent.com/privacy-is-dead-long-live-privac...


Did you forget the link to Blackstone?



> The first one where the police uploaded videos and wanted viewer information is absolutely egregious and makes me wonder how a court could authorize that.

The police didn't upload the videos. It's not entrapment, and it doesn't sound like the actual content of the videos is illegal.

Instead, they had an open communication channel with their target and were able to send them various links to youtube videos.

Their theory being if they can find any user who clicked on all (or most of) those links, it's probably their target. And it's unlikely some random user would have accidentally viewed all those videos.

The actual request for the raw list of all viewers seems unconstitutional to me. Too broad; it gives the police a lot of information about all users who watched just one of the videos. But I suspect a much narrower request, where Google identified the target user and passed just that user's info on, would be constitutional.
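The intersection idea behind that theory is simple to sketch (entirely hypothetical data; no claim this resembles Google's or the police's actual tooling):

```python
from functools import reduce

# Hypothetical viewer logs: video id -> set of accounts that viewed it
viewer_logs = {
    "vid_A": {"alice", "bob", "carol"},
    "vid_B": {"bob", "dave"},
    "vid_C": {"bob", "carol", "erin"},
}

# Accounts that watched *every* linked video: a far narrower set
# than the full audience of any single video
common = reduce(set.intersection, viewer_logs.values())
print(common)  # {'bob'}
```

Any one of those videos might have tens of thousands of innocent viewers, but the set of accounts that clicked all the links the investigator sent is likely a handful, which is why handing over only the intersection is so much narrower than handing over the raw lists.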


> But I suspect a much narrower request where Google identified the target user and passed just that user's info on would be constitutional.

Isn't that worse? Essentially making Google do the job of the police and the police having to trust the work of Google for it.


I don't see any problem with trust.

The police will still get the exact same raw data of that one target user. The change just means that they won't get any data on other users.


I certainly concur with this.

On the one hand, a narrow warrant that reveals a lot of people (a classic example is a warrant requiring a motel to provide the names of everyone who checked in, or was registered, on a certain date) is certainly constitutional and has been upheld many times.

The first seems odd.


  In a just-unsealed case from Kentucky reviewed by Forbes, undercover cops sought to identify the individual behind the online moniker “elonmuskwhm,” who they suspect of selling bitcoin for cash, potentially running afoul of money laundering laws and rules around unlicensed money transmitting.

  In conversations with the user in early January, undercover agents sent links of YouTube tutorials for mapping via drones and augmented reality software, then asked Google for information on who had viewed the videos, which collectively have been watched over 30,000 times.
This is the first case. This doesn't seem that narrow to me.


> who they suspect of selling bitcoin for cash, potentially running afoul of money laundering laws and rules around unlicensed money transmitting.

Wait, what? So is Bitcoin illegal to use as a currency now? Special casing exchanges for cash seems completely pointless if you could just buy some of <any commodity> for cash and then turn around and sell it back to the same person for the same amount in Bitcoin, but if every customer has to do KYC of the merchant when they're paying with Bitcoin, how is that ever going to be feasible?


I think a charitable interpretation is that they were suspicious of the money being used illegally. But I'm not sure there's enough information here to make this clear. Because clearly there's misinformation being spread. Claiming bitcoin is anonymous...


More likely they wanted to seize the bitcoin under civil asset forfeiture and buy some high-end cars for their D.A.R.E. program.

Charitably speaking that is.

https://www.autoweek.com/news/a2055556/venom-law-police-put-...


The first is a somewhat clever attempt to unmask someone an undercover investigator was already talking to. Police should have narrowed the scope of the warrant by only asking for data on viewers within a narrow window after they sent the link.

Even better might have been to directly link to some service that they already control on a honeypot URL, and then gone after the ISP for customer details.


Nah actually pretty dumb overall. And sending an open link when a private one looks the same is even more dumb.


A person from rdrama managed to find the FBI victim's anon reddit profile (public information), and apparently it was IRS evasion? https://rdrama.net/h/slackernews/post/255754/google-ordered-...


If you see a YT video which is remotely dodgy, don't watch it… It very well could be planted there as bait.

And it's great Google is trying to fight back, a little. Though I wonder whether, for us non-American Brits, they'd do the same (doubtful).


> The court orders show the government telling Google to provide the names, addresses, telephone numbers and user activity for all Google account users who accessed the YouTube videos..

Hopefully that clarifies for some folks why these big tech/social media companies insist on having your phone number as "2FA for security" despite all the SIM-swap attacks: simply for this moment. You might be using a VPN, and your address/name might not be in your Google account, but your phone number definitely is. It's even worse if you're using an Android, as they'll probably pull out all your app/browsing history too.


I'm not saying that there aren't other motives, but there are legitimate security concerns.

Credential stuffing is a huge issue for large providers, and requiring 2FA is a huge mitigation. Sure, a targeted attack will make the SIM swap, but that is a huge difficulty upgrade from generic credential stuffing.
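For a sense of what a stuffing signal even looks like, here is one deliberately naive heuristic (real detection at provider scale is far more sophisticated, and attackers rotating residential IPs can defeat simple versions of this):

```python
from collections import defaultdict

# Hypothetical login-attempt log: (source ip, username)
attempts = [
    ("203.0.113.5", "alice"), ("203.0.113.5", "bob"),
    ("203.0.113.5", "carol"), ("203.0.113.5", "dave"),
    ("198.51.100.7", "erin"), ("198.51.100.7", "erin"),
]

accounts_per_ip = defaultdict(set)
for ip, user in attempts:
    accounts_per_ip[ip].add(user)

# An IP cycling through many distinct accounts looks like stuffing;
# one user retrying their own password does not
suspicious = {ip for ip, users in accounts_per_ip.items() if len(users) >= 3}
print(suspicious)  # {'203.0.113.5'}
```

The threshold and the distinct-accounts-per-IP feature are both made up for illustration; the point is that stuffing leaves a statistical footprint even when every individual login uses valid credentials.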


Source - am a fairly experienced security engineer.

It’s a nonsense argument to say Google can’t handle credential stuffing without SMS 2FA in place, as in not pushing all 2FA via Google Authenticator and using the very wide reach and talented security team for baseline cred stuffing. Sec tools for this, even without being Google and their very talented sec team, are pretty good.

Wanting a hard phone number is a pure identification play, and is also about the more pragmatic concern (likelier than cred stuffing) of people using Google for burner accounts.


How do you handle credential stuffing? Attackers will use a huge number of regular residential IPs or VPNs that you would expect to see logins from. How do you tell a credential stuff from a regular login? They are both coming from unknown IPs with normal login rates and they have valid credentials.


Because there’s a bit more to it than just tracking IPs and rates.


You can avoid this with Google by using a virtual WebAuthN device (ironically via Chrome devtools), and then you will unlock the ability to enroll in MFA with a QR code for an OTP URL.


Which really underscores that all of the MFA stuff is actually about security. Because of course it is.


You can't avoid it during account creation.

So they still have the number.


I was able to avoid it, albeit two years ago, when signing up with Apple "Hide My Email." I never gave Google my phone number for that account.


This sounds like it’s new though? And maybe is for testing/dev and will go away?


No, it doesn't "clarify" anything like that. If Google doesn't have phone numbers for some subset of the accounts requested, they will just say so in their response to law enforcement, since that is completely legal and Google is not currently obligated by law to have phone numbers for all YouTube users. Sundar isn't going to prison over that or anything.

Saying that some PM at Google decided a decade ago something like "hey guys, let's build a database of our users' phone numbers to satisfy some theoretical future dragnet surveillance request from law enforcement, and tell our users it is for their own security" is actually quite a ridiculous conspiracy theory if you think about it.


"Guys, we know our users' names, addresses, all of their emails, browsing history, location history and contacts... but we're missing the critical information! Their phone numbers! Can anyone come up with a security justification for asking for it?"

-Nobody ever.

Come on, use your brain. Even if you are talking about smaller entities who might otherwise only have names and emails, why would they want phone numbers? They don't care about identifying you. And even if they did they already have your email and name.

Step away from the tin foil...


Brief counter, based on adtech knowledge.

Fingerprinting to a user, especially for a bulk request, without something to anchor on like a device ID (or phone number), is harder than you make it out to be. The end of third-party cookies and so on has had an effect.


> why would they want phone numbers

Because it is trivial to make a burner/secondary email address, but much less trivial to do the same with a phone number. Furthermore, everyone adds phone numbers to their contacts but very few add emails, so phone numbers are much more valuable from the perspective of inferring social graphs.

Both of these are extremely valuable for adtech and generic "growth & engagement" scum, thus why all companies matching this criteria started effectively requiring phone numbers. The 2FA/security angle is just an excuse for the true reason behind it.


None of that is related to providing identities to the government, which was his tin foil hat conspiracy theory for why 2FA is used.

I'd buy the spam reduction angle - it's a bit easier to get an email address than a phone number. But I have never seen a service require 2FA (except things like NPM and PyPI; but that's clearly for security) so I don't think it's that either.

I think it's pretty clear that the reason really is security. There's no conspiracy.


> None of that is related to providing identities to the government

Agreed. But I disagree that the true reason is security. The true reason is better stalking which is valuable to adtech scum which now happens to be the vast majority of consumer-grade tech.

> I have never seen a service require 2FA

Try registering on Twitter. They'll let you register but then randomly suspend your account for alleged ToS violations (even if the account was outright inactive), while giving you the option of instantly unbanning yourself via phone number verification. Microsoft will randomly lock out MS accounts without a phone number attached and will require a phone number for "security" upon the next login (the security angle being very dubious considering they don't have a number on file to compare to, so even an attacker can pass this challenge just fine). Etc.

> There's no conspiracy.

It's true, there's no conspiracy, it's just business and can be explained by common sense and economics. Phone numbers help tracking people. Adtech makes more money the better it can target its ads. Most consumer tech nowadays is intertwined with adtech. Said consumer tech thus optimizes for higher profit by collecting more data to help adtech.


> Try registering on Twitter. They'll let you register but then randomly suspend your account for alleged ToS violations (even if the account was outright inactive), while giving you the option of instantly unbanning yourself via phone number verification. Microsoft will randomly lock out MS accounts without a phone number attached and will require a phone number for "security" upon the next login (the security angle being very dubious considering they don't have a number on file to compare to, so even an attacker can pass this challenge just fine). Etc.

I have accounts with both of these orgs, not equipped with 2FA and none of what you describe has ever occurred.


> Phone numbers help tracking people

That's the conspiracy. They don't need phone numbers for that.

It's mainly security with a sprinkling of spam/bot reduction.

Do you really think NPM and PyPI are doing it to improve their targeted advertising?


[flagged]


The government is literally asking for that data though? It doesn't really matter why Google wants it, the effect is the same. We should be concerned and we should resist giving them as much data as possible, because when the government comes knocking they'll ask for everything they have. And who and what the government wants can change quicker than you might imagine (see how the Texas government is going after women having miscarriages, for example)


This is your choice obviously, you get to choose how much data you give to them. You can also choose not to give any data at all if you wish.


> The government is literally asking for that data though? It doesn't really matter why Google wants it, the effect is the same

But it DOES matter, because a conspiracy (which you're arguing is happening) is when parties make secret plans jointly to commit an unlawful or harmful act. They have to make those secret plans jointly for it to be a conspiracy. Weaving that narrative without evidence is the definition of a conspiracy theorist.


Certainly it's better to stick with the factual conspiracies like NSA's PRISM program: https://en.wikipedia.org/wiki/PRISM

or Room 641A: https://en.wikipedia.org/wiki/Room_641A

If they can install hardware in companies and run data collection programs, I don't see why they wouldn't be directing the data collection policies too.


You're delusional if you think the landscape of 10 years ago is the same as it is now. Companies are much less likely to simply capitulate to governments now. In fact, the references you make here are why they're less likely to allow these kinds of things.

It looks really bad for them, and it affects their bottom line when these things come out. When public trust in your service is crucial to its existence, you can spend more on lawyers to fight the government about it.


Right, companies are extralegal vigilantes fighting against the oppressive governments together with the people.

Are you sure that you are not the delusional one by claiming that this (working with the governments) affects their bottom line? How much money have the law-abiding, government-friendly companies lost so far?


> The court orders show the government telling Google to provide the names, addresses, telephone numbers and user activity for all Google account users who accessed the YouTube videos between January 1 and January 8, 2023.

Interesting aside: Viacom used a similar broad request back in 2008 [1] in its lawsuit that nearly put YouTube out of business in its infancy. This time, it's the government making the request, and Google has way more data to potentially provide.

[1]: https://web.archive.org/web/20100702111029/http://afp.google...



I'm increasingly coming to the opinion that anonymity isn't guaranteed, so you should assume everyone knows what you do and who you are. So you should probably just use your real name and do way less online.

Haven't fully swallowed this pill, but it's feeling inevitable.


We're on a tech forum known to have some of the best and brightest and visited by tech giants. If anyone can solve this problem, it is us. If we are the ones giving up, then who is there to make things right?

As I see it, our only choice is to make privacy and anonymity trivial. Not for techies, but for our tech illiterate grandparents. Push hard for tools like Signal where people can get encryption without having to think about encryption. People want privacy and security but they just don't know how or don't understand what leaks data. But there's the clear irony that the sector __we__ are critical to is the one who is creating this problem.

I'm not ready to swallow that pill. I'm unconvinced we have to. Clearly __we__ can do something about this, even if that is refusing to build such things, let alone building defenses. Apathy is no different from supporting this authoritarian takeover, because that's what it is: authoritarian creep.


    If anyone can solve this problem, it is us.
People on this forum (including myself) are the ones creating the tools that enabled this problem.

Any tech we create to "solve" this issue will be worked around and/or used to cause more problems.

Tech isn't the solution.


You're right that tech isn't the solution, but it also is. A hammer is part of the toolset to solving homelessness. It can also be used to create the homeless. We can build homes or tear them down. Hell, we can even smack someone on the side of the head with one.

Tech is too abstracted, and we must concentrate on the application. There is time for abstraction and time for specification. Tech is used to extract information as well as tech is used to protect information. These are actions, not objects or attributes.

And yes, it isn't the only tool in the toolbox. But it is a tool everyone here shares in common. It is a tool that many here are using to create this problem. One that many are probably not even aware that they are contributing to. But due to the commonality of our community and the commonality in its usage to create or exacerbate the problem, it is worth mentioning and considering.

Don't pass the buck. There are no singular causes nor solutions. So if we dismiss something because it is incomplete, we will never create any solution.


> If anyone can solve this problem, it is us

We've literally created this problem by making industrial-scale stalking profitable and socially-acceptable. We've created an entire self-sustaining industry that spies on everyone, is not accountable and that the government can just ask for data when needed.


Yes, but I don't think most people realized that they were doing it. Now we have a better idea. We can turn things around. You can just decide to not cut corners to do things fast and do things right. We talk left and right about enshitification, but let's be honest, it usually doesn't take significantly longer to do things the right way. In fact, I'd argue that generally you'll get things done quicker, but maybe not in the 2 week sprint timeframe.


Sounds like you're searching for a cultural or political evolution, not a technological one.

That's what influences the people who sign our paychecks.


Solving these issues won't make you much money, and anyone that gets close will invite more heat than the center of the sun. Better to divest. Keep an email and phone for essential services like banking but avoid all other activity.


Do you build tech for the money? It is not why I do it. Yes, I need to earn a living. But it is exactly that. What is necessary for living. What is the point of earning money if it is not to better our lives? Why is money the only way we can improve our lives?


> We're on a tech forum known to have some of the best and brightest and visited by tech giants. If anyone can solve this problem, it is us. If we are the ones giving up, then who is there to make things right?

You think the world's geniuses are hanging out here? The world's brightest are here, and you're going to inspire them to solve what you frame as a very high priority? There are much bigger problems to solve.

I really think your vanity is warping your perspective.


The privacy of the world's populace sounds like a pretty big problem to me considering the damage that can be caused by that information getting into the wrong hands.


There are many extremely intelligent people who post here. There may not be any Gandalf the Greys here, but there are dozens of tribal shamans.


> You think the world’s geniuses are hanging out here?

Maybe. But they at least frequent here.

> I really think your vanity is warping your perspective.

I think you undervalue yourself. I don't see myself as a big cog, but neither am I deluded into believing that being a cog in a much larger and more complex machine means I have little to no importance. Lesser, but non-zero. Were I to have the vanity you suspect, I would not be calling for your support; I would use my ego to solve it alone. But I am not. I can't do this alone. Nor am I drumming up people to collect wood and assign tasks; I am trying to help people find a longing for the endless immensity of the sea. I am trying to help us realize we aren't inconsequential and that together we have meaningful power. The big cog may be shiny and may have a lot more power, but it is still supported by a thousand smaller ones.

I have no illusion that people here work for Google, Meta, Apple, Amazon, Microsoft, and so on. Do you really think differently?


> best and brightest and visited by tech giants. If anyone can solve this problem, it is us.

I'd say this egotistical god-complex is exactly what got us into the current mess.


"It is difficult to get a man to understand something when his salary depends upon his not understanding it." -- Upton Sinclair

I have no hope that the people who created the very tools that led to these problems, are in anyway going to try and solve this problem.


Any truly reliable privacy and anonymity tool that isn't created by the government will probably be made illegal by the government. Failing that, using it will make you a target of the government's security apparatus. If you create a cryptocurrency that can't be traced[0] or an anonymous marketplace where people can buy and sell anything they want[1], you're going to end up on the wrong end of US government trade sanctions or US drug laws. Running a Tor exit node gets your IP address blocked by much of the internet and can even get you a visit from the FBI[2]. Tor itself only exists because it was created by the US Navy as a tool for dissidents in dictatorships to be able to access the internet.

The only way to solve the problem would be to elect politicians who would either dismantle most of the surveillance system or address crime and terrorism so decisively that there was no longer any plausible threat to justify continuing to maintain a mass surveillance apparatus in which case it would (hopefully) eventually wither away as part of budget cuts once politicians forget why it was even "necessary" in the first place. There is no solution to political problems without obtaining and using political power to solve them.

The strategy of eliminating the system's justification isn't foolproof though because the bureaucracy that runs the military draft (Selective Service) somehow still exists even though the draft was ended around half a century ago and is almost certainly never coming back. Politicians only noticed it existed a few years ago long enough to debate whether to extend the wrong of registration for it to include women in addition to men. The eventual decision was to leave the status quo intact[3]. The sensible option of abolishing that relic of a past rights violation rather than continuing to waste money on maintaining the bureaucracy was not seriously considered. That means the direct route is almost certainly the better approach.

[0]: https://www.theregister.com/2022/08/10/github_tornado_cookie...

[1]: https://en.wikipedia.org/wiki/Silk_Road_(marketplace)

[2]: https://www.reddit.com/r/TOR/comments/rjgq8s/ok_so_what_has_...

[3]: https://www.politico.com/news/2021/12/06/ndaa-women-draft-dr...


>cryptocurrency that can't be traced

Monero is still going on strong, as far as I know.


Monero seems to be well on its way to being banned by governments:

https://www.binance.com/en/feed/post/3817825785186


I think it's all about how many clues you leave behind. If you make a HN account that you only access via Tor through a browser with Javascript turned off and stick your writing through some AI editing service, it's probably pretty difficult to trace anything back to you. If you stream yourself 16 hours a day every day, your nickname probably isn't saving you from much, as it only takes one person to go "oh I know them" and then your secret's out. So like everything, it's about a striking a balance. Who is out to get you, and how much do you like doing things online? Just a question you can ask yourself before you move into a cabin in the woods and work on your novel 24/7 or whatever. (Publish it under a pen name, though, obviously.)


Consider: if your adversary is the NSA, CIA, or (maybe) FBI, you’ve already lost the game you’re playing.


You would be surprised at how easily they can be thwarted by simple technical maneuvers.

YMMV, but ime a lot of people have this bogeyman caricature of who the feds really are. The reality is that these are government agencies that pay significantly below market rate for really intense, highly demanding work shrouded with multiple layers of government grade red tape.


I think it's not a bad idea to overestimate the power of the government to track you; the common wisdom on the internet to make this assumption is probably a good thing so people are motivated to be as safe as possible.

On the other hand, it seems like the Tor users who get caught make clear, glaring mistakes in their opsec. And I always remember how long it took to catch the Unabomber, and how they apparently only managed to catch him because of his brother.


The issue is they have time. Lots and lots of time. And they keep records.

So if you get high enough on the list, it’s like those ‘immortal snail’/snail assassin scenarios.

Even Bin Laden got taken out and dumped in the ocean eventually.

So like Jan 6th - it had better work, or your goose is very likely cooked eventually.


I think the biggest trick is to move around, so it isn't as simple as getting a single address. Like with Bin Laden, a lot of the work was figuring out where he was. And Ross Ulbricht, maybe he wouldn't have been caught so easily if he changed hosters occasionally and the VPN had listed 100 internet cafes in different cities as connecting IP addresses instead of just 1. Certainly that's the way Tor works, always hopping around routers. It's maybe a bit pointless though, once they get your legal name it's pretty much a matter of time.


Damn, the snail assassin analogy works a lot better than I expected!


That's no reason to make it easy on them. Their ability to do bulk surveillance is limited by resources. Don't lower their resource requirement.


They're not trying to get everyone.

They just have to make it painful enough for enough people to get the vast majority of the rest to "fly right."

I'm certain that this is not terrorism.


It entirely depends on how motivated and how much resources they're willing to dedicate to finding you. They're probably not going to go to great lengths to catch a single copyright violation, so simple precautions may be good enough.

If you're a legit threat to national security, then yeah, they're probably going to find you no matter what you do.


If you're looking for privacy from your current and possibly future employers, you can obtain that by using a pseudonym online and taking basic measures to make yourself hard to dox. If you want privacy from the US government, that's not going to work.

Also, getting doxxed isn't entirely bad because it can open doors as well as closing them. Depends on how you leverage it. You just don't want the US government and/or the government where you live as your adversary.


> If you make a HN account that you only access via Tor through a browser with Javascript turned off and stick your writing through some AI editing service, it's probably pretty difficult to trace anything back to you.

This is already too hard. But anything that can be done needs to be wrapped up into a trivial to use interface. It has to be for everyone, not just people who are technologically {capable,knowledgeable} and have the time and energy to do this all the time every time. It needs to be standard.

Of course, we should fight this from both ends. Many ends. We shouldn't collect the data. We shouldn't process it. And we should build defenses.


But by doing this (Tor etc.), you've also potentially identified yourself as a person of interest who warrants further scrutiny. It raises the question: what are you trying to hide?


There's a crucial distinction here between the pragmatic and the normative, or else there's a feedback trap where accepting it as normal makes it even more common.

In other words you can plan around the worst case, but don't let go of the opinion/social-value that it's too-common and wrong and aberrant.


Talk to anyone in advanced privacy work, inside or outside of government, and the answer is a full stop: yes. Unless you're taking Snowden-style measures (Tails OS), really reconsidering where and how your phone travels around with you, and locking down your browser, it's done.

Tracking, and the firms that do it, is incredibly extensive and hard to beat (e.g. a browser ad you just scroll past can fingerprint you well enough).


I think that's exactly what Google's ex-CEO said years ago:

> If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place

which you can read either as a terrible "nothing to hide nothing to fear" comment or as a good warning about the factual state of things.


So one's privacy posture should be part of the complete security posture, and should ideally start at

"DEFAULT DENY ALL"

After which you can -of course- start opening up ports and start trusting people with information. Even if done imperfectly, one's attack surface is at least under some sort of control; a semblance of control can be taken, however aspirational in practice. It allows conscious control of one's information flows.

As you may have experienced yourself a posture of "DEFAULT ALLOW ALL" is effectively impossible to manage, since tracking down and plugging new leaks faster than they show up is pretty much like bailing out a boat with -well- a squillion leaks (and more every minute).
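In firewall terms, that posture looks something like the following nftables sketch (the rule set is purely illustrative, not a recommendation; the allowed services are assumptions for the example):

```shell
# "DEFAULT DENY ALL": the input chain's policy drops everything,
# then each trusted flow is opened up explicitly and consciously.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'

# Allow replies to connections we initiated, and local loopback traffic.
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept

# Example of a consciously opened port: SSH, only if you choose to trust it.
nft add rule inet filter input tcp dport 22 accept
```

Everything not explicitly allowed stays denied, which is exactly what makes the leaks enumerable instead of endless.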

Getting muggles to a safe default posture is going to be difficult. However, seeing the growing awareness in society it might not be impossible.

Think of nascent privacy initiatives by the EU (no matter how (in)effective as yet). Or you could think of starting school programs akin to "just say no" for instance, promoting more conscious and careful online behavior. It might never be perfect, but some level of herd resilience might be attainable?


What you say is indeed one possible way to deal with it.

Treat it as a public postcard signed with your name, and never for a minute assume that someone doesn't link what you say to your identity.

This mode of operating means you will be more polite when angered by some troll online, as you are not hiding behind some pseudonym.

And at least you won't be shocked when a website does what Glassdoor recently did, i.e. convert from pseudonyms to people's real names WITHOUT CONSENT OR WARNING. Surely by always using your real name you will not bitch about your employer on a website where your name is shown as the poster, and you will still want to get promoted, or at least retained as an employee.


So the whole internet will become like LinkedIn. Let the horror begin.



And that's why these social media networks ruined the internet. Companies like Facebook forced "real identities". Now suffer everyone for giving in.


I'm also waiting for the day, which is pretty much here now, when you will have to use a real name for any sign up form on any website. Something verifiable and not John Smith at phone 123 456 7890.


I more or less do that. Not really related to privacy, but I find that if I post as myself, I am more honest, less likely to troll, more considerate of others when I post. For me it's healthier.


Black mirror is far too accurate and coming true far too soon.


Glassdoor is dox'ing people so...


What are you talking about? Use VPN outside of US jurisdiction, register a random google account, and use YouTube all you want in almost full anonymity.

Just keep in mind that if you write comments, and you also write under your real name, it's relatively easy to identify you by the writing style.
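Stylometric matching of this kind works surprisingly well even with crude features. A toy sketch, assuming character-trigram profiles and cosine similarity (real attribution systems use far richer features and large corpora; the sample texts are made up):

```python
# Compare texts by cosine similarity of character trigram counts.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Count overlapping lowercase character trigrams."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

author_sample = "I reckon the whole thing is overblown, frankly speaking."
anon_comment = "Frankly speaking, I reckon this whole debate is overblown."
unrelated = "SELECT * FROM users WHERE id = 42;"

same = cosine_similarity(trigram_profile(author_sample), trigram_profile(anon_comment))
diff = cosine_similarity(trigram_profile(author_sample), trigram_profile(unrelated))
print(f"same-author similarity: {same:.2f}, unrelated text: {diff:.2f}")
```

The stylistically similar pair scores noticeably higher, which is why running your writing through an editing service (or an AI) before posting actually does degrade this kind of linkage.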


> you should probably just use your real name

Nonsense.


Ha ha.


Yes, agent Smith.


100%. We probably shouldn’t protest or even discuss non-conforming ideas. Just agree with the current rulers on all things to be safe. Also make sure to vote for the right leaders because who knows how long that’ll remain private.


Because of the possibility that the leaders will change - or change their opinions - in the future, the only safe course of action is to express no opinion whatsoever.


Your reading habits, TV show preferences, etc. already reveal your political beliefs, and they have already been categorised by advertisers like Google/Facebook/Apple/Microsoft and sold to countless data brokers and government agencies.


You don't have to read, and you don't have to watch tv. However that also tells a lot about someone. There's no escape.


Can't do that either, "your silence speaks volumes", etc etc


I'm on to you sir... trying to fix reddit.


We also, apparently, shouldn't even try to discuss and figure out any other possible approaches or responses to any given problem that might exist.

We don't care if this wall might possibly be easy to simply walk around and obviate, we shouldn't even look, or even talk about looking. The only rational way to attack any problem is to just look exactly in the direction you were led to look, bang your head on that same spot forever.


You won't dare post anything actually transgressive.


Or go back to how things were: Keep it to yourself and discuss spicy topics among close friends. Friends assume good faith. People on the internet tend to assume the worst interpretation possible and don't give any benefit of the doubt.


Not sure if you're being downvoted because people think you're serious, or because they dislike sarcasm.


Anyone thinking I was serious is a canary in the privacy coal mine.

That being said, I suspect it was just an unfortunate use of words (current / right leaders) that might lead some people to think I was being politically tribal. (nothing could be further from the truth)


I get that it is sarcasm, but disagree with the implication that the OP was suggesting complete surrender.

If you don't like it, walk away, seems reasonable to me. We don't own these corporate web sites and can only vote with our eyeballs (so to speak).


I thought (still do) OP was being sarcastic too and I was just playing into it.


I've seen sarcasm downvoted here before; it's usually a literal crowd here on HN.


A “literal crowd” sounds mildly pejorative. I think it's more that HN prefers productive, rational discussions. Sarcasm is passive-aggressive and a more circuitous route to the point than a literal one. Last, sarcasm isn't usually even funny; when it is, it's only funny to those who already agree with the point.


Critical feedback is still actionable. We don't need to guard adults from hearing difficult things and feeling difficult emotions.


That’s not what’s being suggested at all though. The problem with sarcasm is it’s boring but feels oh so clever to the person who writes it.


Let’s say block headed and pretty dumb.


Name calling is rarely useful. It certainly is not in this case, in addition to being patently false.


I downvoted because to me it tried to say what someone else should not talk about.

I don't disagree that wrong things should not be tolerated and that giving up and accepting is no answer.

Whenever someone tries to tell a complainer to shut up, I frequently point out that in the entire history of the Earth, not one thing ever got better by accepting things as they are. It's one of my favorite things to point out. So I'm very much in the reject giving up camp.

But I don't think it's necessarily giving up or cooperating to merely explore any and all other possible solutions to any given problem, and that comment struck me that way.

My impression might be unjust, and so by disclosing it I may take a few arrows myself, but for once, one is explained. :)


I mean do all that but don’t tweet it and leave your phone at home. That is the issue.

