"Think of the ramifications in the future if everyone who searched something in the privacy of their own home was subject to interviews by federal agents," Spodek said. "Someone could be interested in how people die a certain way or how drug deals are done, and it could be misconstrued or used improperly."
The problem is most people don't think like that. Most people think, "Why do I care? I search Google for sourdough recipes and to see what the heck 'sus' means. The chances of me getting in trouble because I did this are really low." And then they read something like this from the same place...
"After investigators linked Williams to the arson through the keyword warrant, they sent Google another warrant specifically for his account, finding that he looked up phrases like "where can i buy a .50 custom machine gun," "witness intimidation" and "countries that don't have extradition with the United States."
And they'll say "See, see! This works, this should be legal it catches bad people, look at what this guy was doing"
Enough people think like that, and there's no pressure to make this practice illegal, no pressure from the public to get laws passed because while some people look at this practice and see their lives being made worse, many people look at this and see themselves being protected from bad guys.
> Most people think, "Why do I care? I search Google for sourdough recipes and to see what the heck 'sus' means. The chances of me getting in trouble because I did this are really low."
Unfortunately, socialcooling describes a lot of "hey maybe" and "what if" scenarios without a lot of concrete evidence, while on the other side of the debate, there's a likely arsonist under arrest.
This website is the kind of stuff we need to convince people about why privacy is very important.
"You may not get that dream job if your data suggests you're not a very positive person."
Ever since I had access to the Internet to downl- I mean stream songs, I was always afraid of people finding out what I was listening to and corporations building a psychological profile based on what I listen. Oh you listen to some Japanese songs, hmm could mean he's into hentai porn, could be a liability. Death metal? Oh repressed emotions, possibly suicide, not good long term. Rap music? Criminal.
Now this doesn't seem so far-fetched, but with more facets of daily life being affected. The worst that could happen is them monitoring the lives of the people who will grow up to become our future politicians, if it's not already taking place, considering that most politicians' lives are much more private than other people's.
It seems like this only has an effect because people care about it for unexplained reasons. If I wanted a positive person to work with, wouldn't someone posing as one, while in reality being negative, effectively be cheating?
I went through a few years of studying basement synthesis techniques for high explosives, how to make initiators and boosters and how shaped charges worked. Add this to my ambient fascination with CNC milling, rocketry, robotics, long-range shooting, control theory, RF, etc.
...and alone, it's not a problem. BUT, if the police want to go after you for something else, they will use these searches to get a no-knock search warrant, and things can go very very sideways for you after that.
That's what I was thinking. The poster claims s/he just likes knowing things. OK, well, try to prove that in a court of law with the police on the other side.
Which is fine. People who just like knowing things are extremely unlikely to have their cellphone location data correlate with the site of the arson at the time of the arson.
I'm now thankful my country doesn't have a jury as part of its legal system, so this circumstantial evidence doesn't turn into a plausibility lottery. A non-zero chance that my unlucky location and something I searched for in the past $number of years [gives me trouble] is not, in my opinion, acceptable.
Judges are better trained to know when their biases are, in fact, biases, and to set them aside as best as they can. Not all do that to the best of their ability, and there's residual bias even with those that do. But, statistically speaking, if you're facing charges in the US and you're innocent, it's better to go with a judge than with a jury. And conversely, if you actually did it, you have better chances convincing a jury that you didn't, or that you deserve a more lenient sentence.
Here's an article I found on the topic and it suggests what I thought -- that judges are more likely to find people guilty than juries. Whether you think that means that judges are too close to prosecutors and police or juries are letting guilty people walk is a matter of interpretation, I guess. But to the concerns of the parent poster, juries have a higher, not lower, standard for "beyond a reasonable doubt."
More importantly-- as the use of such warrants rises unimpeded, the likelihood that your search history/cellphone location data correlates with a crime also rises.
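To make that rising-risk point concrete, here's a toy probability sketch. Every number in it is invented for illustration; the only claim is the shape of the curve: even if each individual warrant has a tiny chance of sweeping you in, the chance of matching at least one grows with the number of warrants served.

```python
# Toy model (invented numbers): if each warrant independently has a tiny
# chance of sweeping you into its results, the chance of matching at least
# one warrant grows with the total number of warrants served.

def p_at_least_one_match(p_single: float, n_warrants: int) -> float:
    """P(matching >= 1 of n independent warrants) = 1 - (1 - p)^n."""
    return 1 - (1 - p_single) ** n_warrants

# Suppose a 1-in-100,000 chance per warrant, and a rising warrant volume:
for n in (100, 1_000, 10_000):
    print(n, p_at_least_one_match(1e-5, n))
# At 10,000 warrants the cumulative chance is already close to 10%.
```

The exact inputs don't matter; the point is that "unimpeded rise in warrant use" translates directly into rising odds that an innocent person's data correlates with some crime.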
It's definitely a possibility. There's a solid line drawn right now: this data is treated as circumstantial, but as useful for focusing an investigation.
In the Golden State Killer case, the DNA correlations weren't admitted as evidence; they allowed investigators to narrow the search to an individual, where they then built a case using traditional methods.
As time goes on, the number of people willing to light something on fire, or indeed commit any pre-planned crime, while carrying their cell phone will drop to near zero.
People who, on the whole, aren't that bright already use the phone as a countermeasure to "prove" they were somewhere else.
Most cases won't be solved with correlated search data AND location data, meaning we will start with a much larger pool of people that will almost exclusively be composed of innocent people, whose lives can be ruined on mere suspicion.
I'm not sure precisely how the scenario you're describing differs from current police investigative techniques, where you can become a suspect for, for instance, walking past the wrong streetcorner when a crime was committed and being picked up on camera.
... which is just a digital version of being misidentified by the local corner store owner when the cops go around and ask if they saw anyone suspicious in the neighborhood.
It's simple. Imagine a bomb is set off in an indoor mall. Let's say, for the sake of argument, that no legitimate evidence exists.
The bomb maker wore gloves, wore a mask, walked up, and didn't bring his phone. Even so, the more tech we deploy, the greater the number of hits we are going to get.
GPS on everyone else's phones will tell you the names of hundreds to thousands of people who passed by within a plausible scenario; license plate readers can get you even more, and facial recognition more yet! We can pull in even more if we look for people who looked up anything vaguely related to bombs or the mall within the last month.
Spurious info from eyewitnesses and traditional police work certainly happens, but technology can trivially produce 100x the volume, regardless of whether that data includes the guilty party or not.
Furthermore, once you have your mark, guilty or not, our present system is designed to trivially secure the convictions of the innocent by punishing them ten times over if they insist on the actual process they are owed. Here, have 10 years (5 with good behavior), or how about a life sentence? And in order to have competent representation, I hope you have $20-50k to pay, or we will shovel you someone who will themselves pressure you to take the deal, because they don't have the time to deal with the caseload they have now.
Wrongful conviction is a fact of life, not a hypothetical. Some estimates are as high as 10%.
In that context, normalizing tech that shovels loads of low-quality evidence, which governments can use to increase their clearance rates by intimidating anyone who might be good for the crime into copping a plea, would seem to have predictable results.
If the goal of this hypothetical is to show how technological methods are a societal risk, I would suggest couching the hypothetical in something other than "a bomber that is otherwise un-findable." Technology making it possible to find them through data correlation---even if the technology also has some false-positive risk---seems like an argument in favor of these methods.
It's probably also worth noting that these technologies can be used on the opposite end of the story too. "but officer, I can't possibly have been at the site of the bombing, here's my alibi data trail" is also enabled by these mechanisms.
The false positive rate is the whole point of hypothesizing a mad bomber with stringent data practices.
If you’re relying on circumstantial web searches for “bomb” or “fertilizer” or “SubUrban Mall map” that correlate with cell phone records of who was there on that day, you’re pulling in farmers, and people who wanted bug bombs to kill spiders, and people who wanted to know how to make a Jaeger Bomb, and people who wanted to know the fastest way to get to Clothing, Ltd. in that mall, etc. And you’re totally missing the nutjob who printed out “bomb-making-101.pdf” a year ago and left his phone at home (or in a friend’s car) when he struck.
In an area with 100k nearby people, which is a reasonable search radius since we have cars and malls sit in populous areas, if the false positive rate is 1% you now have 1,000 people as suspects.
What’s your success rate on guessing 1/1000? What tools do you think law enforcement has to further narrow this down?
> What tools do you think law enforcement has to further narrow this down?
Further cross-correlation of more data. Have the farmers ever even been to the same town as where the bomb was detonated? Were they there that day? Do they have any apparent motive? What's their alibi? You still have to prove a criminal case beyond a reasonable doubt, so all the points you're raising about alternative explanations for such searches are exactly the things the defense would raise, and why police don't use this data collection as evidence, but to focus the search from 100,000 people to 1,000 people.
It's probably worth thinking about it that way: absent this data, police have to assume everyone at the mall might be a suspect, so the pool is 100,000 potential bombers, not 1,000. What's the success rate on guessing 1/100000?
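As a sanity check on the arithmetic being traded back and forth here, using the thread's own figures (100,000 nearby people, a 1% false-positive rate, one actual bomber who may not even be in the data):

```python
# The thread's own figures: 100,000 nearby people, 1% of innocents
# spuriously matching the warrant criteria.
population = 100_000
false_positive_rate = 0.01

# Size of the suspect pool produced by the data sweep:
suspects = int(population * false_positive_rate)

# Chance a uniformly random pick from that pool is the one bomber
# (optimistically assuming the bomber is even in the data):
chance_random_pick_is_bomber = 1 / suspects

print(suspects)                      # 1000
print(chance_random_pick_is_bomber)  # 0.001
```

Both sides of the thread are consistent with this arithmetic; the disagreement is whether "1,000 instead of 100,000" is a win for investigators or a pool of 999 innocents exposed to suspicion.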
In a world of ever-increasing "data noise," that's true regardless of whether the cops are using the data.
A person who has a credit card for five years, even with a few missed payments, has better credit than a person who's never had a credit card. The latter is an unknown, and unknowns are risky.
A person who was let go from their last job last month has a stronger résumé than someone whose last job was five years ago.
A person who reads Hacker News for five years but never comments on anything lacks the downvote privileges that a person who comments heavily for a few months has.
I feel like I've searched for sketchy stuff before and it's almost always in response to something on twitter or reddit where someone claims something that feels ridiculous and so I go on google to see if it's legit or not.
Call me crazy, but I think this is exactly how warrants should be done. Very specific in both time and place, and used to catch a criminal that did a very severe crime.
If you had been caught up, you 100 percent should have been scrutinized. Imagine if someone you loved was a witness to a terrible crime and her car was set on fire. You'd definitely want the police to prove who it was.
I'd want worse than just having the police find the guy. That's one of the major purposes of having a centralized justice system; it protects society from our individual impulses that people who wrong us must be punished at any cost.
Google, as a private corporation, should not be collecting evidence for the police before a crime has occurred. Spying and surveillance of the general public should never be accepted as the norm.
I highly doubt the police could have traced this back to the criminal if he wasn't logged in and didn't use his normal IP. For example, if he had used private browsing on a VPN, it would have taken someone like the NSA to track him down.
Right, I'm suggesting that we generate as much pushback as possible. One way to do that is to make future Google employees hesitate about supporting their unethical anti-privacy practices. Nerds have to stop giving Google a free pass.
> Enough people think like that, and there's no pressure to make this practice illegal, no pressure from the public to get laws passed because while some people look at this practice and see their lives being made worse, many people look at this and see themselves being protected from bad guys.
Why do you assume this will ever be put to a popular vote? You're treating this as if it were a choice made by the people in a democratic republic; if it were, your statement might be right or wrong.
But the reality is: the head of the public prosecutor's office realises that they need more power to get the numbers they're judged on: "bad guys" put away. Whether those guys are actually bad is not taken into account by the people evaluating them... So what's this person to do?
And they're not even bad people, they certainly try to get the bad guys. And, sure, being vilified in the press for going after people is undesirable, but beyond that ... and of course this leads to them pinning everything on poor (ex-)criminals who don't really have a choice, and who are judged by investigators to be "bad people".
Especially in youth law this is bad. Because proof is not required to convict anyone (technically, after all, "they're not convicted", just incarcerated for years), they just catch the first "bad apple" near a crime, and if there is no crime, they'll just make one up. Of course, the net result is more criminals, as these kids lose faith in society protecting them.
People do what they're paid to do. In the case of public prosecutors and investigators that's putting people (all kinds) behind bars.
Right. I dunno what they're to do. I also don't really know which side I come down on 100% of the time. Maybe this is an ok practice? It probably works to catch bad guys more often than not. But it's also likely to be abused often. I understand the arguments on both sides, and I tend to side with the privacy side rather than the other side. But I get the arguments on both sides.
Why should intellectual curiosity lead to a criminal investigation?
There have been plenty of times I've seen a term on the internet or heard one in a movie that I wasn't familiar with, and upon doing a search learned that it was unsavory. All this does is scare people into not thinking beyond what they already know.
Sorry but you're missing the point. What your parent poster is saying is that because most people don't care, there won't be any pressure to make this practice illegal. So I think you both agree that it is not good, but just explaining why they said most people don't care.
I worked in a bookstore in the '90s, and there was a fantastic series of books for authors about how poisons worked, what it is like to die in certain situations, all kinds of dark stuff designed to help crime-fiction writers in areas they normally would not know enough about. Now imagine looking that stuff up on the internet while writing a book.
You sound like you might run a lot of Shadowrun games. I've run tabletop Shadowrun games, and I've never had a weirder search history than when those games are going.
"Hm the players are going into the bank and need to stop some bad guys, I need a map quick"
Google -> "<my bank> blueprint"
"Crap they want me to describe the gun the guy has but the book has no picture"
I googled "how to make chlorine gas" after a friend mixed ammonia and bleach products thinking it would keep their dog from peeing in the same spot, and wondering why their eyes and skin were on fire.
>And they'll say "See, see! This works, this should be legal it catches bad people, look at what this guy was doing"
You lost me here. Of course people are going to conclude that arresting a person that set fire to a car in order to intimidate a witness in a sexual abuse case is a good thing. And why shouldn't they? That sounds like a slam dunk good thing for society.
> arresting a person that set fire to a car in order to intimidate a witness in a sexual abuse case is a good thing
The little problem is all the people who might have Googled 'how to set fire to a car' but who didn't actually set fire to a car.
Even the basics of conditional probability are not well understood by many judges and jurors - heck, they're not even understood by many people on HN. Many people have been convicted of crimes they didn't commit. We know of many cases where people have been exonerated after years in jail. We don't even know how many people haven't been exonerated and are continuing to rot in jail for crimes they didn't commit.
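A quick Bayes'-rule sketch shows why that conditional-probability point matters. Every number below is made up purely for illustration; the takeaway is only that a rare crime plus a not-so-rare search makes the search itself very weak evidence.

```python
def posterior_guilt(p_guilty: float,
                    p_search_given_guilty: float,
                    p_search_given_innocent: float) -> float:
    """Bayes' rule: P(guilty | made the search)."""
    numerator = p_search_given_guilty * p_guilty
    denominator = numerator + p_search_given_innocent * (1 - p_guilty)
    return numerator / denominator

# Hypothetical numbers: 1 in a million people actually set the fire, the
# arsonist almost certainly searched for it (0.9), and 1 in 10,000 merely
# curious people ran the same search:
p = posterior_guilt(1e-6, 0.9, 1e-4)
print(p)  # well under 1% - the search alone barely moves the needle
```

That is the base-rate trap: the search is strong evidence only when almost nobody innocent would ever make it, which is rarely true of searches like "how to set fire to a car."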
>The little problem is all the people who might have Googled 'how to set fire to a car' but who didn't actually set fire to a car.
That's not what they did. They got a warrant for those who searched the specific address around the specific time the crime occurred, then tracked the phone to the crime scene. Only after that level of identification did they subpoena the searches for that person.
Precisely. This is similar to the Golden State Killer case, where DNA correlation was used to find a possible suspect: it narrowed the focus of the investigation, but it wasn't the evidence brought to convict.
I think it's probably acceptable for society to grant large leeway for information that focuses an investigation, while keeping the set of acceptable evidence for conviction narrow.
The US Constitution doesn't just protect against being wrongly convicted. It protects against unreasonable search & seizure - this protection applies to everyone, and is wholly separate from any protections attached to criminal investigations.
This sort of thing is much wider than the US. Any government can request this information with a warrant. Restricting it to US ideas narrows the conversation.
This is, actually, an anti-feature for some people.
People don't want things they don't like to be effective, because they'd like to be able to argue against them on that basis as well.
"We shouldn't torture people because it is cruel and a violation of their human rights" is a purely moral argument; "We shouldn't torture people because it is cruel and a violation of their human rights, and it isn't even effective at extracting useful information or true confessions, study after study has shown blah blah blah" is a moral argument with an effectiveness chaser thrown in. Even if you don't share the same moral framework as me, I can still possibly persuade you to act in a way I see as moral because acting immorally doesn't gain you anything but wanton cruelty.
People love to do this for all sorts of arguments, from taxes to environmental or fiscal policies to their monopoly house rules.
Granted, most of the time people also just pull their effectiveness claims out of their ass, but it is a very effective rhetorical technique.
> That sounds like a slam dunk good thing for society.
That particular application is a good thing. But the existence of such capabilities can be a net negative, and by a huge margin.
Crime is something like a "cost of doing business" of a free society. Take all the crime away with an all-seeing Eye of Sauron and you risk a technological dictatorship that opens death camps for the currently unpopular race/class and crushes all resistance.
How can you say that? Do you believe Williams is guilty? Has he been found guilty? Even still, suppose you KNEW he was guilty - would you be okay with the government doing /whatever it took/ to find the evidence needed to prove his guilt?
Not to beat a dead horse, but to crystallize my point: it's not just about waiting until we find out if the guy is guilty or not to make a judgment. "Catching criminals" is an outcome/end. It's not really an action we can take. It's a future state that may or may not come into existence after we undertake certain means. "This particular application" is a means to an end. That is a key difference. I'm questioning whether the ends always justify the means.
>But the existence of such capabilities can be a net negative
And that's the problem with the pro-privacy side. Their argument relies on the idea that it can be a net negative but provides little evidence that it is a net negative. They argue theoretic harm against actual real world benefit. When that's not a winning argument they lament the existence of the real world benefit instead of re-evaluating their position.
> Their argument relies on the idea that it can be a net negative but provides little evidence that it is a net negative.
That's the problem with the majority of these cases, they involve testing the limits of what the government is allowed to do by defending the rights of really bad people whom most people have zero sympathy for.
I don't really expect the DEA to serve a no-knock warrant on my residence because Amazon currently thinks I'm a pot grower (due to buying a bluetooth thermometer which is apparently popular with actual pot growers) but if the government had All The Data I'm actually a pretty low risk for a bad bust once you input it all into the PreCrime2000™ algorithm.
I personally feel it's the responsibility of the people who wish to erode our civil rights to prove the net benefit far outweighs the protections we currently enjoy as a direct result of British colonial abuses -- which, IMHO, is all the argument you need, since the Bill of Rights wasn't created out of thin air for some idealized perfect society to strive for.
> provides little evidence that it is a net negative.
The negatives of a state that becomes powerful and evil have been thoroughly demonstrated in, for example, the USSR, Cambodia, North Korea and many other times/places.
>the study, published in the Proceedings of the National Academy of Sciences, determined that at least 4% of people on death row were and are likely innocent.
Keep in mind that death penalty cases get a disproportionate level of attention and process and ought to be the least erroneous.
In many lesser cases people are threatened with sentences that are many times more punitive than the average perpetrator receives in order to inspire them to take a plea deal for a sentence that is merely life ruining instead of life ending.
Wrongful convictions in these scenarios are liable to be much higher than the 4% innocent we murder in public.
In America some jurisdictions are corrupt to a legendary degree. Recent years saw footage of both one corrupt cop staging a scene after murdering a citizen and another literally caught on his own body cam planting drugs during traffic stops.
Countless other examples abound and I omit them only for brevity.
Tech like facial recognition, geofence warrants, and fishing expeditions that begin with search queries is inherently designed to cast a broad net, which criminals will increasingly avoid to the degree possible: by using VPNs, not googling stuff you plan to set on fire, not googling how to whack your wife, not bringing your phone to the crime. Whereas criminals will do the minimal work required to avoid scrutiny, it will continue to pull in normal people, who will be threatened with decades in prison in order to steal mere years of their lives.
Logically, exploratory usage of new techniques is liable to be carefully considered in order to establish useful precedents. Pretending that American cops would plant drugs but won't imprison black folks who happened to walk down an adjacent street in the 3-hour window where we think the rape happened seems disingenuous.
It's like looking at '50s cars through currently informed eyes and saying we need to see twisted bloody metal before we prove that seat belts are necessary.
Something else that no one has touched on is the asshole dilemma. Something that cops already deal with - sometimes there are just assholes that piss off a lot of people. And because of those assholes there ends up being a lot of people searching for them. Trying to find where they live so they can throw eggs at their car or whatever. Most people don't actually throw eggs at said car.
But hey, this person is pissing off a lot of people; what are the chances that someone does actually murder them? Well, higher than for non-assholes. And now you have a murder investigation that may include suspects who just fucking hate the guy but had in no way murdered him. But a Google search for his address the day before may tell the cops opportunity, and motive isn't hard to get shortly after. That kind of association is what cops will take to court and likely wrongly convict someone on.
Google effectively sells exactly this kind of information to commercial clients openly. That bothers me more than the information being used by police for public safety. Both are bad, but that I can directly market my snake oil to a 90 year old granny with dementia to loot her paltry social security check courtesy of Alphabet seems worse.
I'd say a majority of people are fascinated by crime.
Exactly. I’ve googled stuff after seeing it on CSI or NCIS. Or on the news, or in a movie, or whatever. You could definitely cherry-pick stuff out of my google search history to make out like I’m a big time terrorist criminal mastermind.
I long ago assumed anything I type into a Web site text box could _potentially_ be pulled out of context and used against me in court. It's constricting to have that voice in your head, but that's just the reality.
Wasn't there that time right after the Boston bombings where different members of a family had searched for things like 'pressure cooker' and 'backpack' and then got an FBI visit on terrorism suspicions?
While everything you said is true, the constitution is supposed to be what protects the individual from this kind of mob thinking.
General warrants have been unconstitutional since the founding of this nation; only through some very bad "3rd party rule" exemptions to many constitutional amendments is this even thought of as acceptable.
The 3rd Party Doctrine is unconstitutional, but it's got nothing to do with cases where there is a warrant. A warrant overrides expectations of privacy. Searching a computer database for activity directly matching a crime's attributes is not a general warrant.
>The problem is most people don't think like that.
But what is the problem here anyway? If there's a crime at a location, police detectives will investigate anyone that has any connection to that location. That someone may have searched for that location is certainly relevant to the investigation, but it doesn't imply that those individuals are automatically a suspect. In fact, they will be quickly ruled out.
The police are under immense pressure to put somebody away or else they're basically telling the public "we have no idea who did this crime". They need to find and convict a suspect to instill confidence from the public that they're doing a good job, in order to keep their jobs and their funding. This doesn't mean they need to find the actual criminal, only that they need to appear that they did.
The more information you have about a person, the more you can present to a jury selective facts about them that make them look guilty. And with enough circumstantial evidence you could make quite a lot of innocent people look guilty, despite having no causal evidence whatsoever. Google is providing circumstantial evidence, not causal.
> The police are under immense pressure to put somebody away or else they're basically telling the public "we have no idea who did this crime". They need to find and convict a suspect to instill confidence from the public that they're doing a good job, in order to keep their jobs and their funding. This doesn't mean they need to find the actual criminal, only that they need to appear that they did.
they don't even have to put the person away necessarily. an important metric used to evaluate police departments is the clearance rate, which is just the fraction of crimes where they found someone to charge. whether or not the charges stick or get thrown out entirely does not affect this number, and would more typically be used to evaluate the state/district attorney's office.
so really all the police need to look good is to collect the bare minimum amount of evidence to charge someone in a case and then move on. if the conviction rate ends up being low or a bunch of charges get thrown out, they can just say the prosecutors are lazy/incompetent. at the same time, the prosecutors can point the finger back at the police and say they throw out so many charges because the police don't collect evidence properly. as long as they drop the weak cases, plea out the middling ones, and take a few slam-dunk cases to court, their numbers will look okay too. all the links in the chain only get examined together in the few high-profile cases that linger in the news.
>They need to find and convict a suspect to instill confidence from the public that they're doing a good job, in order to keep their jobs and their funding. This doesn't mean they need to find the actual criminal, only that they need to appear that they did.
There are always instances of bad process and bad outcomes, but in general that's not how the justice system works. I'm sorry for your cynicism.
The justice system is run by humans, it will make mistakes ... so I'm not sure what you're proving by pointing out the tragic stories of individuals who were failed by the system. You will never design a system to 100% prevent innocent people from being punished, or guilty from being acquitted.
The big picture is this: America is a huge country of 320 million with thousands of people per day moving through the justice system. The vast majority are sentenced/acquitted based on the crimes they did or did not (respectively) commit.
It's not cynicism, it's Goodhart's law applied to actual law enforcement.
There are good police units who have it in their culture of not bringing a suspect to trial unless they have substantial evidence, and presuming innocence until guilt is proven. There are also bad ones who will take a person to court because it fits a description and allows them to close a case. Whichever path they choose, police won't be held accountable for taking shortcuts except in egregious, high profile cases.
>It's not cynicism, it's Goodhart's law applied to actual law enforcement.
What OP pointed out was a potential problem that could be rectified. I don't get a sense of the scale of this problem and I don't see any evidence that what OP specified is an intrinsically large-scale structural problem that leads to innocent people being incarcerated in significant numbers.
>There are also bad ones who will take a person to court because it fits a description and allows them to close a case.
America is a big country of 320 million, with 700,000 police officers, and thousands of people moving through the justice system every single day. I'm sure there are plenty of examples of the process breaking down. That says nothing about whether or not this is a systemic, structural problem. And even if it was, it could be adjusted by policy and regulatory changes.
>Whichever path they choose, police won't be held accountable for taking shortcuts except in egregious, high profile cases.
This kind of argumentation is so annoying. You're clearly talking out of ignorance, based on your own interpretation of how policing is done (which stems from movies and Twitter). What do you mean the police are not held accountable? Police departments are highly regulated. Every single aspect of policing has a process and regulation attached to it, coupled with enforcement and oversight from independent bodies.
> You're clearly talking out of ignorance based on your own interpretation of how policing is done
I am someone who knows someone who was convicted over purely circumstantial evidence, and I am only making the claim that we've built a system that leads to wrongful convictions in the face of incomplete evidence. The sibling comment gave 60 cases that would back this claim up, and you responded with "well that's just going to happen by law of large numbers." It's very easy to say it just happens when it hasn't happened to you.
Mostly there are zero suspects. These subpoenas are, as you point out, ways to find more suspects. There's just one issue: they also often turn up only ONE suspect. And here's the thing with people and police: everybody lies. The victim of the crime lies. Suspects lie (and suspects lie whether or not they've committed the crime; this is not a difference between innocent and guilty). Witnesses lie, even about something as minor as wrongful parking, and yes, the police encourage this: they can't ignore a confession of wrongful parking, and they understand they'll lose cooperation if they write the witness a ticket for their trouble. And of course the police officers themselves lie.
Whatever hope you have of finding the underlying truth in such a haystack in less than a few years of work... forget it. And then comes Google with "concrete evidence". Something that is actually true. Very likely not related to the crime, but true.
So you can probably understand that this would introduce quite a big bias against a "suspect", when their real connection to the crime is "someone was shot, and this guy googled 'gun' 2 days before". Furthermore, this gives them something to put to a judge or a jury, when before they had nothing, or nothing coherent.
Do you really think at this point a police officer, judged by the number of cases solved and bad guys put away, just walks away without at least applying very strong pressure on whoever was fingered?
Police have an obligation (and so does everyone else) to report any crime they're aware of.
But the same goes even more strongly for police officers: they cannot know of a court document (like a witness statement) that contains crimes and then not report those. Any such document kills the "I didn't know" defense. And parking violations are crimes. If they have a DA who doesn't like them...
So yes they do. In theory they risk their job if they don't. Now I'm not stupid. No precinct commissioner will fire an agent for this. However they will strongly discourage making a habit out of that. And if there is even a slight appearance that they're neglecting to report it because they're friends or family of the "perpetrators", then it becomes a LOT more serious.
The same goes for private individuals, by the way, except your job is safe (unless it requires a strict no-convictions record, as healthcare positions do). However, this is still one of the reasons to tell the police "I don't know. I haven't seen a thing", because if you had seen it, didn't report it of your own accord, and they don't like you, you'll be paying a fine.
>...Enough people think like that, and there's no pressure to make this practice illegal, no pressure from the public to get laws passed because while some people look at this practice and see their lives being made worse, many people look at this and see themselves being protected from bad guys.
Exactly, the problem is the people, not the system. I'm pretty sure such nonsense would never happen in a place with strong privacy protections like Germany. But the US is an authoritarian country, so the default stance is to always give more power to law enforcement.
> I'm pretty sure such nonsense would never happen in a place with strong privacy protections like Germany.
German government ministers announced plans to require companies like Facebook and YouTube to report certain forms of hate speech to the police and to provide the users’ IP addresses. Firms already have to delete such posts. -- https://apnews.com/article/ef38dbeb3c0d026e65f5dde6edeb3837
> I would argue the issue here is that such a warrant was granted in the first place.
I agree that the warrant itself is a huge problem, but I think there's another significant issue here. Google may have just been complying with the law, but that doesn't mean they had to be quiet about it. Unless they had a gag order, they should have publicly stated that the US government was compelling them to hand over search records.
From the article:
> Google declined to disclose how many keyword warrants it's received in the last three years.
They did not say anything about a gag order and, given Google's track record on privacy, I'm not inclined to give them the benefit of the doubt here.
Path of least risk and resistance is for them to declare it internally to be kept quiet. An army of internal counsel would be telling everyone in the room to keep their mouth shut to prevent everything from PR issues to actual legal threats. They don't even need a gag order, they'll do it willingly.
Why does Google keep IPs for searches? Are they required by law, and if so, why? It's not even needed for their advertising profiles. It would save them the trouble if the data were just not there.
Fraud detection, attack mitigation, system health metrics: Google is a large enough surface (and target) that it needs to collect and collate terabytes of data per day to maintain the health of the system.
The mechanisms it uses for attack mitigation alone have to be smart enough to recognize a botnet and drop threat traffic with a scalpel instead of a sledgehammer.
IPs are legitimate for minutes, maybe hours, but not days or weeks if it's about attack mitigation and fraud detection. Size isn't a factor in that, I believe.
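If short-lived usefulness is the premise, one middle ground sometimes suggested is coarsening IPs before they ever reach long-term logs, similar in spirit to the "IP anonymization" option some analytics products offer. A minimal Python sketch of the idea; the /24 and /48 cutoffs here are purely illustrative, not anything Google is known to use:

```python
import ipaddress

def anonymize_ip(addr: str) -> str:
    """Zero the host bits of an address so it stays useful for coarse
    abuse/fraud analytics but no longer identifies a single user.
    IPv4 keeps the /24 prefix, IPv6 the /48 (illustrative choices)."""
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 48
    net = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymize_ip("203.0.113.57"))        # -> 203.0.113.0
print(anonymize_ip("2001:db8:abcd:12::1"))  # -> 2001:db8:abcd::
```

A log written this way would still support rate-limiting and botnet pattern detection at the network level while having nothing useful to hand over in response to a keyword warrant.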
Sure, but in that sense ("it might help defend against an attack") you could argue that Google should also be allowed to keep & use all interactions listened to by Google Assistant, Google Nest etc. They may very well help.
Well, they can only do as much as we let them. Sadly, they can spy on the world and nobody cares, because privacy matters don't generate a strong push-back. OTOH, if they had an exec who so much as commented on a woman programmer's looks, he'd be immediately fired after a Twitter outrage.
Not that I don't agree with your main point about warrants being too easy, but wouldn't a "rubber stamp" equilibrium be inevitable in any fair system? Eventually the institutional knowledge builds up to have a good idea what requests will be granted before making them.
While that may be true, we have clear and repeated evidence that the warrant process has become a "rubber stamp" not because the police are just that good, but because judges have stopped reading, or even caring about, the constitutionality of the warrants they are issuing.
The burden instead falls on the defendant to challenge them in court at some later time, which, given the state of plea bargains, public defense, and other factors, almost never happens.
Sometimes I agree, although I do think they still serve a purpose. The warrant itself is some evidence of what the government is up to; that's important IMO and has value.
If the basis of the warrant is shown to be BS, that has consequences, not always a sure outcome, but it does happen.
I don't think it's helpful for this article to equate searching for a street address and searching for a keyword.
One is a lot narrower than the other in the sense that the data which Google delivers to law enforcement is a much shorter list.
If the government makes them hand over data about everyone who searched for a keyword, that's more prone to being dragnet surveillance. If the government asks who searched for a specific street address, they may have learned about that address through an investigation of a specific crime.
Of course, you're free to oppose both, but to me the reasoning to oppose one wouldn't necessarily apply to the other. They are different types of queries.
Is there a legal or internal Google policy basis for the distinction that you're setting up?
Otherwise, all this indicates is that, in this specific case, Google gave information about an address, but you have no idea that these "keyword warrants" are limited to the "narrow" conception that you put forth.
In fact, they explicitly reference a case where this keyword warrant was not used to look for an address, but rather a different search query.
>Is there a legal or internal Google policy basis for the distinction that you're setting up?
I understand OP to be distinguishing between "overbroad" and "not overbroad" warrants. The legal basis for that distinction is the particularity requirement in the 4th Amendment[0]. Overbreadth is a common point of contention that gets warrants denied/challenged when they probably cover a lot more targets than just the suspected party, for example, in this high-profile geofence search case[1].
For example: if you can prove "the perp lives in this apartment," it's okay to search it. If you can only prove "the perp lives in this apartment building," you can't get a warrant to search the whole building, because you'd definitely wind up searching a bunch of innocent people.
The different search query you're referencing was for a specific name during a 38-day period[2]. A man of that name had been defrauded by a person using a fake passport with an image that appeared in Google's image search results for that name (but not Bing's or Yahoo's, meaning it was likely obtained from Google). That, like the address, should yield a smallish number of total searchers (depending on the name/address).
OP seems to believe that warranted, specific searches aren't as unchecked as "giving data to police based on search keywords" seems to suggest. I'd tend to agree. Given the evidence presented, you could write a similar article saying "Bank of America is giving banking information to police based on Walmart purchases, court docs show".
Back when I worked for a county government I talked a fair bit with the sheriffs office IT folks. There had been a theft of a $1000 bill from a safe at a residence.
They asked Google for a list of all recent searches for "$1000 bill" in the area and received it. Unfortunately the report was enormous and not useful: apparently some TV show (one of those antiques-roadshow or storage-unit shows) had recently featured one.
If you're as intellectually curious as me, ... perhaps it's better to not think too much about what your search strings look like without context.
It seems to me that keeping a local copy of Wikipedia is becoming increasingly critical for being able to have private thoughts in a world deluged with information.
While you're at it, to help with this specific case, you could also get a local copy of OpenStreetMap. That way you can look up and navigate to addresses without touching the internet.
What I found fascinating is that Google is digitally signing the data given to authorities. This is more than any free or paid user can obtain.
You have a green padlock in your URL bar, but can you prove in court that you received a given email? No, because an SSL connection is transient and you can't replay it to show that Google's certificate digitally signed that email in GMail.
> a SSL connection is transient and you can't replay it to show that Google's certificate digitally signed that email in GMail.
Actually, there is the TLSNotary[1] protocol, which allows you to use the HTTPS connection as a means to sign the web content your browser received.
There is also the PageSigner browser extension, which uses TLSNotary to sign webpages.
However, it seems this project hasn't gotten a lot of love these last few years.
Good news: version 2.0 was released just a week ago[2], with support for TLS 1.2, but with a major drawback for me: it now trusts a server that generates the TLS keys for the notarized page. Sure, it's an "oracle" server not controlled by PageSigner, but it is still operated by Amazon.
> can you prove in court you received a given email? No, because a SSL connection is transient and you can't replay it to show that Google's certificate digitally signed that email in GMail.
A TLS session replay is unnecessary. If you've got the email with its headers intact then you can prove that you received that email.
You might want to learn what email headers are available to you. You'd be interested in learning about ARC-Seal, ARC-Message-Signature, ARC-Authentication-Results, and DKIM-Signature. Those headers will let you cryptographically validate that a message is authentic -- that the email is unaltered as Google received it from whoever sent it.
You'll also be interested in learning about Key Transparency: the signatures are created using Google's PKI which should be listed in any public key transparency log. That will let you prove that the signatures you validated were created using something that only Google knows.
> can you prove in court you received a given email
Actually, you can get DKIM pubkey from DNS, and verify the e-mail. My mail server does that upon reception, and appends an extra header with validation results to the message.
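For the curious, the first step of that check, locating the public key, is just a DNS TXT lookup at a name derived from two tags in the DKIM-Signature header. A minimal sketch in Python (the header value below is a made-up example; real verification also needs header canonicalization and the actual RSA/Ed25519 signature check, usually via a library such as dkimpy):

```python
import re

def dkim_txt_record_name(dkim_signature_header: str) -> str:
    """Given a DKIM-Signature header value, return the DNS name whose
    TXT record holds the signer's public key. Per the DKIM scheme, the
    key lives at <selector>._domainkey.<domain>, where the selector is
    the 's=' tag and the domain is the 'd=' tag of the header."""
    tags = dict(
        (m.group(1), m.group(2).strip())
        for m in re.finditer(r"(\w+)=([^;]+)", dkim_signature_header)
    )
    return f"{tags['s']}._domainkey.{tags['d']}"

# Illustrative header value, not taken from a real message:
hdr = "v=1; a=rsa-sha256; d=gmail.com; s=20230601; h=from:to:subject; b=..."
print(dkim_txt_record_name(hdr))  # -> 20230601._domainkey.gmail.com
```

Querying that name with any DNS client (e.g. `dig TXT 20230601._domainkey.gmail.com`) returns the public key that a verifying mail server, like the parent commenter's, uses to check the signature on receipt.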
> can you prove in court you received a given email
You can subpoena Google for the records. It would probably be easier to rely on that in court than cryptographic proof, as the legal system understands that much better.
I've made a lot of seemingly suspicious Google chats/emails, though completely facetiously. Like when I dropped off some DVDs at a friend's house near Golden Gate Park, I sent him a Hangouts message like:
"I made the drop at the agreed upon location near Golden Gate Park, Please send me the money, small unmarked bills only"
I've often joked about "I hope the police aren't reading this", and it turns out that they might be... thanks to Google.
I see what you are doing. Now that you know that Google keeps proofs of your past criminal activities, you're attempting to create a plausible story that would explain communication with your accomplices as a 'joke'. Nice try.
Makes me think of my own duckduckgo usage. I wonder if Google is the only search engine being leaned on, or if they're just the only one with any publicity around the shared data.
Unfortunately DDG doesn't publish any law enforcement request data :/ They say they have nothing to give, but they don't actually publish any info about what kind of requests they're receiving and what they're giving in return.
DDG is based in the US. It's under the same laws as Google and has to honor search warrants from the police and has to comply with laws (including FBI's NSLs).
I expect that DDG would make a huge stink about being compelled to do any such thing (Google, too. If people think the government is watching their internet searches, they'll go somewhere else.).
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Unless the government forces them to save user-identifying information, it's sort of pointless to ask for anything. But in France several bar owners were arrested for not complying with a law requiring that they track every user of their free wifi.
These documents are making me begin the switch to ddg today. Google should never allow this mass disclosure of data. I don't trust it anymore and won't with my email either.
> DDG doesn't record search information in the first place.
That's not entirely true. DDG saves user searches. They don't save identifying information (IP, UA, etc.) along with the user search.
The below is directly from their privacy policy [0]:
"We also save searches, but again, not in a personally identifiable way, as we do not store IP addresses or unique User agent strings. We use aggregate, non-personal search data to improve things like misspellings."
This is not an acceptable practice under the Constitution; of course, it will require court cases to stop or narrow these warrants. Geolocation searches (being in the wrong place at the wrong time) are bad enough, but everyone searching for "X" could capture thousands or even millions of people.
I think it's a little unclear whether it's unconstitutional.
Google's logs of what users did on its service are Google's property, not the property of the users. It's like asserting the cops can't issue a warrant to your bank to look for transactions to Capone Real Estate Holdings, Inc... Of course they can.
So assuming that your argument holds, warrants mean nothing when private companies own everything and you don’t. Doesn’t sound like what was intended when the constitution was written.
I don't think the people who downvote this comment have really thought about whether they own their phone, their computer, their printer, their ebooks, their car, their software... We have come to an era where you "buy" things but then can't repair them, can't modify them, and eventually can't even use them once the maker decides so. Most modern machines are effectively leased, not bought.
Why should it be this way though? Under GDPR, citizens have certain rights over their own data, which is meant to protect their privacy. Shouldn't we have something similar here?
In these cases, do they actually have evidence a search engine query was made? How do they decide which search engines to subpoena? Do they just subpoena multiple search engines?
To search your house for evidence, they must have reliable information showing probable cause of evidence specifically in your house, and not somewhere else. If they don't know where the evidence is, or what it is, they aren't allowed to do the search.
But in the arson case, do they have evidence the suspect used Google in the course of the crime? If not, how do they have probable cause for the warrant?
Liberal democracies have forgotten they are liberal. They are democracies with a market and an army. There were monarchies in the past with better individual protections.
The way the word 'liberal' is commonly [mis]used in America makes me think many people don't even know what it means. I've even had people look at me weird when using it in a non-political context ("I'd like a liberal serving of mashed potatoes please.")
I'm really surprised no one has mentioned the short story by Cory Doctorow "Scroogled" [0] (Not to be confused with the MS ad campaign). It's about US officials using what ads you were served to vet you, not too far off from using your search results.
I was about to mention it. Although it's a bit dated, I strongly recommend anyone in software read it. It's a quick read and enjoyable. I think the title is a nod to tireless privacy advocate Daniel Brandt, too.
I guess this is what happens when a single company has a monopoly over the search engine market. This couldn't have happened if law enforcement had to request information from say, hundreds of search engines in order to carry out keyword warrants.
It’s interesting to try to unpack why this “monopoly” bothers technologists a lot less than, say, one of a half dozen vertically integrated digital stores for gaming and app devices such as Switches, Xboxes, and iPads.
Proportionality, legitimacy, process and oversight.
If the FBI thinks someone is making a nuclear bomb in some postal code, there should be a process for helping to sort that out, because the risks are existential.
For other things, it's more tricky.
For smaller issues it shouldn't be in the toolbox.
There's nothing remotely unconstitutional about search warrants. Also, the way we would foil a situation of 'existential concern' would be whatever is most likely to solve the problem.
IANAL, but “give me all IPs that searched for ‘isotope separation’ in a certain zipcode” is not in fact a constitutional search warrant (or at least shouldn’t be in a non-totalitarian state).
If there is a legitimate threat of someone making nuclear weapons / WMDs, then there is 100% legitimacy for this kind of request; as for the constitutionality, under US law (or its EU/Canadian equivalents), it would pass, hands down, without question.
The reason I use the 'nuclear weapon / WMD' argument is because it's fairly extreme, and it's to get people to snap out of the default anti-government/dystopian mindsets to remind us what the law is effectively there for.
While such situations are rare, they do exist, and they demonstrate the length at which we have to go to keep civic order - and very legitimately so.
So it would be 'authoritarian' to conduct such searches when looking for someone not paying their taxes, and definitely unlawful for politicians using this on their opponents.
But it would be 'practical and lawful' in this scenario.
But it's all moot - in reality, if there were a 'real threat' - the National Guard, Police, FBI etc. would be doing full searches of everyone's home, block by block, helicopters, dogs, SWAT Teams, Firemen, the whole thing. There wouldn't even be popular outrage, the ACLU wouldn't be putting up a fuss, everyone would be at home, in fear, watching CNN, until they 'found the guy'.
It's basically false to suggest that warrants or information can't be used legally in pursuit of a crime - it's a knee-jerk reaction, I feel, from the anarcho/privacy ethos common here on HN.
It entirely depends on the situation - risk, proportionately, oversight, tactics of execution etc..
And it's never perfectly clear, that's why civilization is hard.
> While such situations are rare, they do exist, and they demonstrate the length at which we have to go to keep civic order - and very legitimately so.
No, they don't. Such situations are so rare that society can tolerate their once-in-a-lifetime occurrence.
A rule that applies 100% of the time is much more than 1% better than a rule that applies 99% of the time. Don't open the door even a crack, because someone will get their foot in and add more exceptions, exemptions, and caveats. And as I've said before, exceptions are a leading indicator of corruption.
(Also as other comments point out, there exist plenty of effective policing techniques to solve even these hypothetically dire situations without requiring mass surveillance.)
If the FBI thinks someone is making a nuclear bomb, they should have much better evidence than Google searches. Nuclear weapons are fascinating, and I'm sure that there are plenty of people all over the country (myself included) who have dug into how they work even though they aren't going to build one.
So: 1) it's not always about 'evidence'; they may be trying to find someone. 2) They may need evidence to obtain more aggressive warrants, or possibly the right to make an arrest while they don't yet have the final evidence. And 3) court cases are hard; you need to have everything lined up. Raiding the garage, they find materials; someone can say 'it wasn't me', but when they show searches being made from the Google account... then that's evidence relating the materials to intent, etc.
It's not rocket science to make some rules so we don't end up in dystopia and at the same time can get information we really need. We just have to be smart about it.
Edit: case in point, literally as we speak[1], a plot to kidnap/overthrow the Michigan governor by an armed quasi-militia. There's no doubt technology was used in thwarting this. "The FBI became aware of the scheme ... in early 2020 through social media".
A vpn at best just increases the costs for multi-country collaboration, but doesn't prevent it. This is the federal government, not some local municipality.
Even your favorite Swiss-bunker VPN can be subpoenaed or tapped extrajudicially, and you would never know.
I think that's wrong on a couple counts. First, the main concern with this type of collection based on search terms is that it has a high likelihood of picking up users who are totally innocent. If many of those users are obscured by VPNs, that raises the bar considerably for law enforcement to de-anonymize each of them (most, if not all, of whom LE knows will be total dead ends).
Second, concerning a VPN being tapped by US law enforcement agencies, I think depending on the country in which it's being operated, that is more or less likely. These companies' entire businesses rely on the perception of security, so government interception of their communications would be an existential threat to them. I think it's also likely that interception (and crucially, use of intercepted materials as evidence in criminal prosecution) would probably get out sooner or later, just as it has in the above story. The idea that federal law enforcement has access to everyone's communications everywhere all the time probably overstates their capabilities.
As I understand it, there still remains a pretty big difference between the government's active/targeted vs passive/dragnet surveillance. A VPN is probably still useful for defending against the latter.
Everything you wrote requires ignoring the entire last decade.
But fine, let's use this article as an illustrative example. The government asked Google for IP addresses, then did an additional thing to determine the phone number was tied to the person they indicted, whether that was via a Google account or by going to the cellular provider.
Nobody in this thread is arguing about the false positives snatching innocent people, just the VPN part.
With a VPN the same government would have gone to Google, then to the VPN company, then to the cellular provider.
VPN didn't help the customer, just slowed the government down.
You also wouldn't know if passive surveillance was being done on the VPN provider, which has been shown to happen to ISPs before, per the Snowden leaks 7 whole years ago, which were about things that were even older. The NSA and FBI used National Security Letters to directly tap the servers indefinitely, and the CIA would just pay companies to directly tap the servers. They can be more easily coercive domestically, but other countries, even countries without automatic cooperation treaties, are eager to help US investigators, and have.
You wouldn't know if Proton/ExpressVPN/Nord you name it were passively sharing data, if they began keeping logs if at one point they say they didn't, if you were the active target, etc.
> But a recently unsealed court document found that investigators can request such data in reverse order by asking Google to disclose everyone who searched a keyword rather than for information on a known suspect.
"Just let us scour through the data to see who committed a crime."
How is this not a fishing expedition? Does the 4th amendment not mean anything anymore in the US? Are judges still this clueless about such obfuscated requests from the police?
It smells a bit analogous to pulling a street-corner security camera's feeds and looking at everyone who passed through an intersection to correlate with other data the cops have.