>will give the next generation of smartphones 30-centimeter accuracy instead of today’s 5-meters
Didn't the US scramble the civilian signals specifically so it wouldn't be too accurate? And they're just okay with civilians having access to ultra accurate GPS receivers?
> In May 2000, at the direction of President Bill Clinton, the U.S. government ended its use of Selective Availability in order to make GPS more responsive to civil and commercial users worldwide.
> The United States has no intent to ever use Selective Availability again.
The military has better precision, only because they buy better hardware. Similar hardware is used in commercial drones to improve GPS accuracy.
> Using two GPS frequencies improves accuracy by correcting signal distortions caused by Earth's atmosphere. Dual-frequency GPS equipment is commercially available for civilian use, but its cost and size has limited it to professional applications.
>The military has better precision, only because they buy better hardware.
That's not true. Military receivers can use the Y Code which is encrypted for military use only. There are some really fancy civilian receivers out there that can leverage Y code to a limited extent without knowing the encryption key but full accuracy is only possible on a military receiver that can decrypt and use the Y code.
By the way, has anybody tried to crack the P(Y) code using a kind of "rainbow table"? (I'm not advocating that ofc, just curious...) I mean, it gets its security because the parameters to its generating function are not known. But the function repeats after about a week, and presumably can't be changed due to the large number of legacy devices in the field. So you could just record the whole signal for a week (I did a back-of-the-envelope calculation some time ago and it would be about a TB IIRC). Then just build some kind of index, and after recording a few seconds, you can find your position in the code and synchronize to it.
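For what it's worth, that back-of-envelope figure checks out. A minimal sketch, assuming you quantize to one bit per chip at the published 10.23 Mchip/s P-code rate:

```python
# Storage needed to record one full ~weekly period of the P code,
# assuming one bit per chip.
CHIP_RATE_HZ = 10.23e6        # P code chipping rate
WEEK_S = 7 * 24 * 3600        # the P code repeats after about a week

chips = CHIP_RATE_HZ * WEEK_S
terabytes = chips / 8 / 1e12
print(f"{terabytes:.2f} TB")  # about 0.77 TB, i.e. "about a TB"
```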
Also, I've been wondering how effective the old scheme with a secret military code and a public code was - certainly an old GPS receiver shows up sometimes at a military surplus store. I wouldn't be surprised if enemy nations bought one of those and extracted the secret parameters from the now quite old chips using modern technology.
P code itself has a period of a week but the W code that scrambles it doesn't have a known period AFAIK.
Semi-codeless receivers can use it anyway. The reason is that the W code has a much lower chipping rate (500 kHz, I think).
Take the incoming signal and correlate it with the aligned P code. The result will be a MHz-wide signal; bandpass filter it, and square the result. The squaring wipes off the residual W code. Because you bandpass filtered before squaring, you don't get as much squaring loss. Lock the resulting carrier, and use RTK signals and long observations to resolve integer ambiguities.
For a dual frequency rx you can exploit that P(Y) is the same on both frequencies and correlate them against each other to find the ionospheric delays.
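The squaring step described above can be demonstrated with a toy signal (all parameters here are invented and nothing like real GPS rates): an unknown ±1 code riding on a carrier vanishes when you square the samples, because (±1)² = 1, leaving a clean tone at twice the carrier frequency.

```python
import math, random
random.seed(42)

fs, fc, n = 1000.0, 50.0, 1000            # sample rate, carrier freq, samples
t = [i / fs for i in range(n)]
code = [random.choice([-1, 1]) for _ in range(n)]   # stand-in for the W code
sig = [c * math.cos(2 * math.pi * fc * ti) for c, ti in zip(code, t)]

def tone_power(x, f):
    """Magnitude of the correlation of x with a complex tone at f Hz."""
    re = sum(xi * math.cos(2 * math.pi * f * ti) for xi, ti in zip(x, t))
    im = sum(xi * math.sin(2 * math.pi * f * ti) for xi, ti in zip(x, t))
    return math.hypot(re, im)

sq = [xi * xi for xi in sig]              # squaring wipes the +/-1 code
print(tone_power(sig, fc))                # weak: carrier is spread by the code
print(tone_power(sq, 2 * fc))             # strong: clean tone at 2x carrier
```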
What's really interesting is that there are some commercial receivers out there that will use P(Y) Code even though they can't decrypt it. IIRC the weakness was something along the lines of the lower rate W code made it so that while you couldn't decrypt it you could still use it as a reference to increase the accuracy of the C/A code.
E.g.:
P code 1001011010010110101010100010101
W code 00001111000000001111111100001111
And since it's modulated with the W code this would leak information. Not really sure how off base that part is, that's just a rough explanation I was told many years back.
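A toy model of that explanation, with made-up chip counts: because each W chip spans many P chips, correlating the received signal against the known P code over one W-chip interval still yields a full-magnitude peak either way; its sign leaks the W bit, and tracking doesn't need the W key at all.

```python
import random
random.seed(1)

P_PER_W = 20                      # P chips per W chip, purely illustrative
n_w = 10
p = [random.choice([-1, 1]) for _ in range(n_w * P_PER_W)]   # known P code
w = [random.choice([-1, 1]) for _ in range(n_w)]             # secret W code
rx = [p[i] * w[i // P_PER_W] for i in range(len(p))]         # P(Y)-like chips

recovered = []
for k in range(n_w):
    corr = sum(rx[i] * p[i] for i in range(k * P_PER_W, (k + 1) * P_PER_W))
    assert abs(corr) == P_PER_W          # full-magnitude correlation peak
    recovered.append(1 if corr > 0 else -1)

print(recovered == w)   # True: the slow W bits leak out
```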
Though I'm honestly surprised I've "known" of GPS for this long and never realized there were a variety of codes: legacy (some still operating), modern, military, and more. I'm going to enjoy binge-reading this wiki/math stuff tomorrow.
Yeah. As I understand it, previously it wasn't possible to do dual-band GPS without using the encrypted military P(Y)-code because the unencrypted C/A code was only on the L1 frequency, and for best accuracy you need dual-band to measure and compensate for ionospheric delay.
is there any reason to be even more accurate than a 5 meter radius? I mean, if you are targeting a person in a building, i can't see why it has to be more accurate than that. Navigation of a drone that is less than 5 meters in size may require higher accuracy, i suppose...
The chip doesn't just improve maximum accuracy, it also vastly reduces the deviation when in poor conditions such as inside a building. The technique to do so just happens to also make peak operating conditions more accurate.
Self-driving vehicles. You can buy GPS-based automatic steering systems for tractors right now [1] (there's still an operator on board, so it's not truly autonomous)
Obviously the self-driving cars of the future will also need other sensors as GPS doesn't work under tunnels and bridges.
Airplanes are already using the higher-accuracy GPS+GNSS to land safely on the runway in low-visibility conditions, guided in by the Flight Management Computer. It makes a big difference whether the plane is 5 meters above the runway or zero meters above the runway.
A problem that I usually have with Google Maps is that, when I am driving on (or next to) an elevated highway, the GPS usually does not know if I am on the highway or on the road beneath it.
Knowing which lane of the highway you are in may be useful as well.
Determining lanes for map navigation is a big problem. I had a buddy who was responsible for mass export of Verizon telemetry data to a third-party company. Their "holy grail" was to be able to determine lanes through triangulation in a timely manner (2008-2009ish). Don't know if the problem ever got solved, and it would have only been relevant in dense metropolitan areas.
Guess I'm being downvoted for snark. But the definition of technology is to advance the state of the art. None of us would be here if we thought everything was already good enough.
IIRC the requirements are an AND condition for speed and altitude: >1,000 knots AND >18,000 m. But many civilian receivers implement it as an OR condition, much to the annoyance of the near-space balloon community.
Some will even switch off the radio once one of these limits is reached and not re-enable it until they've been reset, making them a pain to use even if you just want the location data after your payload comes back down within the limits.
EDIT: But yeah the so-called COCOM limits still apply to civilian equipment (esp to a company such as Broadcom). Unsure on the regulations for modifying the firmware on a device that doesn't put these lockouts at the silicon layer. For example: https://github.com/swift-nav/libswiftnav/blob/master/src/pvt...
Leave it to stackexchange to shed some more light on the matter[0].
The original limits appear to be for units available for export, but I would guess manufacturers didn't want to deal with the headache of producing different models for import/export and just slapped the restriction down on everything. The poster does claim the original restriction was indeed an OR[1].
And current restrictions may only be 600 m/s? [2]pg 58
"Immediate access to satellite measurements and navigation results is disabled when the receiver's velocity is computed to be greater than 1000 knots, or its altitude is computed to be above 18,000 meters. The receiver continuously resets until the COCOM situation is cleared."
Look in the right places and you'll find updated firmware images for various receivers which remove the limits.
That kind of limit is implemented in firmware (all the math is done in software), and they aren't generally protected beyond being an obscure architecture. Once you've decoded whatever VLIW DSP architecture they use, finding a simple 'if' condition near the output logic is quite easy.
Selective Availability was turned off in 2000, prior to that error was about 50m. Differential GPS could work around it anyway, so there wasn't a lot of point to keeping it on.
There's usually restrictions that kick in at certain altitudes and/or hardware restrictions if you're going over a certain speed.
Depending on the source of signals used, the military can also just change the precision for civilian bands. But in times of peace / low threat levels, there's not a particular need to do so.
Copenhagen Suborbitals has a blog-post about those limits:
By default all GPS units have some built-in limits (so-called COCOM limits) that prevent them from providing any data if the velocity exceeds 515 m/s at altitudes above 18 km. These artificial limits are built into GPS receivers to prevent bad guys from using them in missiles and other nasty stuff.
More like parties that are not technologically advanced nation states. Russia has its own satellites, China will find 10 ways around the problem, but a few random dudes won't be able to DIY a cruise missile with cheap parts that would be precise enough to be dangerous.
Actually civilian GPS would work fine in low-cost cruise missiles because they fly relatively low and slow. The restrictions are to protect against ballistic missiles.
That's really cool! I'd like to build something like that One Of These Days, when I have more free time.
It also proves the point: if one guy can build a receiver on his own... I mean, I don't want to make it easy for rogue nations and terrorist groups to get ICBM-mounted nuclear weapons, but given the difficulty of building and testing the missiles and bombs themselves, it seems a little silly to expect restricting GPS receivers to accomplish anything.
There is no general relativity involved in calculating your position. Solving for your position is basic geometry; if you want better resolution, have a good model for the effects of the atmosphere on the radio waves.
In "Exploring Black Holes" by Taylor & Wheeler, project A is titled 'Global Positioning System', where they say locating your position accurately depends crucially on General Relativity.
Before the satellites are launched, their atomic clocks have to be tuned to offset for time dilation.
GPS satellites are all tuned this way, and it has the desired effect.
However, for people implementing GPS receivers, the signal arrives already corrected for time dilation. So the people making that part of the system don't have to know about general relativity at all :)
Without correction the drift due to combined coarse relativistic effects would be on the order of 100,000 nanoseconds per DAY.
In fact the first NTS-2 satellite was launched with the clock unadjusted for GR, due to doubters, but after three weeks the synthesizer had to be activated to compensate. The calculated uncorrected drift on that clock alone was 38,000 nanoseconds per day.
As the document notes, to further refine Navstar to greater levels of precision would require tackling yet another set of tougher relativistic effects, plus more mundane atmospheric influences.
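The magnitude of that 38,000 ns/day figure is easy to translate into ranging error, since receivers convert clock offsets to distance at the speed of light:

```python
C = 299_792_458                  # speed of light, m/s
drift_per_day_s = 38_000e-9      # NTS-2's calculated uncorrected clock drift

range_error_m = drift_per_day_s * C
print(f"{range_error_m / 1000:.1f} km/day")  # ~11.4 km of pseudorange error
```

So within a single day, an uncorrected clock would make positioning useless.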
No FPGAs, RPis, or SDRs --- it's mostly done in discrete logic with a 68000 doing the back-end processing. As a bonus, he also implemented GLONASS. Furthermore, the software takes up less than 32KB of ROM.
Note that while well explained, this is a very 90s way of implementing a GPS receiver. As I understand it modern receivers use FFT-based correlators that have much better cold-start performance than the old-fashioned design because they can test very large numbers of possible correlations at once (generally hundreds of thousands of them).
Anecdotally, some Chinese GPS chips for cell phones don't have those restrictions, though they may have others. The restrictions have become dubious anyway, as most nations with that kind of rocket have the facilities to build GPS receivers (OK, I don't know about NK...).
Good luck. You're right but implementing a GPS receiver using a SDR is quite a feat. It's sort of been done but not in a way that would make it practical to use for e.g. a hypothetical terrorist cruise missile.
I assume you're joking. Nevertheless, for missiles, it's more about detecting and correcting early when the error is still manageable. If you go at Mach something, a few meters off course can quickly become a few hundred.
I prefer results to be as close to accurate as possible. I use a dedicated device, not a phone, when I put out game cameras or traps. The more accurate the results, the quicker they are to check.
I guess I'd reverse the question. Why wouldn't you want the greatest level of accuracy you can get? Maybe you want to track your property lines? Maybe you want to accurately mark where your access point is for your septic tank? Maybe you just want to hide Easter eggs or buried treasure?
I'm not sure why anyone would want a tool that is less accurate than it can be, assuming it is also reasonably affordable.
For locating things accurately? Imagine I found a fossil in a rock field and wanted to share its location with you - you'd want more than ±5m accuracy to find it.
As an avid hiker and runner I'm more excited to see what this will do for things like Garmin smartwatches. I run through downtown Boston a few times a week and the accuracy is very bad at times, making it look like you ran much farther than you did.
That's my point, even if you run at a steady pace the instant pace reported by your device tends to jump around quite a bit, even if they smooth it out over 10s. On my Garmin watch, the instant pace typically jumps around by +-20s/mile, which is significant when you're trying to hit a specific pace target.
They don’t in real running. After seeing his page I got super excited. They’re really accurate at the pace you calibrate them to, then drift quickly once that pace changes. So if you calibrate them at 7:30 min/mile, then run 8 min/mile, they’re pretty far off. Especially if you’re a distance runner, or looking for accurate pacing. I’ve tested this both in marathons and on tracks.
All current running watches and phone software use single-point calibration. Now if they curve-fit 2 or 3 points...
Stryd is a calibration-free technology. Motion capture technology is used to precisely track the movement of the foot at different running speeds & on different terrains.
Do you run the same path frequently? When I do, I keep the path the same, as I pre-calculate the distance, and can keep a watch to track my time. Thus I have no need for path tracking.
A GPS unit is useful for assisting dynamically changing routes (e.g. driving) or to track unknown paths (e.g. plotting your hike in snow/darkness). May I ask what you'd need a fine-accuracy GPS unit for during a run?
For runs it's mostly just for fun. I use Strava so it has sections they identify along your run that they rank you against your friends and strangers. I do end up running very similar routes most of the time.
I mostly find the GPS invaluable in hiking. It can be a great thing to have on the top of a mountain in the winter with low visibility. If you get up to a summit and the weather turns, you can have the watch send you back down the way you came. Nice and safe.
The only deficiency I find is accuracy when you are trying to work out at a specific pace on a track.
On a longer run they are probably accurate to within 1% which makes practically no difference but you often find on a track that the inaccuracy multiplies on each lap so if you are doing say 1 mile repeats (4 laps) then by the end of the mile you can be 20 metres out.
It's not a biggie, because you can check visually: if your watch says 0.23, 0.46, etc. as you go past the 402 m mark, you know you need to up your pace because it's reading short. But a more precise distance, and therefore pace, would save a few brain cells when you're trying to concentrate on hitting your paces.
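Those lap numbers are consistent with a ~1% distance error. A quick sanity check (the repeat pace below is just illustrative):

```python
MILE_M = 1609.34          # one mile: 4 laps of a 400 m track plus a bit
err_m = 0.01 * MILE_M     # a 1% GPS distance error over a one-mile repeat
print(f"{err_m:.0f} m")   # ~16 m, in line with "20 metres out"

# At an illustrative 6:00/mile repeat pace, that 1% is also ~3.6 s of
# apparent pace error per mile:
pace_s_per_mile = 6 * 60
print(f"{0.01 * pace_s_per_mile:.1f} s/mile")
```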
I try to run different paths as much as possible. My Garmin Forerunner fitness tracker is reasonably accurate at recording the path and displaying an instantaneous pace, but it still jumps around and gives erratic results at times. So better accuracy would certainly be a welcome improvement.
I've run into a severe issue with Fitbit's app on the iPhone 7.
On my older iPhone, the mile counts were reasonably accurate. On the 7, there's a spurious ~2 miles per hour added at all times.
My guess is that the 7's chip updates position much more frequently, so that small inaccuracies (a couple dozen feet west, then a couple dozen feet east on the next update) make it look like I'm sprinting all over the place in random directions at high speed. No idea why Fitbit doesn't smooth it out a little...
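The smoothing wished for here is cheap to do. A toy sketch (all numbers invented) of why averaging a window of fixes before differencing removes most of the phantom speed for a stationary user:

```python
import math, random
random.seed(0)

# A stationary user whose GPS fix jitters by ~10 m on every 1 s update
# appears to sprint around at random.
fixes = [(random.gauss(0, 10), random.gauss(0, 10)) for _ in range(600)]

def mean_speed(points, dt=1.0):
    dist = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    return dist / (dt * (len(points) - 1))   # m/s

def smooth(points, n=10):
    """Moving average over n consecutive fixes."""
    return [tuple(sum(c) / n for c in zip(*points[i:i + n]))
            for i in range(len(points) - n + 1)]

print(f"raw:      {mean_speed(fixes):.1f} m/s")       # several m/s, standing still
print(f"smoothed: {mean_speed(smooth(fixes)):.1f} m/s")  # much closer to zero
```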
Did a six hour, 4.6 mile hike in the Adirondacks and it tracked me at 18 miles (edit: 13, sorry).
The Fitbit app lets you take a GPS track using the phone's GPS (as the Charge HR doesn't have a GPS chip). It's not step-based.
Standing still for a break still makes my mileage go up, and I can see the blue dot on the map in the Fitbit app jitter around a little - a few feet here, a few feet there.
(That's a ~5 mile hike in actual mileage - and you can see the issue very clearly between "miles" 4 and 5 on the map, where we likely took a long break, and in the longer distance between the downhill miles as we were moving faster)
With the existing L1 signal, multipath signals overlap each other and it's hard to tell which was the original signal.
The newer L5 signal has much shorter chips, so multipath signals will show up as individual peaks, and the receiver can simply pick the first one, because that will be the direct signal.
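The intuition behind "shorter" here is chip length: a correlator can only cleanly separate a reflection whose extra path length is comparable to one code chip. Using the published chipping rates:

```python
C = 299_792_458  # speed of light, m/s

for name, chip_rate_hz in [("L1 C/A", 1.023e6), ("L5", 10.23e6)]:
    chip_m = C / chip_rate_hz
    print(f"{name}: {chip_m:.0f} m per chip")
# L1 C/A: ~293 m per chip, so typical urban reflections blur into the main peak
# L5:     ~29 m per chip, so most reflections resolve as separate, later peaks
```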
You can see how off the trace can be here: https://i.imgur.com/eHJI8FH.jpg . You could probably do some smoothing to help fit some of this to a street but I've been on runs where I zig zag down streets just to add distance or to do hills.
Wow, that's pretty horrible, must be giving you close to twice the actual distance. I think Google Maps on my phone applies some smoothing that does this a lot better, but then it can lag for a while when I'm going sharp right in an intersection.
Maybe try telling it that you're biking instead, my guess is they do more smoothing the higher speed they expect.
It's pretty much raw GPS if you're in walking or biking mode. Driving mode uses sophisticated filtering algorithms, and doesn't actually make any assumptions about your speed, though it does assume you're on a road.
I have a question for you. If GPS locations in cities are affected by reflections off buildings, wouldn't the effect be deterministic and could you take advantage of that to reduce the noise?
What makes anyone confident in the accuracy of wearable tracking devices? I haven't seen it verified and I wouldn't assume it: Accuracy is expensive, generally speaking, and few consumers will pay for it or even question it - how often have you heard someone mention it?
Or, for example, have you wondered how accurate your simple bathroom scale is? I looked into it a little, wanting to collect accurate health data: IIRC +/- 0.1 kg (~0.2 lbs) is available in <$100 scales, but for real precision you need the $500 scales at your doctor's office. And how consistent is it? I tested mine, a decent one with good reviews, and getting it to report the same weight in immediately consecutive measurements was a challenge; I had to stand on it in just the right way.
Here are a couple articles that found the accuracy of activity trackers wanting. I've seen other articles in places like ZDnet that found the same problems, though based on less rigorous research.
I'm not sure what the value of more accuracy would be in a bathroom scale - your weight varies by far more than 100 g just based on what you've eaten and how much water you've retained.
A guess: Two uncertain data points makes it even worse. If you can at least depend on the scale, the error is lower.
I tested our old Nintendo wii (or whatever it's called) balance board and so far it gives the same value every time I step on it. The value was also only 0.2 kg off from the doctors expensive scale.
The > at the end of your sporttechie link is making it 404, but here's the important part:
“What the plaintiffs’ attorneys call a “study” is biased, baseless, and nothing more than an attempt to extract a payout from Fitbit. It lacks scientific rigor and is the product of flawed methodology. It was paid for by plaintiffs’ lawyers who are suing Fitbit, and was conducted with a consumer-grade electrocardiogram – not a true clinical device, as implied by the plaintiffs’ lawyers. Furthermore, there is no evidence the device used in the purported “study” was tested for accuracy.”
Fitbit’s research team rigorously researched and developed the technology for three years prior to introducing it to market and continues to conduct extensive internal studies to test the features of our products. Fitbit Charge HR is the #1 selling fitness tracker on the market, and is embraced by millions of consumers around the globe.
Consumer Reports independently tested the heart rate accuracy of the Charge HR and Surge after the initial lawsuit was filed in January and gave both products an “excellent” rating. We stand behind our heart-rate monitoring technology and all our products, and continue to believe the plaintiffs’ allegations do not have any merit. We are vigorously defending against these claims, and will resist any attempts by the plaintiffs’ lawyers to leverage a settlement with misleading tactics and false claims of scientific evidence.
Good point (and sorry for the link, which I'm too late to fix). However, if we take Fitbit's statement at face value, we should infer that Fitbit is as biased and unreliable as the other side, so I wouldn't write off that research.
Accuracy vs. precision: they can pin your position to within 30 centimeters, but how small a change can be detected? Can they detect relative movement on the order of 1 cm or less? That would be really useful for some things. For example, if you wanted to measure the distance between two points on the ground, you don't care about offsets but do care about precision.
You mean absolute vs. relative measurement. Accuracy vs. precision is a different concept.
It is possible that a setup with high-precision absolute measurements can produce high-accuracy relative measurements, but the former does not imply the latter (imagine, for instance, any source of error in absolute measurements that's locally non-linear).
>Can they detect relative movement on the order of 1cm or less?
The short answer is no, largely due to localised ionospheric conditions. The longer answer is yes, GNSS systems can be enhanced in accuracy using RTK [0][1]. This allows for precision on the order of 1cm, but requires a (rather expensive) fixed position base station to provide corrections. Surveyors, precision ag, and other large machine control (think dozers, graders, etc) have used this technique for precision positioning for the past decade or so.
I do recall reading that upcoming GPS satellites add a second civilian frequency, which I believe will enable these sort of ionospheric corrections to be done more easily and inexpensively.
I don't know quite enough about the technical details to be sure, but RTK can definitely go over IP networks with NTRIP and other protocols. The biggest limitation with RTK is that the corrections are limited to 10-15 miles from the base/reference station (more is possible, generally with less accuracy), since ionospheric conditions vary enough to make the corrections highly location dependent.
The antenna systems on RTK rovers (the devices receiving corrections, which also receive the GPS satellite signals) aren't huge, but I'm not sure if they could be reduced in size enough to fit in a smartphone.
The technology in good receivers like the Piksi is slowly drifting into consumer-level hardware. You need to correct for such errors as:
1. Jitter in the code phase vs. phase-locked loop. As the article describes, most existing codes are long and slow, with chip lengths on the order of 300 m (a 1.023 MHz chipping rate). The receiver can measure the exact time at which this code changes to get your existing resolution, but the L5/E5 signals run about ten times faster and occupy roughly 20 MHz of bandwidth.
2. Integer offsets when interpreting the carrier phase. The above slow codes are transmitted on a carrier at 1-2 GHz, giving much higher resolution, but there's no information in the signal as to which particular cycle of the carrier you're observing. Comparing to a known location, and improving your guess over time, can let you make use of this information.
3. Unknown, slowly drifting ionospheric delay. Typically fixed in industrial or agricultural applications by using a base station and radio link to tell your remote link that the base station (which is bolted to a big chunk of concrete) is now reporting that it's 20 cm from where it was an hour ago, and the remote unit should probably just adjust any measurements by that much.
I only have enough knowledge of the system to be dangerous, but I've wondered whether it would be possible to correct for #3 at a consumer level with a phone app. If you had thousands of phones cooperating in a city, at any given time many would be stationary, even charging or on wifi, and you could theoretically trade off roles as reference base stations and remote receivers. I think it would require a lot more low-level access to the GPS chip than generic Apple/Android phones give you, but it's an idea - feel free to take it and run if you like it.
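Item 2 above (the carrier-phase integer ambiguity) can be sketched numerically; the satellite range below is made up:

```python
C = 299_792_458
F_L1 = 1575.42e6              # L1 carrier frequency, Hz
wavelength = C / F_L1         # ~0.19 m

# The carrier-phase observable is range = wavelength * (N + frac), where
# frac is the measured fractional phase and N is an unknown whole number
# of cycles. Resolving N is the "integer ambiguity" problem; once it's
# known, the phase gives centimetre-or-better resolution.
true_range_m = 20_200_123.456             # hypothetical geometric range
N = int(true_range_m // wavelength)       # unknown to the receiver
frac = true_range_m / wavelength - N      # what the receiver measures

print(f"wavelength: {wavelength * 100:.1f} cm")
print(abs(wavelength * (N + frac) - true_range_m))   # ~0 once N is resolved
```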
To answer the basic question - the common NMEA protocol returns GPS data with latitude formatted as DDMM.MMMMM (degrees, minutes, and decimal minutes) and longitude in DDDMM.MMMMM format. One arcminute of latitude is about 1852 m, so the fourth decimal digit of minutes corresponds to roughly 0.19 m and the fifth to about 2 cm - but that's the resolution of the format, not the accuracy of the fix.
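A minimal parser makes the format concrete (the field value here is a commonly cited NMEA example):

```python
# NMEA latitude parsing plus the resolution of the last digit.
# 1 arcminute of latitude is ~1852 m (one nautical mile).
M_PER_ARCMIN = 1852.0

def nmea_lat_to_deg(field: str) -> float:
    """Parse an NMEA latitude field like '4807.0380' (48 deg, 07.0380 min)."""
    return int(field[:2]) + float(field[2:]) / 60.0

print(f"{nmea_lat_to_deg('4807.0380'):.6f} deg")   # 48.117300
print(f"{M_PER_ARCMIN * 1e-4:.3f} m")   # ~0.185 m per 0.0001-minute step
```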
Indeed, carrier phase and multi-frequency+constellation measurements are drifting into consumer hardware, but I can say that it's probably going to be a Good Long While before we fully realize their benefits. The use of cell phones (and cheap cell phone antennas) is an active area of research, particularly for organizations aiming to make high-accuracy GNSS positioning widely accessible outside of specialty markets (i.e., agriculture and surveying). It's an interesting area to be in, for sure:
The base station network density question @ac29 points out later is a compelling reason for some of this.
(Shill: We're hiring for firmware engineers right now, focused on a variety of different areas: embedded Linux, DSP basebands (C), and navigation algorithms (C++) design and implementation. See https://jobs.lever.co/swift-nav for more details or email jobs@swiftnav.com.)
Iono delay is typically corrected by using a dual-frequency receiver so that it can be calculated directly via difference in arrival times of the time pulse on each frequency. L2C can help here.
It's quite normal for words which might seem to have the same meaning in the common vernacular to have nuanced meanings within a domain. This doesn't just apply to science and engineering - consider law, medicine, religion, food service...really just about anything you can think of. And it's not just an English language phenomenon, either: two words that mean the same (exact, identical, similar) thing really implies there's a difference between the words.
And yeah, as a test engineer, metrology can be confusing, but that's because it's necessarily complex, not because of silly choices.
In seismic acquisition (when you acquire "pictures" of the subsurface, much before the drilling part) you need accurate positioning of your sources & sensors.
Geological mapping (actually all mapping) is easier and more accurate with high precision high accuracy position data. The more accurate your map the better your geological model etc.
Also drilling is done to extremely high precision these days.
Edit: Bear in mind the industry uses differential GPS for some tasks - this can give mm/cm precision.
The other answers here are probably right about the use case here, but keep in mind that "oil/gas exploration" is often among the first uses of a new technology.
There's a ton of money in oil/gas, and any technology that can give a slight edge is often worth it. For example, oil and gas companies were some of the first to use Iridium (worldwide satellite internet at ridiculous prices), and I'd bet that they are the biggest users of the Iridium network to this day. This goes for inventions in tons of fields.
For example, new breakthrough in diving: great, you can use that to work on underwater parts of drilling rigs. New breakthrough in robotics? Great, you can replace the expensive and dangerous dive crews with underwater ROVs. New breakthrough in geology? Cool, you can use it to find oil.
Seismic exploration of new oil fields requires both accurate geographic positioning and, maybe more importantly, accurate time synchronization. If you place a ton of sensors in the ground and then pound the ground or set off explosives, you need to know exactly where and when the echoes of the sound waves were picked up.
I'm pretty sure this is it. They have a drill that is essentially an anchor and the rest of the platform is essentially a floating quadcopter maintaining position.
I may just be a super-cynical ad-tech exile, but I see this as a path to getting shown more ridiculously well-targeted ads based on the places and stores I've been in. Beacons haven't taken off and I'm glad they haven't. I'd really prefer a GPS in my phone with a similar granularity to the one I have now. Google's notifications that I should review places I've been are creepy enough.
It's almost as though using a phone whose operating system is created and maintained by an advertising company isn't the best idea if you want to keep things like your location private.
For what it's worth, all operators have a legal obligation to gather these data (at least in the U.S. and Canada) officially for emergency services purposes.
Those emergency services are very convenient when it comes to flood warnings, tornado warnings, and even amber alerts. I am personally very glad to have them.
Apple is also an advertising company - they have unique IDs with your personal information pinned to them, and they sell ads based on your location, too.
You can change your advertising ID any time you want -- and almost none of their revenue comes from ad sales. In 2016, $190B of their $215B in sales was definitively not from ad sales, and the $25B services category that would include ad sales also includes the percentage of sales from the App Store, Apple Pay, Apple Care, iCloud.. etc. Ad sales are immaterial to their business model.
Immaterial would be the wrong word. Negligible might be better; negligible right now would be perfect. Let's not be naive: they have a full advertising department trying to monetize their platform and increase revenue. They also tried to compete with Google in in-app advertising for six years, but failed. They are still trying to expand their advertising in other areas though:
And in any case, a company that makes 99.9% of its revenue off non-ad sales would never be categorized as an advertising company. My grocery store is paid to put Crest at eye level; that doesn't make it an advertising company...
You're right, I wouldn't say your grocery store is an advertising company because they put Crest at eye level - I'd say they're an advertising company because they likely sell your loyalty card data to 3rd parties for $$$. Adtech is eating everyone's data.
IDFA is still going strong on Apple phones and is likely to continue into iOS 11.
There's already much finer granularity in stores, thanks to wifi-based tracking (even if you don't join the network [1]), so this ship has sailed.
Refining GPS accuracy only modestly improves the ability to track what stores you are near. Wifi-assisted GPS tracking that exists today probably tells advertisers everything they need to know already.
The toggle for that notification is in the Maps app under Hamburger > Settings > Notifications > Your Contributions. (If you think that's buried, you should see the developer menu.)
The difference between 5m accuracy and 30cm accuracy isn't terribly important if you're just trying to tell what building someone is in, and it won't improve in-building accuracy. I can't think of anything sketchy that it enables that someone couldn't do already.
If anyone works in the field, I'd be interested in how this dual signal (L1/L5) chip interacts with the RTK/CPGPS [1] (which improves accuracy down to 1-3cm already (at the cost of using multiple GPS units)).
RTK is commonly used in survey drones, and kits are available for about $1K.
RTK uses a fixed base station; ideally you know exactly where it is, so that you also know the exact location of your mobile unit (otherwise you still get a pretty good relative position).
With the new chips that Broadcom, u-blox and others are bringing to market now, you are no longer dependent on a base station to get cm-level accuracy.
Instead, they exploit the different properties of the L1/L5 frequencies to infer things about, e.g., the atmosphere, since the different frequencies are altered in different ways while travelling to Earth[2]. The system can thus reduce its margins of error[1] in the position calculations.
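For reference, the textbook way dual-frequency receivers cancel the atmosphere's effect (I can't speak to exactly what these chips implement) is the "ionosphere-free" combination: ionospheric delay scales as 1/f², so pseudoranges on two frequencies let you solve the delay out. A minimal sketch with made-up numbers:

```python
# Sketch of the classic dual-frequency "ionosphere-free" combination.
# Ionospheric delay scales with 1/f^2, so combining pseudoranges measured
# on two frequencies cancels the first-order ionospheric error.
F_L1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F_L5 = 1176.45e6  # GPS L5 carrier frequency, Hz

def iono_free_pseudorange(p_l1: float, p_l5: float) -> float:
    """First-order ionosphere-free combination of two pseudoranges (meters)."""
    g = (F_L1 / F_L5) ** 2
    return (g * p_l1 - p_l5) / (g - 1)

# Made-up example: true range 20,000 km; the ionosphere adds 10 m of delay
# on L1 and 10 * (f1/f5)^2 ≈ 17.9 m on L5 (delay scales with 1/f^2).
true_range = 20_000_000.0
iono_l1 = 10.0
iono_l5 = iono_l1 * (F_L1 / F_L5) ** 2
p1 = true_range + iono_l1
p5 = true_range + iono_l5
print(iono_free_pseudorange(p1, p5) - true_range)  # iono term cancels out
```

(Other error sources -- multipath, receiver noise, orbit error -- don't follow the 1/f² law, so they survive this combination; that's where the L5 signal structure itself helps.)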
So they are not "interacting" with RTK in any way.
The chips are also much cheaper than your quoted $1K price tag.
Disclosure: I work for u-blox, but I'm not an expert in GNSS calculations.
Also, RTK requires tracking the carrier phase. And this in turn requires the receiver to be kept on all the time, which is power hungry and just not acceptable for low-power / on-battery applications.
Whereas the dual-frequency approach can be used in the same way as "regular" GNSS, using all the low-power tricks to sleep as much as possible (with some accuracy vs. power trade-offs). Of course there will be a power consumption penalty vs. a single-frequency receiver: the two RF chains, and the extra baseband processing. The latter can be mitigated by better process nodes (the article mentions the chip being 28nm, so low dynamic power). The impact of the two RF chains, of course, can't be avoided. But for some applications it may be worth it.
I guess my question was more specifically can these be used together for even better accuracy? e.g. if standard RTK on L1 is 1-3cm accurate, and L1/L5 is 30cm accurate, will using two of these new L1/L5 chips in an RTK configuration increase accuracy even more (since they are using different methods of error correction) or are there inherent limits to resolution? Appreciate any insight you might have, thanks.
15cm GPS accuracy has been available commercially since about 2002. Trimble and Novatel both sold units. But you had to get a subscription to a service which had a network of ground stations measuring propagation errors through the atmosphere. The correction signals came in from a geosync satellite. You also had to be able to see about five satellites.
More specifically, you can use differential GPS for things like surveying, with a reference receiver at a known location. Depending on how far away you are from the reference station, you can get even better than 15 cm accuracy, down to just a few inches of error.
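The core DGPS trick can be sketched in a few lines (toy single-satellite example, made-up numbers): the reference station knows its surveyed position, so the difference between its geometric and measured range is the shared atmospheric error, which a nearby rover can subtract out:

```python
# Toy sketch of differential GPS (DGPS): a reference receiver at a surveyed
# position computes a per-satellite pseudorange correction, which a nearby
# rover applies. Positions and error values are made up for illustration.
import math

ref_pos = (0.0, 0.0, 0.0)                        # surveyed station position (m)
sat_pos = (20_000_000.0, 5_000_000.0, 3_000_000.0)

# Receivers close to each other see roughly the same atmospheric delay.
common_error = 7.3                                # meters, shared delay

ref_measured = math.dist(ref_pos, sat_pos) + common_error
correction = math.dist(ref_pos, sat_pos) - ref_measured   # = -common_error

rover_true = (1000.0, 2000.0, 0.0)
rover_measured = math.dist(rover_true, sat_pos) + common_error
rover_corrected = rover_measured + correction             # shared error cancels

print(rover_corrected - math.dist(rover_true, sat_pos))   # residual error ~0
```

In reality the correction degrades with baseline length because the two receivers stop seeing the *same* atmosphere, which is why accuracy depends on distance to the reference station.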
That's more or less how this works. When you can see both the C/A and L5 signals, you can do some magic to help compensate for anything going on in the ionosphere.
Down to 30cm - good! Now all we need is a GPS-enabled app for urban pedestrians that reminds them to look left or right just before they step off a curb into oncoming traffic.
Actually, I was part of a team that built a GPS-based walking navigation system for the blind (Google "Microsoft Cities Unlocked"). The limits of 5m accuracy were an incredible danger that prevented us from doing any unassisted trials.
Even highly trained and skilled cane users would wander into the roadway usually within 10-15 minutes of using the system, purely because of GPS drift. And forget about ever using it in urban centers. In London, Boston and NYC it was rare to be more accurate than 50-100m. It's scary how bad GPS accuracy really is.
Or a running app that creates a route as you're running. Say you want to stop for a car, so you turn and keep running in the new direction: it could recalculate immediately and get you to one of your waypoints more quickly.
The Siri/Android problem doesn't seem to happen with the dedicated GPS nav systems I have. Maybe it's a sample rate problem with phones (trying to save power), or just better smoothing or relative plotting in the dedicated receivers. 5m is less than the distance between most highways and their frontage roads, so blaming the GPS signal itself for being unable to determine whether you're on the frontage road 15 seconds after you got on the highway is pretty lame.
I frequently run my ten-year-old TomTom alongside my wife's iPhone or my Android on road trips. The TomTom is pretty much always dead-on with respect to lanes/turning/etc., while the iPhone/Android regularly feeds us crap. That said, I would kill for Google Maps (PC browser version) style route planning on the TomTom, where you can click various roads and get time estimates even if Google doesn't think it's a good route.
The vast majority of phones out there support GPS and GLONASS (as well as ESA's Galileo), and have for years. That's not new to the iPhone 8 (surely the iPhone has had this before as well, right?)
Using the other constellations helps you see more sats, but it doesn't really help deal with the reflection issue in urban environments. The "big deal" about this chip is that it uses the new L5 signal (and its equivalents in other constellations). Previously that has only been available in very expensive hardware.
> In a city, the satellite’s signals reach the receiver both directly and by bouncing off of one or more buildings. The direct signal and any reflections arrive at slightly different times and if they overlap, they add up to form a sort of signal blob. The receiver is looking for the peak of that blob to fix the time of arrival.
> However, L5 signals are so brief that the reflections are unlikely to overlap with the direct signal. The receiver chip can simply ignore any signal after the first one it receives, which is the direct path.
Can someone explain this? Surely the first signal received will always be the direct signal, how could you receive signals from reflections first?
The key word here is "overlapping". I think the "old" L1 signal was long enough for the direct signal to arrive first, then have the reflected signal arrive while the direct signal was still being received. This would create the "signal blob".
So yes, the direct signal always arrives first, but it gets messed up during reception by the reflected signal. The L5 signal elements are so short that the likelihood of reflections overlapping is reduced. Think: each code chip starts and finishes in the time it takes a radio wave to propagate about 30 meters through air, versus roughly 300 meters for the old L1 C/A code.
At no point does it say it receives reflections first. It says that L1 signal elements are so long that direct and reflected signals merge and are received as a single signal.
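The relevant lengths fall straight out of the published chipping rates (1.023 Mcps for L1 C/A, 10.23 Mcps for L5), so a reflection only separates cleanly from the direct path when its extra travel distance exceeds one chip:

```python
# Back-of-envelope: chip length (shortest code element) for the L1 C/A and
# L5 codes. A reflection delayed by more than one chip no longer overlaps
# the direct signal's chip and can be ignored by the receiver.
C = 299_792_458.0          # speed of light, m/s
CA_RATE = 1.023e6          # L1 C/A chipping rate, chips/s
L5_RATE = 10.23e6          # L5 chipping rate, chips/s

ca_chip_m = C / CA_RATE    # ≈ 293 m of path difference to separate multipath
l5_chip_m = C / L5_RATE    # ≈ 29.3 m
print(f"L1 C/A chip ≈ {ca_chip_m:.0f} m, L5 chip ≈ {l5_chip_m:.1f} m")
```

So with L5, a reflection off a building ~15+ m of extra path away is already distinguishable, whereas with L1 C/A it would need ~150+ m, which is why urban multipath smears L1 so badly.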
I wonder if we'll see these in Microsoft's 2018 HoloLens. 1ft accuracy + spatial mapping = a global spatial mesh with countless commercial applications.
Does anyone know if we will be able to access the OS-NMA messages and manually verify them, or if the chip just discards any forged messages?
The reason why I ask: If we can get highly accurate and signed timing data it will have a huge impact on distributed systems. If the signature is accessible from the API it could be included in a DB transaction that every node could verify.
The current chipsets for enthusiasts cost ~$600-1200: http://www.rtklib.com/
I hope these come in decent packages so they're usable for enthusiasts, and are cheaper.
This is great, and will definitely cut down on collateral damage and allow for precision strikes when causing enemies of the state to become room temperature.
This seems like a great step. I had no idea the accuracy was only 5 meters right now. My phone feels like it is more accurate than that (like 2 meters), but I suppose that is because software does a good job of guessing where you actually are?
It's awesome that it is half the power of current chips too. Because I think smart phone users care more about battery life than accuracy.
Edit: Changed my messages because I reread the article and caught something I missed the first time. I missed the power usage statement the first time.
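That "guessing" is basically filtering: phones fuse raw fixes with motion sensors, Wi-Fi, and map matching. Even the simplest version of the idea, a 1-D Kalman filter over noisy fixes of a stationary point, shows why the result looks tighter than any single ~5 m fix (toy numbers, not how any particular phone does it):

```python
# Toy illustration: averaging noisy GPS fixes with a 1-D Kalman filter
# shrinks the apparent error well below the ~5 m noise of each raw fix.
import random
random.seed(42)

true_pos = 100.0             # the (unknown) true position, meters
est, var = 0.0, 1e6          # initial estimate and its variance
meas_var = 5.0 ** 2          # each raw fix has ~5 m standard deviation

for _ in range(20):
    z = random.gauss(true_pos, 5.0)   # simulated noisy GPS fix
    k = var / (var + meas_var)        # Kalman gain
    est = est + k * (z - est)         # pull estimate toward the measurement
    var = (1 - k) * var               # uncertainty shrinks with each fix

print(round(est, 1))  # much closer to 100.0 than a typical single fix
```

Real phone positioning is far more elaborate (motion models, sensor fusion, snapping to roads), but the principle is the same: many noisy measurements plus a model beat the raw per-fix accuracy.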
Yeah, the intro to the article made it pretty clear that this writer doesn't know the first thing about mobile technology. I think, in his mind, "Siri" is the name for mobile text to speech.