
This looks like it uses the dev options to fake it, which I believe geolocation apps can detect (I assume the mock location setting is meant for testing how apps behave in certain locations).

That being said, I have tried this with banking apps, and they aren't fooled by it, so I'm guessing Android flags this as a "mock" location rather than a real one.

Like you said, if you really want to fake it, a Faraday cage and a fake GPS signal would probably be necessary.




Yes, this is the case. I cannot remember the details, but the OS makes applications aware that location mocking is turned on.


You can just call .isMock() on the location object you receive: https://developer.android.com/reference/android/location/Loc... - without root that can't be bypassed.
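For concreteness, a minimal Kotlin sketch of that check (assuming the standard Android SDK; isMock() needs API 31, older versions expose isFromMockProvider()):

    import android.location.Location
    import android.os.Build

    // Minimal sketch: report whether a received fix came from a mock provider.
    // Location.isMock() exists from API 31; isFromMockProvider() covers older releases.
    fun isMockFix(location: Location): Boolean =
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
            location.isMock
        } else {
            @Suppress("DEPRECATION")
            location.isFromMockProvider
        }

Without root there's no way to make that return false for a mocked fix, which is the point.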


That's a pretty anti-user feature if you ask me.

Especially for a device as personal as a smartphone.

There needs to be legislation that prevents manufacturers from overriding the will of the user at the behest of app makers for devices like this.


Having a smartphone provide a hard-to-fake location is a pretty valuable feature. A lot of businesses depend on the fact that location data is hard to fake.

Consider caller ID - legislators around the world are working on making it harder to spoof your identity, because there's so much fraud going on with fake caller IDs.

It's the same with location. Being able to easily fake location would open the door to so many frauds...


You're basically arguing for https://en.wikipedia.org/wiki/Trusted_Computing . You're saying that the manufacturer should have more power than the consumer, that the consumer cannot run arbitrary code, that the consumer cannot examine and disassemble the manufacturer's code.

Even if the device is unmodified, you can still spoof GPS signals by generating them in a box: https://www.reddit.com/r/electronics/comments/4unzp2/cheatin... , https://www.youtube.com/watch?v=9mC71c6zRUE . That's why I think "trusted computing" is pointless.


We basically already have that on smartphones. Both Android and iOS have remote attestation, and a significant number of apps use it to refuse to run on devices with anything but an unmodified first-party OS.

I was surprised there wasn't a bigger outcry over it in the tech world.
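For reference, this is roughly what the check looks like on the app side with Google's Play Integrity client library (a hedged sketch; the nonce generation and server-side verdict decoding live on the app's own backend and are omitted, and the function name is illustrative):

    import android.content.Context
    import com.google.android.play.core.integrity.IntegrityManagerFactory
    import com.google.android.play.core.integrity.IntegrityTokenRequest

    // Sketch of a classic Play Integrity request. The returned token is a signed
    // blob the app's backend decodes to decide whether the device/OS counts as
    // "trusted" enough to serve.
    fun requestIntegrityVerdict(context: Context, nonce: String, onToken: (String) -> Unit) {
        val integrityManager = IntegrityManagerFactory.create(context)
        integrityManager
            .requestIntegrityToken(IntegrityTokenRequest.builder().setNonce(nonce).build())
            .addOnSuccessListener { response -> onToken(response.token()) }
            .addOnFailureListener { _ -> /* e.g. no Play services on the device */ }
    }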


As someone who habitually roots my Android phones, I'm always somewhat annoyed when I can't use features like tap-to-pay, but I'm really annoyed when apps refuse to start, especially when they are for things like McDonald's. I shouldn't need to have a known-trusted operating system to buy a burger.


Be sure to give them 1-star reviews.

I've found that the Play Integrity Fix module for Magisk usually solves it, though there are a couple exceptions. They still earn a negative review for the attempt.


> I shouldn't need to have a known-trusted operating system to buy a burger.

That's for the app developer to decide, no?


Why should it be?

It's a recent change that app developers even have the ability to know this, and it represents a massive transfer of power away from users to app developers and OS vendors.


Well in a purely laissez-faire sense, of course. They can legally decide to refuse service to anyone for any reason, with a few narrowly protected exceptions like race. But that doesn't mean they _should_. They could choose to refuse service to anyone who isn't wearing a tux, or anyone who refuses to sing the national anthem, but they shouldn't do that either, and not just for the obvious capitalist reason that those actions would cost them business.

I guess what I'm saying is that I see some degree of reasonableness in a bank or a mobile game enforcing some Trusted Computing paradigms, even if I don't like it. Banks have to worry about real money fraud, and games worry about cheating. In my opinion, the privacy and user agency tradeoffs are not worth it, but I see why they do it. For someone like McDonald's though, I just do not see any reason that they'd need this level of trust in their customers.

Buying fast food is historically a very low trust, transactional deal. Why does McD need to be able to ensure my device integrity to offer this? Starbucks doesn't need to do so, and they have a loyalty program with stored value and payment reload in the app.


Does the McD app have saved payment credentials?


I doubt it (if McDonald's saves credentials, it's likely some sort of token on their servers rather than plaintext in the app), but that wouldn't change anything, as I am okay with running an app that saves payment credentials on my rooted phone.

Indeed, I'm okay with doing whatever I want, within standards of human decency, with my owned device and my owned bits therein. I don't see where McDonald's desires factor into what I do with either.

Their technical capability of imposing control over how people use their own devices isn't self-justified, or justified at all.


Just saying that if they do, maybe not you, but someone will eventually go "I saved my cards into the McD app and got a surprise $LARGE_AMOUNT bill" because of their mobile platform not enforcing the isolation.


> I doubt it (if McDonald's saves credentials, it's likely some sort of token on their servers rather than plaintext in the app)

But even in that remote possibility, I think it's even less likely that many folks sophisticated enough to root their phone would ever have that complaint.

I'm happy to be proven wrong with a sufficient amount of such complaints about the McDonald's app.


> They can legally decide to refuse service to anyone for any reason, with a few narrowly protected exceptions like race.

And because an online store can't openly discriminate on race, these protections are effectively voided: they can refuse based on proxies for race (for example, using location to determine whether you're likely to cheat the delivery, reverse a credit charge, commit fraud, etc.). It might sound reasonable to try to prevent the fraud before it happens, but it is an abuse of a position/power they should not have.

The balance of power between a user and a service provider in the digital realm is swinging towards the service provider. This needs to be addressed.


There is also the baseband processor, which is completely locked away from user access. Running a wireless network would be much harder if users could access it. So I guess freedom and working tech are trade-offs in this case.


That the client isn't trustworthy is a pretty fundamental rule of network security. Attempts to circumvent that rule are making it so users can't trust their own devices, and that's a dark path to go down.


So how do you stop Google or Samsung from using your location data without your consent?

- Not using GPS? Not an option because you need it

- Disabling permissions? Not possible for "system apps"

- Having the 10% of privacy-aware people block location somehow (via a rooted phone or a different distribution)? That doesn't help the other 90%.

IMO the only solution is to poison the data with fake locations.

Are there other options I missed?
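For what it's worth, this is roughly what the "poison it" option looks like with Android's test/mock provider API (a sketch under assumptions: the app must be selected as the mock location app in developer options, and, as discussed above, receiving apps can still see the mock flag):

    import android.location.Criteria
    import android.location.Location
    import android.location.LocationManager
    import android.os.SystemClock

    // Hypothetical sketch: publish a fake GPS fix through the test provider API.
    // Works only for the app chosen as "mock location app" in developer options.
    fun pushFakeFix(lm: LocationManager, lat: Double, lng: Double) {
        lm.addTestProvider(
            LocationManager.GPS_PROVIDER,
            false, false, false, false,  // requires network/satellite/cell, monetary cost
            true, true, true,            // supports altitude/speed/bearing
            Criteria.POWER_LOW, Criteria.ACCURACY_FINE
        )
        lm.setTestProviderEnabled(LocationManager.GPS_PROVIDER, true)

        val fix = Location(LocationManager.GPS_PROVIDER).apply {
            latitude = lat
            longitude = lng
            accuracy = 5f
            time = System.currentTimeMillis()
            elapsedRealtimeNanos = SystemClock.elapsedRealtimeNanos()
        }
        lm.setTestProviderLocation(LocationManager.GPS_PROVIDER, fix)
    }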


> A lot of businesses depend...

Do I care? Like, not to be glib, but as an end user buying a phone for my personal use, I don't care about their businesses, and I loathe the idea that their business model requires such an anti-feature to be widely deployed in personal devices such as smartphones.

Tell you what. I have a business model that requires your personal location data. Be a dear and send it to me.

And again, why do I care about caller ID? It's been trash for years. I just never answer calls and use different platforms such as Signal to communicate with my friends.

It may open the door to so many frauds, but it opens the door to so many more abuses.

People will talk about these 'features' differently the first time a large genocidal action takes place that makes use of this data.


Ride-hailing apps rely on the fact that both customers' and drivers' phones don't lie about their location.

Mapping companies rely on the fact that their crowdsourced data is reliable.

Emergency services rely on the fact that phones share accurate locations.

Delivery companies require authentic location data from their agents.

Apps that allow people to rent scooters or bicycles rely on non-fake location data.

If you made it easy to provide fake location data, a lot of apps would suddenly have to deal with a whole new class of fraud. I just don't see how this would be a net beneficial change.


You can separate all of those examples into one of two categories: one in which the owner of the phone has a vested interest in providing accurate location and an easy means to enforce consequences for undesired behaviour (I want my ride to find me, I want emergency services to help me, I want to find a bike near me), and another in which a company hopes to use someone else's information for free for their own benefit: delivery companies with a "bring your own phone" policy, or crowdsourcing companies hoovering up free data to build their value.

Since I paid for my phone, I really don't see why it's incumbent on me to help the latter case out. A delivery company is free to supply a managed device or fit a device to their trucks. A crowdsourcing company should in any case already be assuming all client data is suspect and ensuring it's properly correlated with other sources of information.

There's no fraud here. The worst case outcome in your list is perhaps a delivery agent just having their phone say they've done a particular route when they just sat in the movie theatre or whatever, but the fact that none of their packages got delivered is enough to out them even if the shipping company is trying to get location data for free from a device the shipping company doesn't even own.

If I hire a bike I'm nowhere near, I still pay for it. Denial-of-service attacks in which the perpetrator is fully responsible for the cost of resources consumed just aren't a thing that happens.

The only group involved in these scenarios that's taking adverse action against another party for their own gain is the entitled companies demanding users' phones be locked down so they can continue to get material gain from them without compensation. The net benefit here is that these companies' profits should definitely be taking a back seat to users' rights.


I think you are stuck in an "us vs. them" mindset. It's not all consumers vs corporations.

Consumers actually want to use services offered by companies, and willingly accept what you consider drawbacks. We want accurate traffic data in maps, and crowd-sourced key finders, and all the other conveniences afforded by unspoofable location data, and we give up a little bit of control in exchange for that.

Companies don't compensate us with money for crowdsourced data; they compensate us by offering services they couldn't offer otherwise.


Crowdsourced data is not individually reliable, and aggregation already accounts for invalid and incorrect submissions. We can have our cake and eat it too.

The ‘us vs them’ mindset comes from ‘them’ determining what is acceptable for ‘us’ to do purely because it’s good for ‘them’.

If I want to set my location to somewhere I am not, and the only reason I can’t is corporate interests, then a battleground has been drawn up by those interests, not me.


What I am trying to say is that there is no strict difference between "user interests" and "corporate interests".

Companies in general don't want reliable data for the fun of it; they want it in order to provide services that users want.


> Apps that allow people to rent scooters or bicycles rely on non-fake location data.

There's a way to fix this. Each bicycle can store a private key, and your phone needs to talk to the bike nearby to do a live challenge-response before you can rent it out.
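A rough sketch of that idea (names and key choices are mine, not any rental operator's actual protocol): the backend issues a random nonce, the bike signs it with its embedded key, and the backend verifies against the public key it registered for that bike.

    import java.security.KeyPairGenerator
    import java.security.PublicKey
    import java.security.SecureRandom
    import java.security.Signature

    // Simulated bike key pair; in reality the private key would live in the
    // bike's secure element and never leave it.
    val bikeKeyPair = KeyPairGenerator.getInstance("EC").apply { initialize(256) }.generateKeyPair()

    // Backend: generate a fresh, unpredictable challenge for each unlock attempt.
    fun newChallenge(): ByteArray = ByteArray(32).also { SecureRandom().nextBytes(it) }

    // Bike: sign the challenge relayed by the phone over BLE/NFC.
    fun bikeSign(challenge: ByteArray): ByteArray =
        Signature.getInstance("SHA256withECDSA").run {
            initSign(bikeKeyPair.private)
            update(challenge)
            sign()
        }

    // Backend: only start the rental if the signature verifies, proving the
    // phone really talked to that physical bike just now.
    fun verifyBike(bikeKey: PublicKey, challenge: ByteArray, sig: ByteArray): Boolean =
        Signature.getInstance("SHA256withECDSA").run {
            initVerify(bikeKey)
            update(challenge)
            verify(sig)
        }

    fun main() {
        val challenge = newChallenge()
        println(verifyBike(bikeKeyPair.public, challenge, bikeSign(challenge)))  // true
    }

That proves proximity to the bike without trusting the phone's GPS at all.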


I'm pretty sure they already do that. As always, the standard network security advice is "don't trust the client", and yeah, it'd be nice to be able to trust the client, but it would also mean the total abandonment of any meaningful user control over their own devices, so it's not worth it IMO.


I fully agree with where you're coming from, but you kind of veered off with that last sentence. In general I think the threats from fine-grained surveillance databases are a lot more nuanced and pernicious than genocide.


> A lot of businesses depend on the fact that location data is hard to fake.

You spelled "spammers and personal data spies" wrong and it somehow ended up as "businesses"...


There are a lot of legitimate use cases that require reliable location data. I mentioned a few that I could think of in a sibling comment, but I'm sure there are more. Maybe you can come up with a use case for accurate location data yourself?

Anyway, spammers and data brokers probably wouldn't care at all if say 10% of people spoofed their location. They don't really have a lot to lose if some of their data is incorrect.


> a lot of legitimate use cases that require reliable location data.

If the use cases are aligned with the user, the user will give the correct location data.


Users also need other users to be honest about their location. Consider a dating app; you want to meet real people who live in your area, not a fraudster pretending to live just a few blocks away.

Or a delivery driver: You want them to actually drive up to your house and ring your bell, rather than just pretend to drive there and drop your package somewhere else.

Location data is worth a lot more if it is reliable.

The user should be in control of sharing their location. But you shouldn't be able to just provide a fake location.


> Consider a dating app; you want to meet real people who live in your area, not a fraudster pretending to live just a few blocks away.

So you shouldn't be able to use the dating app to set up a date for when you're back home while on a business trip or holiday?

Maybe you should just upload proof of residence to the dating app instead. But I'm sure you'd consider THAT a violation of privacy, while 24/7 location tracking of your phone isn't because ... it's electronic?

By the way, do you want to give your exact location to a profile on a dating app? Even if they're local, maybe they're serial killers.

> Or a delivery driver: You want them to actually drive up to your house and ring your bell, rather than just pretend to drive there and drop your package somewhere else.

This other case is legitimate but it can be solved by issuing the driver a work device that has tracking.

Unfortunately the other 10000 cases are unneeded violations of privacy.


Quick question: how come every scumbag who calls my phone with a scam has a fake caller ID, but I shouldn’t? Again, this seems pretty user-hostile.


For the same reason that every movie ends up ripped on piracy sites, but you still can't watch Netflix in 4k on Firefox on Linux.

DRM doesn't work because it only takes one person to bypass it to make a copy, and caller ID verification doesn't work because it only takes one janky provider that doesn't implement SHAKEN/STIR correctly and yet is worth too much money to totally block.

FWIW I can still generate calls with arbitrary caller ID from a handful of my (legacy) ITSP providers, but if I get a new account today with any of them, they will require me to either verify each caller ID by receiving an inbound call or provide a "valid business justification" for why I can't do that. They are working on tightening up the pathways to generating fake caller IDs but in the telephony world, nothing moves fast and uptime is more important than anything, except maybe revenue, of which spam calls account for a ton.


These scumbags shouldn't be able to have a fake ID, which is exactly what legislators in the US and the EU are currently trying to end.


Well, if legislators are trying to fix it, I suppose I feel better about the user hostility. Good luck to the legislators, and thank god we have people like them!


Well, legislators managed to abolish roaming fees within the EU, so maybe they'll manage to fix caller ID too.


You sound like you might be from the EU. I have some bad news for you about US legislators.


Also: screenshots. Firefox won't allow me to screenshot a private window. It's my damn phone and I should be able to screenshot or record whatever the hell I want.


Go to settings -> private browsing and enable "allow screenshots in private browsing".


It's not just Firefox (which, fortunately, has a setting to turn it off).

There are apps, whose names I won't mention, that use DRM to prevent screenshotting, and that's much harder to bypass.
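For reference, the more common block is a window-level flag rather than DRM; a minimal sketch of that mechanism (hypothetical activity name), which blanks the app in screenshots, screen recordings, and the recents view. Apps that render through DRM-protected surfaces, as described above, are harder still to bypass.

    import android.app.Activity
    import android.os.Bundle
    import android.view.WindowManager

    // Sketch: mark the window as secure so the OS refuses to capture it.
    // Bypassing this generally requires root or a modified OS.
    class SecureActivity : Activity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            window.setFlags(
                WindowManager.LayoutParams.FLAG_SECURE,
                WindowManager.LayoutParams.FLAG_SECURE
            )
        }
    }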


I think it might be a legal requirement for emergency services. I used to work a lot with VoIP and each line was required to have an address associated with it.


I'm fine with that, provided that there's sufficient oversight to prevent abuse.

What I'm not fine with is one large corporation that makes phones baking this feature in so that other companies that make apps can profit off it. That's two parties conspiring to fuck over their customers.

That needs to be regulated.



