Sure. I pulled my data via the standard takeout.google.com process. The result is a JSON file (iirc). I parsed the JSON into x, y coordinates and rasterized it using GDAL.
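For anyone curious, a minimal sketch of that pipeline. The `latitudeE7`/`longitudeE7` field names are from Takeout's Location History `Records.json` layout; the pure-Python grid binning below is just a stand-in for the GDAL rasterization step (GDAL would additionally handle projections and write GeoTIFF output):

```python
import json

def parse_locations(takeout_json):
    """Extract (lon, lat) pairs from a Takeout Location History dump.

    Assumes the Records.json layout, where coordinates are stored as
    integers scaled by 1e7 (latitudeE7 / longitudeE7).
    """
    records = json.loads(takeout_json)["locations"]
    return [(r["longitudeE7"] / 1e7, r["latitudeE7"] / 1e7) for r in records]

def rasterize(points, bounds, width, height):
    """Bin (lon, lat) points into a width x height grid of visit counts.

    bounds = (min_lon, min_lat, max_lon, max_lat).
    """
    min_lon, min_lat, max_lon, max_lat = bounds
    grid = [[0] * width for _ in range(height)]
    for lon, lat in points:
        x = int((lon - min_lon) / (max_lon - min_lon) * (width - 1))
        y = int((lat - min_lat) / (max_lat - min_lat) * (height - 1))
        if 0 <= x < width and 0 <= y < height:
            grid[y][x] += 1
    return grid

# Tiny synthetic example in the Records.json shape:
sample = json.dumps({"locations": [
    {"latitudeE7": 520000000, "longitudeE7": 48000000},
    {"latitudeE7": 520000000, "longitudeE7": 48000000},
    {"latitudeE7": 521000000, "longitudeE7": 49000000},
]})
pts = parse_locations(sample)
grid = rasterize(pts, (4.7, 51.9, 5.0, 52.2), 3, 3)
```

The resulting count grid can then be written out as a raster band (e.g. via `gdal_rasterize` or the GDAL Python bindings) and styled as a heatmap.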
Is that something Google's not doing with the data, but could? (E.g. they don't because their maps don't show most houses precisely enough or whatever, so it wouldn't be useful?)
Or is it relying on the fact that you are triangulating or similar from the known exact position of your WiFi routers or similar down to the inch, and Google doesn't have any way of knowing that?
Or they have decided that it's too creepy to use at all, so they don't use it for targeted advertising. Seriously, why does everyone assume that companies are evilly cackling in volcano lairs? They know that violating user trust is really expensive and a bad idea.
By the way, I'm pretty sure I've seen that Google's advertising targeting is only allowed to use "neighborhood level" location, which is designed to be coarse enough that individual people can't be singled out.
On the other hand, if the information is aggregated to a final answer, why is the data then kept? What if the _wrong people_ get ahold of the more sensitive information _because_ the data was kept beyond its useful life?
In their defense: it is perfectly fine with me for them to keep my location data, so I can download it later and do cool and/or useful things with it, as long as:
- it is opt-in,
- it can be deleted by me,
- it is not given to anyone else.
For all my trashing of Google lately (check my comment history), I actually expect and believe them to defend my raw data in a way that few others are able to. It all boils down to incentives:
- as long as they keep the data between them and me, they can sell targeted ads again and again. If the data leaks, others can skip the middleman.
- as long as they keep their reputation as nice guys, that is an immense advantage.
Now this might of course be changing, so everyone should consider whether they personally trust this arrangement going forward:
- it seems some part of the organization is tightening the screws on the Chrome team to squeeze out more revenue.
- if the data is available, there is always the risk of attacks, both cyber attacks and legal attacks.
Describing Google's data collection practices as "opt-in" is a bit generous.
>In going through a set of privacy popups put out in May by Facebook, Google, and Microsoft, the researchers found that the first two especially feature “dark patterns, techniques and features of interface design mean to manipulate users…used to nudge users towards privacy intrusive options.”
Location history is one of the areas where Google has employed dark patterns.
For example:
>Ways that Google tricks users into sharing location
Android users are pushed through a variety of techniques:
- Deceptive click-flow: The click-flow when setting up an Android device pushes users into enabling “Location History” without being aware of it.
- Hidden default settings: When setting up a Google account, the Web & App Activity settings are hidden behind extra clicks and enabled by default.
- Misleading and unbalanced information: Users are not given sufficient information when presented with choices, and are misled about what data is collected and how it is used. Information about location data being used for advertising, for example, is hidden away behind extra clicks.
- Repeated nudging: Users are repeatedly asked to turn on “Location History” when using different Google services, even if they decided against this feature when setting up their phone.
- Bundling of services and lack of granular choices: If the user wants features such as Google Assistant or photos sorted by location, Google turns on invasive location tracking.
More alarmingly, when users attempted to turn off location tracking:
>In a wonderfully clear example of “dark patterns” designed to mislead users and retain control over their data, Google continues tracking your location even when you turn off Location History and are told that “the places you go are no longer stored.” Google says it tells users, but its disclosure is the bare minimum and users are discouraged from further interference with data collection.
This is the main reason I trust Google with my data compared to other companies: the data they have on me is their biggest competitive advantage, so there's no way they are selling it to anyone.
Genuinely curious, could you provide an example of Google "doing things they swore they never would" with consumer data? Because I know they do plenty of things with data that people think are creepy, but I don't recall ever seeing a story about them doing things they swore they wouldn't (besides the nebulous "don't be evil") or even lying about what they were actually doing with consumer data.
If it's happened time and time again, it should be easy to pull up a source, right?
>Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking
>When Google bought the advertising network DoubleClick in 2007, Google founder Sergey Brin said that privacy would be the company’s “number one priority when we contemplate new kinds of advertising products.”
Random example: when Chrome logged you into the browser without any warning because you had logged in to Gmail, it also swiftly synced your local data (like browsing history) to your Google profile. That’s akin to stealing your data, and I think at some point they must have promised not to do it...
>Google has been accused of breaking promises to patients, after the company announced it would be moving a healthcare-focused subsidiary, DeepMind Health, into the main arm of the organisation.
>The restructure, critics argue, breaks a pledge DeepMind made when it started working with the NHS that “data will never be connected to Google accounts or services”.
They claimed multiple times publicly that they weren't scanning the emails of students at schools that required students to use Google email and Chromebooks, when in fact they were using those emails to build ad profiles.
"While the allegations by the plaintiffs are explosive, it’s the sworn declarations of Google representatives in response to their claims that have truly raised the eyebrows of observers and privacy experts. Contrary to the company’s earlier public statements, Google representatives acknowledged in a September motion to dismiss the plaintiffs’ request for class certification that the company’s consumer-privacy policy applies to Apps for Education users. Thus, Google argues, it has students’ (and other Apps for Education users’) consent to scan and process their emails."
"In November, Kyle C. Wong, a lawyer representing Google, also argued in a formal declaration submitted to the court in opposition to the plaintiffs’ motion for class certification that the company’s data-mining practices are widely known, and that the plaintiffs’ complaints that the scanning and processing of their emails was done secretly are thus invalid. Mr. Wong cited extensive media coverage about Google’s data mining of Gmail consumer users’
>Mr. Wong’s inclusion of the following reference to the disclosure provided to students at the University of Alaska particularly caught the attention of privacy advocates: The University of Alaska (“UA”) has a “Google Mail FAQs,” which asks, “I hear that Google reads my email. Is this true?” The answer states, “They do not ‘read’ your email per se. For use in targeted advertising on their other sites, if your email is not encrypted, software (not a person) does scan your email and compile keywords for advertising. For example, if the software looks at 100 emails and identifies the word ‘Doritos’ or ‘camping’ 50 times, they will use that data for advertising on their other sites.” “The fact that Google put this in their declaration means we take it as true,” said Ms. Barnes of the privacy watchdog group EPIC. Google’s sworn court statements reveal that the company has violated student trust by using students’ education records for profit.
Their only goal is to make money - by definition. It is not that people assume they are evil - just that they will pursue whatever earns them money.
My new startup uses SDKs embedded into popular apps that make ultrasonic clicks and use sonar-like reflections to estimate the length of toilet paper remaining on the roll (using AI, machine learning, and blockchain, obviously). /s
Perhaps this level of location resolution is not stable enough at Google's scale to be worth presenting?
AKA: better to show reliable fuzzy information than unreliable precise information.
Considering that Google has a history of cloaking information via the UI (see purchase history, hidden if you have G Suite but still fully accessible via Takeout), and that Google offers advertisers the ability to see whether you have visited a particular store, even in an indoor mall, I am sure Google knows your location more precisely than it reports.
This is pretty cool, but I'm still confused about what data you used.
By my data, do you mean data from Google Android Device Configuration Service?
If you're logged into Chrome or G Suite tools from desktop locations, I just wonder how useful the data from those other products would be, if it even includes location data.
I'm downloading my data archive to check it out...
Yeah, this isn't the last season of Silicon Valley, you've explained it really well. Theoretically it's a very dangerous attack vector; in practice, however, it seems impossible given how big Bitcoin is overall.
I think the author has an incredibly clear style of describing things, and I enjoyed reading this a lot. It's also worth noting that he stated from the beginning that he's no expert on Asia and is simply passing on his impressions.