Google Now has done the same for me, told me how long it will take to get to a bar I frequent. My reaction was pretty much exactly "Oh, that's neat, thanks!" and I went and had a great burger that night.
Totally OK for me, Google. I respect that people have different privacy thresholds, but I think the fact that it's different for everyone is being lost in articles like this.
One tiny caveat: Google (and others as well, to be fair) will be able to indirectly collect data even on the privacy-aware part of the population who don't use Google much. The simplest example: even if you don't use Gmail, some part of your emails inevitably ends up in Gmail inboxes. Now also consider this: just being a guest in a Google-stuffed house means you are under surveillance.
So no, it is not just my problem or your problem, it's everyone's.
Sure, and to illustrate your point, I have an email very similar to someone else's. I very frequently get their emails (invoices, church events, travel itineraries, purchase receipts). Google thinks they're _my_ trips, and updates me about flight times.
I think this example serves both our points. To your point, it's totally leaked this other person's info into my "google world" because I'm on gmail. On the other hand, that person is leaking his information directly to me just because of typos when he fills out online forms. Perfect privacy requires a lot of vigilance in a digital world, with or without google/gmail/hotmail/yahoo/etc.
> Sure, and to illustrate your point, I have an email very similar to someone else's. I very frequently get their emails (invoices, church events, travel itineraries, purchase receipts). Google thinks they're _my_ trips, and updates me about flight times.
Fun story there. Because Google's internal privacy safeguards are so strict, the people working on features like that can't look for example emails to train their ML models with.
They can only look at emails that were explicitly sent to them in order to improve the feature (and almost no one forwards along positive or negative examples). What they can do across the email corpus is run jobs that return aggregate stats, where each stat must be coarse enough that it is infeasible to trace back to original users (often 100k+ users per data point).
So, AFAIK, training & testing models under these safeguards is more or less done blind. Build a model with the few examples you do have, and then run it against the corpus. If you see numbers change, you have no idea if that's good or bad, since you can't actually inspect the run.
(at least, this is the way it was a few years ago)
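To make the aggregate-stats idea concrete, here's a toy sketch (the names and the 100k threshold are purely illustrative, not Google's actual pipeline) of a job that only releases a statistic once enough distinct users contribute to it:

    from collections import defaultdict

    # Hypothetical k-anonymity-style gate: only release an aggregate
    # statistic if at least MIN_USERS distinct users contribute to it.
    MIN_USERS = 100_000

    def aggregate_feature_stats(records):
        """records: iterable of (user_id, feature_bucket, model_score)."""
        users = defaultdict(set)     # bucket -> distinct contributing users
        totals = defaultdict(float)  # bucket -> running sum of scores
        counts = defaultdict(int)    # bucket -> number of records
        for user_id, bucket, score in records:
            users[bucket].add(user_id)
            totals[bucket] += score
            counts[bucket] += 1
        # Suppress any bucket too small to be safely anonymous.
        return {b: totals[b] / counts[b]
                for b in totals if len(users[b]) >= MIN_USERS}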
> and almost no one forwards along positive or negative examples
One time, when I marked a bunch of incorrectly-classified emails as "not spam", the Gmail web UI asked me if I wanted to send these emails to the Gmail spam team. Was this what you meant, or something else?
Yeah, that sort of thing - by giving permission on a per-email basis, they are then allowed to look at that particular email to see how/why the model is misclassifying it, AFAIK.
(That's different than just marking an email as spam, though)
That's one of the interesting things about AI. There's no way to clarify/correct when something is wrong, and most of the time you don't even know something is wrong.
It's not clear whether these AI models have much incentive to correct anything. If 99% of people with attributes x, y, and z are bad candidates for a job, will you even get an interview? Is there any attempt to account for the fact that attribute x is something you were born with? Or that you are actually in the 1% and really are a good candidate? Or that you don't actually have attribute y, and it was just inferred from something else or some kind of mixup like an email address typo?
There are all kinds of interesting thought experiments. What happens when a classifier innocently discovers that the best classification is by race? Do we care? How about if we remove race but it happens to discover that four features which are very strongly correlated to race are the best way to classify?
If that second scenario were to happen, then I think we should take a serious look at why that correlation is occurring rather than just throwing out the data because it's "racist". That we removed the classification and then it was re-discovered by other correlations really should suggest something. On the assumption that it wasn't engineered to be biased and was naturally arrived at by the algorithm itself, then that actually seems like an important data point, and could even be a nice litmus test of how we're addressing racial differences if the models evolve to be more positive over time.
There are ways to account for that. A model can be fit to race, and then you only predict "on top" of race (meaning residuals). You use that model, which is independent of race.
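A minimal sketch of that residualization trick (toy data with scikit-learn; all names here are made up for illustration):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X_protected = rng.integers(0, 2, size=(1000, 1)).astype(float)  # protected attribute
    X_other = rng.normal(size=(1000, 5))                            # remaining features
    y = rng.normal(size=1000)                                       # outcome

    # Step 1: fit a model on the protected attribute alone.
    base = LinearRegression().fit(X_protected, y)

    # Step 2: the residuals are the part of y that the protected attribute
    # cannot explain; fit the real model on those.
    residuals = y - base.predict(X_protected)
    model = LinearRegression().fit(X_other, residuals)

Note this is only a partial fix: as noted upthread, features in X_other that are strongly correlated with the protected attribute can still leak it back in.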
> If 99% of people with attributes x, y, and z are bad candidates for a job, will you even get an interview? Is there any attempt to account for the fact that attribute x is something you were born with?
Why would that matter to the company? People are born with stupidity.
I have a fairly uncommon name and predominately use Gmail. Still, I receive a surprising amount of other people's mail in error. Apparently lots of people guess at email addys. I do often wonder at both the lack of privacy this engenders and what the Goog machine must make of it all.
I definitely worry about my life being affected by situations like this.
I receive a ridiculous amount of other people's email - from serious things like email account resets to banking info. When the Ashley Madison hack happened, my email address was there multiple times! Imagine if my then-partner had bothered to look; what a mess that would be.
> from serious things like email account resets to banking info.
I've called American Express a couple of times to report errant emails with account info ending up in my email. THEY didn't care that one of their customers had an issue, they thought it was a great time to have me fork over MY info "to check".
My partner received a few emails which a government representative had tried to forward from their work account to their private Gmail account whilst getting the address wrong. These contained personal information relating to correspondence with constituents.
From time to time I have had messages relating to a government planning committee as one of the members got the domain of someone's departmental email account wrong and the messages come instead to the domain I administer.
I know someone who has his (common) first name @gmail.com as he was an early employee on Gmail, and he regularly gets grandparents guessing their grandson's name as the correct e-mail address. He told me this happens several times a day.
I just got a funny email last week... welcoming me to the NRA. I was like... but I didn't join the NRA. I sent them an email about it, but I'm probably going to have to CALL the NRA to tell them to remove my address from their database.
Getting these things corrected can actually be quite difficult. I regularly get emails from an optician in another state. I have replied multiple times that they are sending information to the wrong person, but they continue to do so. Fortunately this has never included detailed health information, but merely revealing a patient relationship with a specific medical provider can be a breach.
In another instance I was getting emails about an account someone created with American Express using my email address. I sent multiple emails to their customer support to get them to stop sending financial information to the wrong person with no results. In this I also found it difficult to even figure out what agency to report them to for failing to take action when notified. Eventually I took the time to call them. It took around 20 minutes (not counting time on hold) of talking to multiple people to get them to remove the email address from the account. This included them asking me several times for my social security number - which I flatly refused to provide since I had zero business relationship with them. This refusal actually seemed to confuse them.
The thing that gets me is when people seem to be guessing at their own emails when filling out forms and such. How do you not know your own email address?
Many of the people I know who have very common gmail addresses have set up canned replies to let senders know they've reached the wrong person.
> On the other hand, that person is leaking his information directly to me just because of typos when he fills out online forms.
I often get some misdirected emails because of a very banal name in my country.
When the email contains a thread history, I sometimes notice that the address was simply corrupted by a recipient, such as numbers getting dropped from the genuine address in their reply.
I guess some systems can't correctly handle numbers or other characters in email addresses.
Wonder if one should press for having these services accept a public key alongside the email address that they then are obliged to encrypt all outgoing emails with. Thus even if the address is wrong, the recipient can't easily read the content.
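A rough sketch of the sender side, assuming something like NaCl sealed boxes (PyNaCl); the idea that the service hands a key around with the address is purely hypothetical:

    import base64
    from nacl.public import PublicKey, SealedBox

    def encrypt_outgoing_email(body: str, recipient_pubkey_b64: str) -> bytes:
        """Encrypt an outgoing email body to the public key the recipient
        supplied alongside their address. If the address was mistyped,
        the accidental recipient can't read it without the private key."""
        pubkey = PublicKey(base64.b64decode(recipient_pubkey_b64))
        return SealedBox(pubkey).encrypt(body.encode("utf-8"))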
How does that solve anything? Either these services will have to publish the key on your behalf (so you can lookup the public key for bob@gmail.com with some public API), or you will have to provide the public key every time you hand out your email address.
The former doesn't fix the issue at all, and the latter is unworkable because the guy reliably giving out the wrong email address will absolutely not remember his public key.
Had the same thing: Google swore I was supposed to be booking into a hotel in Copenhagen, but it most definitely wasn't me. It'll be interesting to see what happens as these predictive features become more prevalent.
This is often useful even if you're not the flier. When I'm picking someone up it lets me know when their flight was delayed, and when I'm traveling it lets my wife know when I'm available.
If you visit a place of business you are potentially under surveillance. I'm sure there is a distinction but I'm failing to come up with it right now.
I was somewhat radical about privacy in the late 90s (only person I knew that read every EULA) and am still a supporter of the EFF but I don't really understand the issue here.
I tend to agree. I tried to do some searches for a law citation here but struggled to find anything concrete. I imagine this will be a big area of research and exploration for law in the coming years.
Some interesting scenarios to consider: If I visit a friend's house, and I start getting targeted ads for a service I didn't subscribe to without prior consent or my knowledge, can I sue her/him? What about a scenario where some service collects my data, said service is hacked, and someone commits identity theft on me, who is liable for damages? Do I need my buddy to sign a waiver when he visits to play some Xbox for a bit?
Wasn't there a story recently about android phones still being tracked and tracking wifi hotspots even with wifi turned off? I believe location services still works pretty well even with GPS and wifi off.
I would imagine if you walk into a home with google's AI doodads all over the place, you're gonna be picked up.
Yes, there was recently a question of how Android knew someone's location with such high resolution, even when wifi and GPS were both turned off. The answer was passive probing of wifi identifiers, matched against Google's SSID-to-location database even when wifi was turned off.
There was even a note about this feature in the privacy policy, IIRC.
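For illustration only, the underlying mechanism amounts to looking up nearby access points in a location database (the table here is obviously invented):

    # Toy sketch of wifi-based geolocation: match the BSSIDs of visible
    # access points against a (hypothetical) BSSID -> lat/lng table and
    # average the hits.
    KNOWN_APS = {
        "aa:bb:cc:dd:ee:01": (37.4220, -122.0841),
        "aa:bb:cc:dd:ee:02": (37.4225, -122.0850),
    }

    def estimate_location(visible_bssids):
        hits = [KNOWN_APS[b] for b in visible_bssids if b in KNOWN_APS]
        if not hits:
            return None
        return (sum(lat for lat, _ in hits) / len(hits),
                sum(lng for _, lng in hits) / len(hits))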
Perhaps people should be required to post a notice on their front door if their house has a Google surveillance device in it (or I suppose, verbally inform every visitor). It's common (and often required) for businesses to post notice that people on the premises are under video surveillance.
I have a couple of cameras, and at least one visitor has been uncomfortable with their presence, despite the fact they're not sending the data to a third party.
> If you visit a place of business you are potentially under surveillance. I'm sure there is a distinction but I'm failing to come up with it right now.
A business is typically located in a public space, where there is no expectation of privacy. In contrast, a home is by definition a private space where there is a strong expectation of privacy.
Exactly! In a public space, you can expect someone to be collecting data about you, even if it's only photographing you in the background. In a private space, you expect no one to be collecting data about you, except the people you are directly interacting with.
Aren't there some wiretapping laws around this sort of thing?
This seems a bit analogous to the treatment of attorney-client privilege (at least in the US.) If I have a discussion with my attorney in my house, or in my attorney's office, with no one else there, the conversation is generally privileged. If I have that conversation in the presence of a third party, the conversation is no longer privileged. So, if I'm in someone else's house, I may have implicitly consented to the fact that I don't control the environment and can no longer assume the same level of privacy.
A home may be a private space but it isn't <i>your</i> space. When entering someone else's space you open yourself up to having to abide by their rules, listen to their music, and be captured by their cameras.
It's one thing to be under surveillance by a grocery store's security cameras and another thing when those cameras are sending all your data to the same central organization. Right now we aren't that great at integrating all that data — knowing that I'll be in a certain bar on Sunday nights is a little disconcerting, but if I knew that was the limit of AI assistants' invasiveness I'd be embracing it for the convenience. But technology for AI and data processing will improve, and data gathering will become more pervasive and more centralized, and soon megacorps will have a much more complete profile of you that they can query if ever they or one of their employees develops malicious intent (not to mention the non-malicious actions that can cause harm).
There's privacy in decentralization but you have to recognize that for many of us, there's value in openness.
I'm very optimistic about Google making my life better in lots of little ways. I have virtually no concerns about my openness to Google causing me trouble.
I think that's a common goal and widespread belief and find its implication of having given up on "bigger ways" troubling.
Our industry was going to change the world. Rewrite the cultural rules of socializing. Rebuild the economy in a new form. Encourage major cultural and political shifts by empowering the little guy. Bring knowledge to the ignorant, companionship to the lonely, power to the powerless, jobs to the jobless.
What we have now is, well, maybe, with some luck, timidly, sometimes google can analyze the last time I went to a gas station, the number of miles I've driven at various speeds since then, and the time of my next appointment; then it can adjust my google now card to encourage me to leave home earlier so I'll have time to fill up the tank, and it'll find the cheapest advertised price along the way. That's nice ... but where's my revolution?
Even worse in the context of the article: OK, say I have to give up all privacy and go hard-core 1984 telescreen, Big Brother always watching, to save endangered species and house the homeless and feed the starving and bring peace to the victimized. Well, OK, I'll think about it. Oh wait, we don't get any of that; all we're offered is slightly better appointment scheduling. Eh, no thanks.
Nothing is ever really new, and this era is likely similar to pre-quantum physics around 1890, when nothing seemed left to discover other than adding a few more decimal places here and there. The memes and speech patterns are all the same (although I'm not quite that old).
While it doesn't seem like that much of a revolution, things have changed significantly.
For centuries, people navigated using stars and maps, yet now you have turn-by-turn navigation in your pocket with real-time traffic and crowd-sourced traffic incidents. You can reach most of the world's population instantly by dialing a few numbers or even sending them a text message or MMS, no matter where they are.
In the context of the article: a couple of years ago, Google Now told me I had to leave for a meetup in my calendar that I had completely forgotten about, gave me transit directions for a part of a city I'd never been to, and got me there exactly at the meetup start time.
While that all seems like modern convenience nowadays, even thirty years ago if you'd told people that you couldn't get lost anywhere in the world and could get ahold of just about anyone at any time instantly, they'd think you were talking about science fiction, not today's reality.
Google has top-notch practices, in my opinion, regarding privacy. If anything I would be more worried about smaller companies with shadier practices and lower security standards holding my personal information. There have been seemingly illegal practices from companies like Sears in how personal data is collected and used. It's easy to throw shade at a big company, write a sarcastic title, then get clicks.
Example: "Intuit’s TurboTax stores highly detailed financial data for millions of users who import their W2s, their banking data, info about their mortgages and more. Right now, all of this data is locked into TurboTax, but the company is now thinking about how it can do more with it by giving its users the option to share this data with reputable third parties." ... https://techcrunch.com/2016/09/22/intuit-wants-to-turn-turbo...
> I would be more worried about smaller companies with shadier practices and lower security standards holding my personal information.
True, I don't mind Google today, but what about tomorrow? What about N years from now when they have failed to hit their financial targets 2 years running. Will that company have the same set of standards as the one today?
The reality is that once you've given up your privacy there is no getting it back.
For me the real problem is that large and trusted companies like Google are softening the public perception of what privacy should be and making it easier for smaller and more malicious companies to abuse the trust that Google and others generate.
Perhaps we need to start normalizing encrypted email. Just as HTTPS everywhere is no longer considered "tin-foil hat" SOP, moving in this direction for email needs to be socially normalized.
Going further, given that an encrypted email to Gmail will simply be decrypted and then available to Gmail, include in the protocol a way to authorize (via both whitelist and blacklist means) agents of the recipient. So, if you are hosting your own email but the intended recipient is expected not to be hosting their own, the sender can blacklist "agents" such as Gmail and Yahoo! Mail, or blacklist all except those white-listed, such as Proton Mail.
I was thinking about this just yesterday. It would be great if email providers made tools to painlessly create encryption keys and get them synced up to all your devices securely. From there each email you send can include an automatic signature of where to find your public key. On the other side when you receive emails from others with such a signature you can then choose to have the email reader fetch the key and let you start trusting it with a single confirmation click. From then on your emails to that person will be encrypted without you having to think about it.
For almost all users getting a message from someone you personally know containing a message that they would legitimately send would be sufficient to trust the referenced key. If you are a little paranoid you could ask them over the phone or in person to send a specific message that you would then trust, or just ask for the URL itself. I believe that in-person key exchanges utilizing large trust networks are overkill for the vast majority of people sending everyday communications.
Make it as simple as 1) one-time key creation and copying to each device, and 2) one-time trust per other person, completely streamlined by the software. If the Gmail team implemented this, for example, other email providers would soon do the same and it would spread like wildfire. Very quickly a huge amount of email would be encrypted. Of course this would require an open design that anyone can implement.
Maybe there's a fundamental flaw with this idea that I'm not seeing. If so, please say so because otherwise I'm going to just be disappointed in five years when email is still 99% unencrypted.
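For concreteness, here's roughly what the trust-on-first-use step could look like (the X-Pubkey-Url header and the storage format are invented for the sake of the sketch):

    import json
    import urllib.request
    from email.message import Message

    TRUST_STORE = "trusted_keys.json"  # sender address -> pinned public key

    def maybe_trust_sender(msg: Message):
        """On first contact, fetch and pin the sender's advertised key;
        afterwards, always reuse the pinned key (a real client would
        warn loudly on any mismatch)."""
        sender, key_url = msg["From"], msg["X-Pubkey-Url"]
        if not sender or not key_url:
            return None
        try:
            with open(TRUST_STORE) as f:
                store = json.load(f)
        except FileNotFoundError:
            store = {}
        if sender not in store:
            # First use: fetch the key, ideally after the single
            # confirmation click described above.
            with urllib.request.urlopen(key_url) as resp:
                store[sender] = resp.read().decode("ascii").strip()
            with open(TRUST_STORE, "w") as f:
                json.dump(store, f)
        return store[sender]  # encrypt future mail to this key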
I'm not sure you are any less responsible for your own privacy just because companies like Google, as well as others, are making it more challenging. The example you mentioned seems easily fixed by using GPG.
Granted, there may be a place for regulations to help us restrict what companies are able to do (perhaps making it easier for you to identify a region that is being recorded), but at some point society can't help the fact that you'd prefer machines to be unaware of your existence. That's just something you have to solve for yourself.
If you don't care for the products or services, you could always opt not to buy/use them. That includes avoiding areas where Google appliances are present. I'm fine with trading privacy for AI-driven convenience, though I already have a phone I like and an Echo, so I have no compelling reason to buy these particular products right now.
"Excuse me, is that an android phone in your pocket?" Also, what about google self-driving cars with full arrays of cameras, radar, etc. recording everything as they drive past you?
a non-insignificant proportion of hacker news seems to actually work at google and run to their defense at anything somewhat critical of their overlord.
There just isn't any point to accusing people of shilling without evidence. The idea that one must be paid to hold a different opinion than you is offensive, and expressing it here isn't something the community wants.
i actually had 3 upvotes until the google people started to change that. so some do. i'm not accusing anyone of shilling. i've personally seen these threads, and some proportion of people openly admit they work there. others, you click their profile and see they work at goog. don't need you to condescendingly assume you speak for the community. also forgot to mention all the people who defend google because they're desperate to work there.
Sorry, but you don't think that it's somewhat unfounded paranoia?
Personally, I would have downvoted you because your original point seemed to condescendingly assume something about a large proportion of the community and their intentions - then you assume again that it's "The Google People" (and only them)?
Unless you can show that your downvotes came because of a specific reason, and from specific people...
my original comment was " a non-insignificant proportion of hacker news seems to actually work at google and run to their defense at anything somewhat critical of their overlord. ". it isn't paranoia. like i said, i've seen threads where 4/10 comments are defending Google and the commenters openly say they work at Google, or one can click on their profile and see that they openly list they work there. didn't imply shilling.
I fear you will be unable to recognize when that burger was your choice and when it was a reaction. You probably won't notice. And that is harmless.
I also fear you will be unable to notice in which areas of life and information the distinction between choice and reaction is harmless and which it isn't.
Of course, I'm not talking about "You" you, but just people. Me as well. I feel we are widening the field of unconscious decisions and I see that as inherently bad - in my fellow humans as well.
I'm sorry but that just sounds like blind fear mongering. What you're saying is vague and doesn't really mean much.
It's like saying we shouldn't use prescription glasses, or medication, or cars, because it's not "us".
Humans invent all these tools and systems to improve and optimize our lives. Make our vision better. Make our health better. Make us move around faster. In the case of AI, make us perform certain things more efficiently.
Imagine it wasn't actually a computer. Imagine it was a personal secretary you had that gave you the EXACT same information. Gave you your flight information, turned on the light when you asked, gave you the weather and your schedule. Would you think that was wrong? That this isn't "you"? No, it's just optimizing your life, but now available to a wider population rather than just rich people.
"Imagine it was a personal secretary you had that gave you the EXACT same information. Gave you your flight information, turned on the light when you asked, gave you the weather and your schedule."
In your metaphor, you are implicitly paying the secretary, so the secretary is incentivized to maintain your interests.
How much have you paid Google for its free services?
Your metaphor is inapplicable. You don't have a secretary telling you these things; you have a salesman trying to sell you things, and the salesman is getting smarter every day while you aren't. Not the same thing at all.
That's why I called them a salesman. They sell things. Their interests are not simply your own.
It seems to be a theme here today... a company can't serve both advertisers and customers. In the end, one of them has to win, and given the monetary flows, it's not even remotely a contest which it will be. https://news.ycombinator.com/item?id=12644507
They don't sell things. They forward you towards people who do sell things which you may be interested in. You're free to ignore it, and if you're not interested in what they're showing you, that means they failed at their job.
It's funny how bad a stigma ads have gotten, but at the core, if you think about it, it's not necessarily a bad thing. Think of a friend recommending a restaurant, a new game to play, a movie to go watch. In that case you'll be super interested, but now if this AI, which probably knows your taste better than your friend, suggests something to you, you are instantly turned off and annoyed.
I think the root cause of this is that there are so many mediocre ads out there that they ruin it for everyone. Your mind just blindly blocks all ads now.
> Google is selling you, to advertisers, quite literally.
No, that would be slavery, which is illegal.
Google is selling advertising space on various channels that you provide in exchange for Google services to advertisers.
> When you aren't paying anything for something of value, YOU are the product.
No, when you aren't paying money for something of value, you are probably paying something else of value for it; often, something that the person with which you are trading is then selling for money, making you a supplier of an input to the good or service they are selling for money.
What he's saying is that this is not "humanity inventing something to make life better". It's a company inventing something to make money.
And it's not a simple product like glasses where you pay with money and then they improve your vision. It's a product which goes far beyond your understanding and for which you don't pay money.
Google isn't interested in making your life better. What they are interested in is getting you to believe that they want to make your life better and to then recommend going to that bar, because the bar owner has given Google money to advertise for the bar.
Yes, you might actually like that bar, but Google isn't going to recommend going there in intervals which are beneficial to you. They'd rather have you go there a few too many times. Because that's what makes them money. It's not improving your life, which makes them money. Their AI will always work against you, whenever it can without you noticing.
Imagine that you were trying to quit smoking and your electronic secretary kept updating you on the cheapest place to find your favorite cigarettes? With no way to tell it not to do that.
So your issue is your secretary doing its job poorly?
First, there is a way to tell it not to do that. With Google Now, you simply tap the menu and say "No more notifications like this". With the Assistant, you will probably be able to ask directly.
Second, let's be honest, humans fail pretty often too, so that's just a weak argument.
Lastly, I think it's unfair to dismiss a new technology just because it could maybe fail, without having even tried it.
How about if the system is working exceptionally well, you're a depressed person, and the next ad you see is auctioned off between a therapist, a pharma company, and a noose supply store in the 100ms it takes to render your MyFaceGram profile?
The awful success cases are far more interesting than the awful failure cases.
I have no problem with ads for therapists or pharma companies competing for advertising space in front of me because they have algorithmically determined that I am a qualified lead. That actually sounds great from a mental health perspective.
I think that algorithms, and AI specifically, are perfectly able to learn what distinguishes those. Maybe even better than someone who might not be in their best state of mind.
Because the whole of Google's ad business stands on people wanting to click on the ads shown, and buy the products offered through them. That's why they spend resources on detecting misleading or fraudulent ads, which by your reasoning they wouldn't care about as long as they paid. PR is very important for this business to be sustainable: If the goal was for every user to click through one ad, and then never again, that might not even pay one engineer's salary.
What's misleading or fraudulent about those ads? Maybe you mean "morally reprehensible," in which case I ask where you draw the line between the morally reprehensible (auctioning off the method of suicide to a depressed person) and the morally questionable (say, auctioning off the final bankrupting car purchase to a financially irresponsible person)?
Detecting misleading and fraudulent ads is just an example of things they wouldn't spend resources on, if following your reasoning of "short-term money is the only thing they care about."
There's not only the "morally reprehensible" metric ("Don't be evil"); there's also the "absolute PR catastrophe" metric that printing such an ad for a rope would mean.
> So your issue is your secretary doing its job poorly?
I think the real issue is the casual deception which you just fell for: It isn't "your" electronic secretary, and the thing it just did might actually be a "good job" from the perspective of those who control it.
I'm not saying we shouldn't use AIs. We should, however, think about how we use them.
To build on your example, what are the dangers of having a personal secretary on the payroll of anyone but you?
What I am expecting from this is a super devious filter bubble - because that's how you make money. Google's old slogan "Don't be evil" is long gone. "For a greater good" might be more on point.
>In the case of AI, make us perform certain things more efficiently.
What does the Google Assistant help me do more efficiently? In all honesty, I can't figure it out. I don't need or want a secretary, and I can do written planning for myself.
I need less paperwork and fewer web forms and identities, but the Google Assistant only promises more of that crap.
I'm never buying one. It's a sacrifice of privacy for zero to marginal gains in convenience.
Ignoring your derisive tone, the statement "most people get through their daily lives just fine without it" applies to every new technology. Yet here we are, typing away on the internet.
To my mind, leading a simple life is enjoying a burger at a restaurant/bar I frequent already. Simplicity _is_ accepting that Google algorithmically noticed a trend and just helped me do things I already do.
Yes, because when one has already decided that feature $FOO is a trap, any further discussion is likely to be limited to describing how "yes, just like a trap is designed to...so is the thing we're talking about" whether the analogy is apt or not. Something something supporting a narrative.
Calculators provide you a completely fair assistance with your query. There is zero bias in a calculator. If you ask it what two plus two is, you're going to get four.
Google is designed to sell ads, and subtly influence your behavior towards the most profitable results. Please do not confuse a fact-based tool with an ad generator.
> subtly influence your behavior towards the most profitable results
This is the very common theory that a company will (shadily) try to offer you a worse product to make more profit. It fails to account for competing companies that would jump on that opportunity to offer their better product, and get the market share.
But what's funny here is that the suggested alternative is to not get any product at all. As in: "Poor OP, didn't realize that it wasn't really him who was enjoying that burger he was enjoying."
"Worse" is often subjective. And the problem is often just the removal of the possibility of a better product to take hold. For example, Google prioritizes Google services. It gets you on as many Google services as possible. Let's use, say, that it pushes you towards Play Music when you search for songs.
Maybe Play Music is the best thing. Maybe it is not. Neither of us can answer that. But if a definitively better product comes along it will have no way to make a foothold because Google is still pushing everyone to their own product, from their other product (Search), and even when people try your product, if they use Google's other products, they'll tend to stick to other Google products.
Honestly, the worst problem with companies like Google is vertical integration. The ability to provide a wide product line where you integrate best with other products your own company makes has an incredibly chilling effect on competition, and therefore, innovation.
And if your theory is that companies prioritizing results for profit would lose to companies that always prefer the best products, why is DuckDuckGo still in, what, fourth or fifth place?
> And if your theory is that companies prioritizing results for profit would lose to companies that always prefer the best products, why is DuckDuckGo still in, what, fourth or fifth place?
You'd need to argue that DuckDuckGo's search results are better; I don't think they are. That's what made Google first among many competing search engines, before there was even a clear business model in it. Today the incentive to outperform is bigger.
If a product Y that is definitely better than X comes along, and only Google Search fails to rank it higher, people will start thinking "I'd rather search on Bing, as it finds better products in this category".
That's the thing though. I reject the notion that you ever actually make a choice. I would posit that 100% of the actions you take are simply the deterministic reactions when the current world state is filtered through your brain. Then, after the fact, your brain gets busy inventing a reason that you took a particular action and calls it a "choice" when really you were just going to do what you were going to do anyway.
"I ordered this burger because I was hungry and it tastes good" vs "I ordered this burger because Google was able to successfully predict that I would be receptive to having burgers, or the idea of burgers, placed in my environment"
Sure, but philosophical musings on the nature of free will aside, there's a practical worry about the amount of power a private company has over your actions. I'd rather be ordering burgers because they taste good than because a company wanted me to - I expect this will lead to greater happiness for me in the long run.
Yes, but only because your happiness metric maximizes when you exercise your freedom of choice.
Other people's happiness metrics work differently, and all popular web services are popular precisely because they satisfy the unconscious desires of the majority of people.
i am no longer intrigued by the privacy discussion but the actual possibility that we are just consciousnesses controlled by the google hivemind.
this is like absolutely full on plugged into the matrix world. and we're living right in it.
these guys are like the ones who've taken the red pill, and gone on to find out how deep the rabbit hole goes.
(edit: i'm even more intrigued by the possibility that the future is not just the matrix singularity, but an oligopoly of several large singularities, all fighting to plug us in)
When the AIs are working in service of corporations this seems incredibly unlikely.
We already see what happens when people's decision making is coloured by mass media advertising: an obese population trapped by debts taken out to fuel consumption.
It is in other people's best interests for you to work like a slave, be addicted to unhealthy habits & run up vast debts in order to buy their products.
We keep allowing those with power to distort the markets gaining themselves more money and more power at the expense of the little guy. I don't see any reason why AI in the service of the powerful will do anything but accelerate that.
Is it Google's responsibility to? I would say no. If algorithms detect that an individual is going to a bar every Monday and Thursday night, and then starts providing information about said bar on Monday and Thursday nights I don't see the problem.
But I think it would be a problem if every Monday and Thursday night Google Now started providing information about AA meetings in the area, instead of bar information. It's up to the user to make the choice, Google Now just detects trends and then displays information based on those trends.
I go to the gym every Monday, Tuesday, Thursday, and Friday morning. And each of those mornings Google Now tells me how many minutes it will take me to get to the gym from my current location. Should Google Now start giving me directions to the nearest breakfast place instead? No, not unless that starts becoming my pattern.
If you're trying to change your lifestyle, it's more difficult when you have a bad friend constantly enabling the behavior you're trying to cease.
Google may not have a responsibility to be a good friend, but personally I'd prefer not to have a bad friend always following me around, thus I'm a little less excited about this feature.
I think many would rather tell it when to start instead. What's hard about telling it to stop is when you can't tell it's started because it's something more nuanced than the obvious diet plan.
It may not be their responsibility (although if it had that information it would be the morally correct choice). However, regardless of the responsibility -- the CEO of the company saying "we're going to make your life better!" by an AI pushing products is almost certainly not going to make your life better.
> Should Google Now start giving me directions to the nearest breakfast place instead?
That may depend on how much Waffle House pays for advertising, and that is the problem.
don't you think that's a pretty severe statement wrt free will and agency? if i'm just a consumer wired up to a machine that's deciding what's best for me (even with the best of intentions), doesn't that make me less human?
should i just be an actor playing through a set itinerary of vacations and movies and burgers and relationships? maybe you think it's that way already, except less perfect than it might be, but that's a pretty frightening notion to me.
Given all the other points in life where, despite my awareness, I don't have much choice, how is an AI just directing me really any different?
My culture, education and skills limit what work I can do.
Our culture places limits on a vast number of experiences. On the road and the only thing is fast food? Welp, eating fast food. Live somewhere that only has one grocery store or cable provider?
I don't really see AI in the form Google is peddling as really all that much different. We're just 'more aware' that the world around us is really guiding us.
I may be somewhere new, and can only see the immediate surroundings without a lot of exploring. And let's be real, in the US, most cities are the same when it comes to restaurants/hotels and such. There are differences in culture but we don't usually see them if we're just visiting. Not in a way that matters.
Google will let me know that the things I prefer back home have equivalents nearby.
Fencing ourselves in is what we do. Who knows, perhaps a digital assistant would help us stick to our personal goals and decisions better. Rather than just having to accept what's there.
Which news-sources are you for some reason very unlikely to encounter?
Now apply a real-time AI filter-bubble, able to also include government policies in its decision-making, onto those questions.
I believe the most important thing in life is thinking. I believe a key element of thinking is looking at "easy stuff", the stuff we just live with every day and don't think about, and for some reason be forced to think about it and make it simple.
Take the Snowden leak. We lived a nice life being the good guys, and that kind of surveillance was publicly thought of as conspiracy theory. Suddenly we were forced to look at what was going on. How much of it are we okay with? On the grounds of what principles and tradeoffs? This is all very unpleasant, but we're all better off for facing those questions and working towards new principles. We take a chaotic gruel of cons and pros and try to hammer them into a few simple principles our societies may function by. For instance, the separation of powers into three branches has served us well.
I fear that we end up in a world where raising such unpleasant questions becomes almost impossible - and we'll never even notice. Not because of AI (I believe AI to be inevitable and fascinating) but because of the way AI is used.
Living a life assisted by an AI, made and paid for by someone else, seems like the epitome of naivete to me.
What I want mainly from Google is more and easier ways to customize my level of privacy. The article touches on the EFF's stance against incognito modes briefly, but it's an important one; I don't want lack of monitoring to be something I start a separate session for, with a logo of a creepy dude implying I should use this only for spying and pornography. I'd like to get as close as possible to an assistant that remembers relevant data on where I go and how long it takes, but ignores my browsing history to psychologically manipulate me into buying things--of course, that needs a different revenue model.
My experience is that Google app and system updates have a tendency to force/trick/nag you into giving up privacy.
As an example, the old weather widget from Google's "News & Weather" was replaced by Google Now. That provided a similar experience for some time, but then stopped working after another update that required search history to be enabled and/or some other setting in privacy control.
A system update also integrated Google into the launcher (Moto G line of phones). I have since replaced the launcher, browser, and search app (all with open source replacements) and the weather app (with a paid service). Convenience has suffered...
Indeed; after a time they required search history to be turned on in order for commuting traffic information to be supplied, which was not initially the case.
At that point I turned it on but deleted the search history each day, until such point as they changed the delete controls to be more of a nuisance.
There is a difference between 'the user requested to deactivate the service' and 'the user paused the service'; he may or may not wish to continue using it. Gray area, right?
It can probably save them from a legal mess if they 'resume' it in future updates.
As was mentioned below, in Google's case I'm particularly looking for ease of use and a UI that lets me turn things on and off quickly and intuitively. That might sound odd, as if I care deeply enough about my privacy to complain about it, but not enough to quickly switch the settings on my Google account, but I do think it matters.
The more specific work I need to go through to set up my privacy, the less inclined I am to do it. If I didn't think I was able to be manipulated psychologically in this way, well, I wouldn't worry about advertising at all! If I were to ever do something politically dissident/personally embarrassing on the internet (not that I ever have of course) I'd go to the trouble of ensuring encryption and being hard to track, but I think it's important that I'm able to say to Google "Hey, I'm cool with you telling me when I should check off work to hit the bar, but it's super weird that you know what I should get my Mom for Mother's Day."
Of course, the simplest way of making a system that's both fine-grained and intuitive might involve... more AI, so I'm not sure how to crack that issue.
The link above is very intuitive. One toggle disables search history in all Google products, another one removes location history, etc. You can also click on "Manage activity" for more fine-grained details in some apps.
Did you check that link? It looks fairly intuitive to me. Given that I'm having a hard time parsing your response. However if there's some specific setting in there that's not available, that would make sense.
I think each individual action is fine, but I don't understand how it's easy to manage privacy as a whole. Each aspect of privacy is separate, and can be finely customized in one particular way per session. There's no way to set different arrangements of settings for different times, or even just a button on mobile to whack that indicates "Ignore my data for the next few minutes."
I think eventually they add so many capabilities and so many fine-grained controls that it becomes impossible to manage the UI or to find the right options. Even looking at Android's privacy settings, it's pretty hard to find anything.
This is by design, so that the majority of users are confused and leave the defaults as is, enabling Google to do whatever they like.
When it first told me it knew how long my commute would take, I realised it was creepy as all hell that the people (in another country, with few protections on data) providing my phone software knew enough about me to tell where I worked and when I was going there.
And it annoys me that on maps, when you turn off all the spying capabilities there's no fallback to local history. You either share it with us or you get none.
Exactly. You're expected to naively trust 'the algorithm', because people are nowhere to be found near your supposedly anonymized data.
Speaking of which, anyone following today's stories about the Yahoo email scandal, the pressure on the folks who own Signal, and recent litigation from Microsoft against government gag orders?
But let's go back to talking about how none of us have free will and how clever Now is.
Failing to provide local history is essentially one of the dark patterns for getting you to turn on their data collection. Most things Google requires it for could easily be done outside the cloud, but by making things depend on the cloud, and then telling everyone you can only do it with the cloud, you convince people that they need the cloud. When in reality, they never did.
GPS navigation devices with much less storage than a phone have been more than capable of what Google Maps offers for a long time. There's essentially no reason for it to do anything with the Internet except getting map updates.
I use Google Maps every single day to get to and from work, simply because it knows how to avoid traffic. 10% of the time it saves me half an hour on my commute.
It may be OK for you but there are at least three real concerns here:
1- There is no way to set your privacy level.
2- The things that Google/Siri/Alexa know about you are not limited to the name of the bar you frequent. They know much more about you. And you don't know what they know. The sky is the limit here.
3- The things that they know are not limited to you personally. They know about you, your family, your friends, and all their interactions. They know very much about the whole society.
1 - Sure there is, Google has fairly fine-grained tracking control. Not perfect, but as another commenter noted, this is a double-edged sword, as _too many_ controls can conversely hinder user control (see Facebook's privacy revamp)
2 - My point is that I personally am OK with Google's AI knowing more about me. I respect that others aren't. I'm not naive in my acceptance.
> Sure there is, Google has fairly fine-grained tracking control.
The privacy control where I disable location tracking, and half a year later when I look in Google Dashboard I see months of travel history?
> I respect that others aren't. I'm not naive in my acceptance.
So what do you do in a situation where your use of Google's data collection also affects people who do mind it? I would not be comfortable visiting a friend with an always-listening device like Alexa or Google's equivalent.
I nuked my paid Google Apps account a couple of months ago. I had enough of their total disrespect for privacy. E.g. conversations that I had in Google Mail (which is protected by the Google Apps agreement) were used for suggestions, etc. in Google+ (which is not covered by the Google Apps agreement and uses data for targeted advertising).
> So what do you do in a situation where your use of Google's data collection also affects people who do mind it? I would not be comfortable visiting a friend with an always-listening device like Alexa or Google's equivalent.
I'd turn it off if/when they ask. I don't think that's unreasonable in the least. I'm not responsible for enforcing everyone's privacy preference, but I also respect them and will accommodate guests in my house.
I don't know about Google, but I know that Siri and Alexa only collect and send data when you ask them to.
You can monitor the Alexa's traffic and see that it only sends data when you ask it to do something, and furthermore, Amazon gives you a log of everything you've said to it and everything it has recorded.
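If you want to try that traffic check yourself, a crude version (assuming scapy, a network where you can actually see the device's packets, and a made-up LAN address for the Echo) looks like:

    from datetime import datetime
    from scapy.all import sniff

    ECHO_IP = "192.168.1.50"  # hypothetical LAN address of the Echo

    def log_packet(pkt):
        # One timestamped line per packet the Echo sends; an idle device
        # should be near-silent apart from keepalives and similar chatter.
        print(f"{datetime.now():%H:%M:%S}  {pkt.summary()}")

    sniff(filter=f"src host {ECHO_IP}", prn=log_packet, store=False)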
And you can also get a log of everything you've ever ordered from Amazon, but there are still loads of other signals which aren't visible to you, as evidenced by the fact that you browse a product on amazon.com today, and now for the next week, you see that product advertised back to you all over the web.
Other signals might include reviews left, reviews voted on, Prime video consumed, audio/video/book samples consumed, Kindle activity, how long you spend on a product page, how you scroll on it, the breadcrumb of how you got there, and surely dozens more.
"Hi Ubercore, Google and your health insurance company here. We are worried that you are frequenting a facility that serves too much alcohol and wings. We care."
I really wish this was up top. I can't believe that the top rated comment on HN is a thinly veiled "if you don't have anything to hide, you don't have anything to worry about".
My impression is that (in the main) younger people have lower privacy thresholds than older people. Not for everyone (of course). Just on average.
My impression also is that most early adopters of this kind of technology are younger people. (again: mostly)
So this brings up an interesting question about the future. As the young early adopters age, what will happen?
a) their privacy thresholds will also increase and they will have an "oh holy crap" moment in the future, where, as a middle-aged or older person who has lived a now much richer and more problem-laden life, they will realize that google (and/or other companies) have what they now consider too much personal information about them,
or
b) they will keep their young-ish privacy thresholds as older people, and in general, across society, people will have lower thresholds than exist nowadays. In other words the world will change.
Conversely, there are a fair number of older folks who are very much okay with government surveillance and a general lack of privacy (particularly regarding our inherent rights) because, e.g., they have "nothing to hide".
My impression (in the main) is that younger and older people have different views on privacy. Older folks might be creeped out by Google knowing their schedule, but okay with the NSA or FBI or whomever reading their emails "because terrorists", whereas younger folks are more likely to balk at the latter, but are very much okay with the former.
Do you think my opinion is accurate? I'm curious because to some extent I completely agree with you.
I don't think A will necessarily happen for most. We are a product of our experiences. If, as we grow up, we get comfortable using always-online technologies and never suffer any consequence from those experiences, I don't see what would motivate us to suddenly doubt these technologies. I am confident B is the most likely situation; that's how societies move forward so quickly with tech.
Devil's advocate says that if you want B to happen, you should start working on more Ashley-Madison-style hacks, to enable more people to have their "oh holy crap" moments.
My experience has been quite the opposite in terms of convenience and relevance. I commute by car and train (for some parts of the same journey), and Google Now, Google Maps, etc. have been totally useless there: telling me about traffic jams when I'm on the train, not telling me about train delays, and so on. It now somehow thinks my home is at the train station, and it tells me the last bus home is leaving soon when I've been home for hours. Also, Google Now's insistence on bombarding the leftmost pane on my phone with the most click-bait articles ever, often about things I had a passing interest in months ago, is just laughable.
I would be glad to give Google some of my very precious privacy, implementing countermeasures like multiple and burner identities as needed, if I thought they had any chance of actually providing real value. So far they have failed miserably, and I am not sure the economics of providing a really useful service for free, with marketing information as the only source of income, work now or will for a very long time. You'd need strong AI to actually help my day-to-day life, with solid non-obvious guessing based on many very local and specific factors. I guess as long as people kind of believe that this future is coming, they may tolerate the invasion and forget about the promises.
But as mentioned above, if you don't want Google to potentially track your behaviour and preferences, "don't use their services" encompasses "don't send email to anyone with a gmail address".
Which is the digital equivalent of "Don't walk out your front door".
My whole point in posting originally was to bring up the fact that the interesting discussion of different thresholds and having a spectrum of options is being lost in binary "privacy vs complete corporate control". It's not binary, and it does everyone a disservice to act like it can be for everyone.
We live in a world of Privacy Theater. Everyone is always complaining about "their privacy" as if they still have any, and it's kind of amusing.
Your phone's microphone can be turned on remotely and listen to what you say (I know several startups that do this). Security/traffic/drone/satellite cameras are everywhere. You are being watched literally all the time, but to think the watchers actually care about your personal life indicates a pretty inflated view of self-importance. We're starting to complain about it about 20 years too late.
Which IMO is a valid thing. If you are sending something to someone, they are free to do what they want with it. You can't force them to treat it differently.
If I'm on Gmail, then you need to talk to Gmail to talk to me. It's a tradeoff you'll need to make. If you want, you can GPG-encrypt the email, but there is nothing stopping me (legally, morally, or otherwise) from just decrypting it and replying with the contents, or saving the decrypted message in my Google Drive.
I dunno, it's not quite that black and white. If I send someone (say) a poem I wrote, even if unsolicited, I retain the copyright and the moral rights in that poem. The recipient doesn't get carte blanche to reproduce it just because I sent it to them.
So, by the same token, does Google have the rights to profile people even if they haven't consented to that? One answer, as with copyright, is to see what the law says; and it's very possible that the law says no, especially in the EU. (I've not researched it; others will doubtless know much more than I do.)
Or Google could just seek to do the right thing and not profile people unless they've opted in. But my definition of the right thing may well be different to theirs.
The sad part is that the user turned off Google Now because he didn't want google to know about the bar he visited. Google was tracking and recording his location before Google Now, he just didn't know it. It's still tracking it after he disabled Google Now.
Yeah, but that is also the equivalent of 24/7 surveillance of all locations you visit. Google will end up figuring out whom you sleep with, etc. from that information.
Pretty much your only privacy is in your head at that point.
I'm not sure that is a "threshold" of privacy but rather a "I am okay with 24/7 surveillance of all of my activity."
Hypothetically: you express radical political ideas to your friend with the expectation that your statements remain in confidence, but Google was listening. Now your feed recommendations steer you further down the path Google thinks you were already on. You are ready to attend a protest and perform civil disobedience, as Google now knows based on your interest in what it has been suggesting to you. It suggests (as Facebook does now regarding making events for birthday parties, etc.) that you and some other people organize that protest, and, because it said so, you do it. Except it's a trap: the police's Google feed tells them that some undesirables have planned a protest, and you're imprisoned.
Is this story unrealistic, or has it already occurred?
I remember the huge smile I got on my face the first time Google Now picked up on the fact that I went to the same bar every Wednesday evening.
One Wednesday afternoon, at work, I got a notification saying "Travel time to the Lion & Crown". The first thing that ran through my head was "oh my god, I'm living in the future".
I am actually quite uncomfortable that my stock Android is making suggestions on how long it should take me to get home or to work (when I have never explicitly told it which places are my home or work).
The problem is that I want to use Google Maps so what choice do I really have?
Sure, I use a dedicated Gmail account for my phone, but that really does not help much.
I would not want Google knowing I sometimes drink too much, or that I do so and get behind the wheel of a car. Easy inferences it could make, given the time I spent at the bar, and the purchases on my credit card. That could even have consequences for the cost of my auto insurance. Edit: and health insurance.
I actually thought the most interesting point this article raised - for me at least - is the implicit branding associated with the "OK Google" command. All privacy concerns aside, if I'm going to have a "personal Google" I want to be able to thoroughly personalize it.
> but I think the fact that it's different for everyone is being lost in articles like this.
I think the fact that it's different for everyone is completely and utterly obvious. It's clear the author doesn't think it's OK, but that's his/her opinion.
This location history is really bad though. I added a test gmail account to my device a few weeks ago, but didn't remember location tracking was a per-account setting - now Google has nice big logs to hand whoever wants them and I can only delete them on a day-by-day basis (from the Android app at least).
Extremely annoying. This sort of thing should not be acceptable, an honest mistake results in every place I've been being logged in such a way that anyone with access to my Google account, access to Google servers or with a subpoena can have my full location history in a matter of seconds.
This needs to be a big red option every time you add an account "we're gonna log everywhere you go and hand it over to whoever we feel like, you cool with that?". It'd be different if the log and analysis were done only on my device, but doing this on Google's servers is completely unacceptable by anyone with even the weakest standards of privacy.
What makes you think that deleting location history actually deletes the history? Reminds one of the group of students called Europe v. Facebook [0]:
> Schrems described the file obtained through a legal request as a 500MB PDF including data the user thought they had deleted. The one sent through a regular Facebook request was a 150MB HTML file and included video (the PDF did not) but did not have the deleted data.
Yeah, that's what I remembered, but it seems they replaced "Delete Location History" with a "Manage Activities" button which opens Google Maps and shows you location history, allowing you to delete it but only one day at a time.
EDIT: Okay, found it. Follow your instructions then open that "Manage Activities" -> Menu -> Settings -> Scroll down -> Delete All Location History.
You _must_ do this for every account if you have multiple.
I guess it could be both, but I view it first as a setting of the account, since you can also be location-tracked by a browser, or by your Wear device. That would be my first guess as to why the setting lives there, but I guess it can be seen both ways.
Just having a mobile device means you're being tracked. In Australia, the Government now also has access to this data from the network providers. The only way to be free is to not carry these devices.
What I think is interesting is that many of us nerds have probably innocuously fantasized about having a Star Trek-like AI assistant with us, but now that they're taking the first steps towards that, we're starting to realize that in order for it to do everything for us, it has to know everything about us, too.
Nobody was thinking about "the cloud" back in those days. Back then, your data and your programs all lived and ran on your own computer in your home. Most people didn't go online, and if you did, it was mostly to read and download data to use locally on your own computer. Connections were intermittent and slow. The idea that your own data would be stored online was almost unimaginable; even using network-dependent applications like usenet or email involved downloading everything first before using it. Online applications were hardly even dreamed of.
Our expectations of how "Star Trek AI" would actually be implemented were completely different than how highly connected cloud-based services like Google Assistant work today.
Anyway, the point being, if the assistant lived entirely in your own computer, it would be entirely different. Most people are not concerned about what their "computer" knows about them; they're concerned about what companies and their employees do.
> Our expectations of how "Star Trek AI" would actually be implemented were completely different than how highly connected cloud-based services like Google Assistant work today.
The trope-namer (Star Trek AI) was a ship-wide AI; considering the ship sizes, it is definitely closer to the "cloud" model and not limited to private instances on officers' bunk/bridge terminals/tricorders. Perhaps a hardcore Trekkie could answer this question: is there any canon that defines the AI's scope? Is it restricted to just one ship, or could it possibly be a Federation-wide presence with instances on ships?
The AI for the Enterprise D was run from three computer cores on the ship (two in the saucer, one in the engineering section), made up of isolinear chips subjected to a small warp field (in other words, it computes using light moving faster than the speed of light). Subspace communication bandwidth is too limited (and potentially affected by latency, since it has to travel through repeaters throughout the galaxy) to provide realtime cloud computing as we experience it.
There are some canon exceptions to this (such as in Nemesis, where the subspace communication interruption affected the star charts), but even then the functionality of the ship was not impacted.
The Star Trek ships are very analogous to our own ocean-bound ships, where satellite communication is possible almost anywhere, but they don't rely on it.
So, yes, the AI is completely confined to the ship.
What about when someone was (permanently) transferred between ships?
Was there ever an indication that their AI-level data was transferred along with their personnel file? For example did the replicator know what food to offer them on day one?
If so, then it's seems reasonable to assume that the Enterprise's AI data was backed up at Federation HQ during routine maintenance, and that the "IT department" at Federation knew exactly what you liked to do on the Holodeck.
Through specific indication from the user. Recall the constant utterance of "tea, earl grey, hot"?
Ultimately, I imagine the user's information (documents, etc) was passed directly between ships, or through (as you say) Federation HQ.
> Enterprise's AI data was backed up
Ultimately, I think this is where AI will differ from ML. An AI won't have data that isn't a part of the AI - i.e. you couldn't separate out information specific to Picard from the rest of the AI code. An AI might be able to "scribble" down some notes about interacting with Picard and pass them off to another ship's AI, but the second AI would never treat Picard quite the same way as the first, even with those notes.
This stems from my belief that how ML interprets data is different from how an AI would. If you were to copy all of the data used to build an ML model and apply it again, you'd end up with the same ML model. An AI, on the other hand, if built twice from the same data, would end up creating two separate AIs.
I never got the feeling that there was a lot of mistrust of other federation people.
For example, the Star Trek universe didn't seem like a universe where you had to shop around for a trustworthy mechanic who wouldn't overcharge or over-diagnose (e.g. headlight fluid).
Maybe the implicit trust of other people was integral to the AI being successful in that universe.
It's definitely a per-ship thing, because there are several episodes where they have to down/upload updated data to/from the Federation (at least in TNG canon).
I also think the comparison isn't perfect, because Federation vessels (in my mind) are similar to today's Navy vessels. All onboard systems are connected to other onboard systems, but opsec demands the ship's systems not be influenced by external actors.
And a big point is that on a Navy-analogous vessel, it's reasonable to assume most activity on the ship is being monitored, for the safety of the ship and crew. The fact that the ship knows where everyone aboard is, or that they can pull up the full Starfleet record for anyone who's ever been in Starfleet, is not surprising; this is the military, and records and accountability are a big deal. But there's nothing to indicate that Federation civilians are monitored to that extent, and I'd argue enough episodes are strong on fundamental individual rights that it's hard to imagine Federation life for civilians being a surveillance state.
> But there's nothing to indicate that Federation civilians are monitored to that extent
The clearest example of extensive off-starship monitoring (within the Federation) that I can think of in TNG is a civilian (though, to be fair, a civilian in a role analogous to a "defense contractor"), Dr. Leah Brahms.
> I'd argue enough episodes are strong on fundamental individual rights, that it's hard to imagine Federation life for civilians being a surveillance state.
Actually, I'd say that its quite plausible that the Federation is a "benevolent surveillance state", that is, one with pervasive monitoring but a very low incidence of "serious" abuse (that is, the kind that substantially limits practical liberty -- casual intrusions on privacy may be more common.)
While the Federation seems keen on "fundamental individual rights", it doesn't seem to exactly mirror, say, some modern views on what those rights are -- and not just in terms of privacy.
I saw you give the Brahms example in another comment before I posted, so I am not surprised to see you bring it up. But I think you answered your own point. She worked heavily on the Galaxy-class starship's warp drive, which would be a relatively classified project that they would be very unhappy if the Romulans or any other hostile parties were intercepting data on.
And arguably, if she was working at Utopia Planitia Fleet Yards on the Galaxy-class project, she presumably worked on a Starfleet orbital facility (technically, a number of facilities) over a span of years, where certainly enough data would be collected to make a poor replica of her personality, as in the show. Outside of basic biographical data, I don't see anything suggesting that she was being monitored in her civilian life.
La Forge, having interacted with that hologram extensively, and having surely read Starfleet's records... apparently didn't know she was married.
I feel your message is the most important in this thread because it's the crux of the whole concern about privacy and the cloud.
Where technology has failed us most is in the utterly stagnant evolution and maturation of secure private networks. The following is a utopian notion, but had private networks seen as much R&D as the public clouds, they would be significantly less cumbersome than today's clunky VPNs. Imagine all of your devices collaborate directly with one another and with you on your own secure private network—no central cloud servers needed. Your personal assistant is software running on a computer you own rather than a third-party's centralized server.
I still feel this ideal will eventually be realized, but for the time being, no large technology company is willing to take the necessary risks to buck the trend of centralization.
The biggest fiction propped up by centralization and cloud proponents is that it would be impossible to provide the kind of utility seen in Cortana, Siri, Google Assistant, Alexa, et al. without a big public cloud. A modern desktop computer has ample computational capability to convert voice to text, parse various phrases, manage a calendar, and look up restaurants on Yelp. Absolutely nothing the public clouds provide strikes me as something my own computer would struggle to do (to be clear, I would expect a local agent to be able to reach out to third-party sites such as Yelp or Amazon at your command in order to execute your desires, but it would do so directly, not via an intermediary).
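As a rough sketch of how little cloud the basics actually require, here is a toy local-only intent parser in Python. Everything here (the command grammar, the handlers) is invented for illustration and runs entirely on the machine; no real assistant's API is implied:

    import re
    from datetime import datetime

    # Toy command grammar for a local-only agent; patterns and handlers
    # are invented for this sketch, not taken from any real product.
    COMMANDS = [
        (re.compile(r"what time is it", re.I),
         lambda m: datetime.now().strftime("It's %H:%M")),
        (re.compile(r"add (?P<item>.+) to my shopping list", re.I),
         lambda m: "Added '%s' to a list kept in a local file" % m.group("item")),
    ]

    def handle(utterance: str) -> str:
        """Match a transcribed utterance against the local patterns."""
        for pattern, action in COMMANDS:
            m = pattern.search(utterance)
            if m:
                return action(m)
        return "Sorry, I didn't understand that."

    print(handle("What time is it?"))
    print(handle("Add milk to my shopping list"))

Pair that with an offline recognizer and a local calendar file and you have the skeleton of an assistant that never phones home.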
A few years back, when Microsoft was at the beginning of its Nadella renaissance, I had hoped it would be the first technology titan to disintermediate the cloud and make approachable and easily-managed personal private networks a thing. Microsoft's legacy of focusing on desktop computers would have made it well-situated to reaffirm your home computer as an important fixture in your multi-device life. They could have co-opted Sun's old bad tagline: "Your network is your computer." But they elected to just follow the now-conventional public cloud model, reducing everyone's quite-powerful home computer to yet another terminal of centralized cloud services. Disappointing, but I think it is ultimately their loss. I suspect a lot of money is on the table for someone to realize a coherent easy-to-use multi-device private network model that respects consumer privacy by executing its principal computation within the network.
>Where technology has failed us most is in the utterly stagnant evolution and maturation of secure private networks.
Not just secure private networks, but secure and programmable personal computing in general. The amount that I can actually do with my workstation PCs, let alone laptops or mobile phones, is now thoroughly restricted compared to problems that require a full-scale datacenter.
I originally enjoyed computing because, so to speak, it was an opportunity to own and craft my own tools, rather than being forced into the role of consuming someone else's pre-prepared product. Now we're being boxed into the consumer role in computing, too.
People at work keep asking me why I reinvent a few wheels here and there on my personal projects. "Why are you wasting your time with WebRTC? Why are you not using phaser.io?"
idk man. Computers are powerful. I like seeing what I can do with them.
And with UEFI-level code on mainboards and baseband software on phones, the era of "owning" a computer is basically over. All you can do is hope and trust that the manufacturer isn't co-opting your experience or data somehow. As someone who grew up hacking on a C64 in grade school, and never stopped, I find this utterly depressing.
Let me preface this by first saying that I absolutely can't wait to have my own personal home automation, AI assistant, etc. on prem without the cloud:
I think that as far as the nascence of these features goes, the cloud model will beat the on-prem features any day of the week for several reasons. Lack of configuration to set up, ease of use from anywhere without network configuration, etc. are table stakes. But the biggest at this point is the sheer amount of training and A/B testing data you can ingest to determine what is useful for your end users.
The velocity of cloud-based products is nothing short of amazing and I doubt that on-prem will compete with the feature set and ease of use of always connected solutions until there are feature-complete, mature cloud versions to then bring in.
As we just learned with Yahoo, though, once the ML models have been trained, they can be disseminated and used without the need for "cloud-scale" data or compute resources.
And, for better or worse, Dragon's speech-to-text is pretty damned good after a rather minimal amount of training.
I don't think there's anything stopping voice and intent recognition from coming back to our personal machines, other than the ability to keep making money from having it go up to the cloud.
The cloud is all about scale and only having to rent resources when you need them. If you have your home server you have to buy and maintain and pay for those resources at all times. When you make a quick cloud request you only "pay" for the resources you consume.
When I was working on Google Search what really astounded me is how we could leverage hundreds of machines in a single request and still have virtually no cost per search. The reason was that each search used a tiny amount of the total resources of those machines and for a very short time. A total search might have (made up numbers) one minute of computation time, but spread across 200 machines it only takes 300ms from start to finish.
That's the benefit the cloud will provide. You don't want to have a 1000-machine data center available at all times to store billions of possible documents and process your requests with low latency. If we went to a private-network model, I fear that the turn-around time would be a lot closer to a human assistant: you'd ask it to do things and then it would get back to you sometime later (seconds? minutes? hours?) when it had finished its research and come up with an answer.
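To make that arithmetic concrete, here is a toy scatter-gather sketch in Python, using the made-up numbers from above (200 shards, ~300ms each). Total compute is still about a minute, but wall-clock latency is roughly the slowest shard, not the sum:

    import asyncio
    import random
    import time

    # Toy scatter-gather: 200 "machines" each spend ~300ms on their slice
    # of the index; wall-clock time is ~max(shard time), not the total.

    async def search_shard(shard_id: int):
        await asyncio.sleep(0.3 + random.uniform(0, 0.05))  # simulated work
        return "hits from shard %d" % shard_id

    async def search(query: str):
        # Fan out to every shard concurrently, then gather partial results.
        return await asyncio.gather(*(search_shard(i) for i in range(200)))

    start = time.perf_counter()
    results = asyncio.run(search("example query"))
    print("%d shards answered in %.2fs" % (len(results), time.perf_counter() - start))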
> The idea that your own data would be stored online was almost unimaginable
Except that's what I did for many years, using a computer only as a terminal for an AIX mainframe. My mail was there, I browsed what there was of the web, used gopher, wrote programs, all stored there.
On top of that, the cloud we have now is commercialized, opaque and constantly under pressure to comply with a government that many distrust, for what I would say very good reasons.
I would like to say that the cloud we have is a privacy concern because we don't know the full scope of data collected, nor what happens to it, nor do we own any of "our" data once it's in the cloud. But not every cloud would have to be that.
There's a perfect world where one wouldn't have to be paranoid about this stuff, but it's not what we have right now.
This right here. When I first heard about android, I imagined an open system I could tinker with like I can on my PC.
Instead I get the current evolution. I want a 3rd party.
The same thing applies to my everyday usage: I'm still on Win 7, and short of moving to Linux (yes, I should have already), I can't upgrade without becoming a product.
I want what we all imagined and dreamt of, and I pay for it.
What I won't do, is become a product.
Instead I'm stuck with multiple fake accounts on Gmail, using a pseudonym on everything (including programming contract sites such as Upwork), just to keep some small iota of privacy and still enjoy the benefits of what we all want.
IMHO we need a new major party to emerge, one that will charge an initial fee (like Windows 7) and let us do what we want with those services (with caveats, of course).
But my main coding (admittedly amateur, and earning very little) uses .NET. On top of that, most of the games I play to relax are Windows-only (as far as I know, for most).
I keep meaning to make time, but it just hasn't happened yet. I don't get paid as a programmer (I'm an English teacher), so I need to spend my free time earning money with what I know.
As always, one day when I have money to spare (or time, which is basically the same thing, haha).
Without trying to look like an arse: why? I have been on PCs since DOS; I can hack my way around any PC.
I will fully admit I have not spent enough time on a Mac to figure out the file system, but from what (little) I have seen, you're not in control.
I like my PC because I can see what sub-processes are running, who is taking up how much memory, install things where I want, and if worse comes to worst, manually change how Windows runs. (I apologise if you can do all that on a Mac; as I said, I don't have the experience to say - I bounced off it hard.)
So go Hackintosh. At least you don't have to distrust your OS vendor.
To answer your questions, you're in complete control with macOS, you can turn off SIP, turn off Gatekeeper and install whatever kernel extensions you want. Apple doesn't snoop on you like with Win10 telemetry.
The problem with Mac is that one needs to have Apple hardware, you can't even change your RAM sticks to the ones that Apple didn't approve. There is nothing (apart from maybe some fancy Adobe software) that you can't run on Linux just as well or even better.
Maybe people could imagine a world where the adage "if you aren't paying for the product, you are the product" would be widely relevant, sure.
But not a world where that phrase would be irrelevant, simply because today if you're paying for the product, you're still the product.
I think there's an additional nuance, that of Google knowing everything about us. If I hacked together my own home automation AI system it would need to know everything about me too and that worries me far less.
It's like saying "People would never break into a bank, when they could break into someone's house and steal their stuff"
It may be easier to break into someone's home-brew system, but generally it would be unlikely to happen unless you were being otherwise targeted. Whereas Google has a lot of users' data, which could make it a more attractive target.
> It may be easier to break into someone's home-brew system, but generally it would be unlikely to happen unless you were being otherwise targeted.
Just how "home-brew" are we talking here? If there's any web-facing code that you didn't write yourself, whether commercial or open-source or whatever, that's a target for attackers that just scan everything looking for known vulnerable services.
If it is entirely custom, I'm fairly sure there are a few classes of common security errors that can be reasonably well tested for without direct human involvement. Which brings back the threat of attackers just scanning for all available targets.
If you're a "person of interest", you'll be attacked either way.
If you're just a regular person, like me, you won't likely be targeted. But my chances of becoming collateral damage are much higher in a centralized system: when it's broken into, my data has a chance of being siphoned off along with the data of someone less ordinary who was the target of the attack.
It's wider than this. A "person of interest" will be attacked either way. A "normal boring person" won't be targeted directly, but may be swept up in an attack on a "person of interest".
But there's a third case: "normal boring people" become interesting just by being together in a big group, even if no particular "person of interest" is among them.
Only if you hacked together your own search, maps, calendar, etc. as well.
Maybe someday that will be a realistic endeavor, but it would take a lot of effort to set up and maintain your own personal versions of all of Google's services, and to integrate them.
I think it's possible to create a hybrid whereby your personal AI has access to those external systems to check public data (maps, search) and cloud data (calendar, mail).
But it stores the private data about your location, your searches, your purchases. This could even be an encrypted, private, fire-walled bit of cloud rather than a robot in your house.
The point being your AI is serving you. And you can delete/edit this private data (or even the whole AI) if you wish.
This is spot on. I'm okay with having my home automation system tap a weather API to collect data. But I don't need an Internet service to know the thermostat settings in my house.
I've designed my home automation system on the concept that the only route to the Internet is my computer (and therefore, my home automation software). My computer is a secure, well-managed intermediary that can store my data, and decide when and how to receive and send data to the Internet.
The idea of dozens of Internet-connected devices in the home is _terrifying_ in comparison, especially considering that badly-secured IoT devices are now powering some of the biggest botnets out there.
My light switch, however, cannot talk to the Internet. It has local-only communication protocols that are simple. It knows how to be told to turn the lights on or off or dim or a handful of other settings, but it's literally incapable of doing anything else... and why would I want anything different? Why should my light switch have Bluetooth and Wi-Fi and software updates and a miniature flavor of Linux... It's a switch!
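For what it's worth, a minimal sketch of that gateway role in Python. The weather call assumes the free Open-Meteo endpoint; the thermostat call is an invented stand-in for whatever local-only bus the devices speak. Only the PC ever opens an Internet connection:

    import json
    import urllib.request

    def fetch_forecast(lat: float, lon: float) -> dict:
        # The gateway, not the thermostat, talks to the weather API.
        url = ("https://api.open-meteo.com/v1/forecast"
               "?latitude=%s&longitude=%s&current_weather=true" % (lat, lon))
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def set_thermostat(target_c: float) -> None:
        # Invented local-only call: e.g. a serial or LAN message to the device.
        print("[local bus] thermostat -> %.1f C" % target_c)

    weather = fetch_forecast(52.52, 13.41)
    outside = weather["current_weather"]["temperature"]
    set_thermostat(21.0 if outside < 15 else 19.0)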
Your description sounds like a very intelligent cache/proxy.
When my AI can talk directly to your AI, we can transfer mail etc without cloud services. If it knows our social network, perhaps it can remotely store encrypted backups of our data, but only with people who we already trust.
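The storage half of that is already easy to sketch; here is one way, assuming PyNaCl's sealed boxes (key exchange and peer discovery are hand-waved). A friend's machine can hold my encrypted backup without being able to read a byte of it:

    from nacl.public import PrivateKey, SealedBox

    # My keypair; friends' agents hold only the public half.
    my_key = PrivateKey.generate()

    backup = b"calendar, contacts, location history..."

    # Anyone holding my public key can seal a blob that only I can open.
    ciphertext = SealedBox(my_key.public_key).encrypt(backup)
    # ...a trusted friend's machine stores `ciphertext`, learning nothing...

    # Restore later (the private key was kept safe somewhere else).
    assert SealedBox(my_key).decrypt(ciphertext) == backup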
Hosting your own mail, calendar, cloud storage, and even a mapping application is quite doable right now. There are open-source projects for that, and you can have your own physically secured machine in a co-location facility for a reasonably small sum.
It's just not easy to do, both in setup and maintenance effort. Most people would not care about their privacy nearly enough to justify that.
My recommendation for people to look into here is Sandstorm.io. It runs either on-site or in the cloud, and it already can replace a lot of what one uses cloud services for. In time, hopefully open source projects will develop "intelligent" apps that can then tap into that data and do more with it, still within the confines of your secure, private storage.
You don't have to develop, set up, and maintain all that by yourself. What we need is a user-friendly (dead-easy install/setup, no administration) way to use alternatives to these services. Some of these services exist, some don't, but it's a very achievable goal.
Well, the missing part is the dedication to ideals and to the greater good of all life that was supposed to be core to the Federation. I realize Star Trek is fantasy, but the reason people are more at ease with the omnipresence of technology in Star Trek is that you see people living by these humanitarian and noble ideals. People fight and die to defend them. The right to self and privacy and protection is held very high in the ST universe, even if challenged.
When's the last time Google risked itself or its business, or any tech CEO risked their livelihood, for the sake of the greater good? The problem isn't necessarily the knowing-everything part; it's who does what with it. I can't really think of any company or person with influence in tech that'd be willing to dive onto that bombshell to protect us all.
The former CEO of Qwest, a massive telco, spent years in prison for insider trading. He says this is because he resisted the NSA's demands to tap Qwest's network and hand over customer data.
The big difference is the Star Trek computer wasn't using its data about Kirk to provide him "enhanced advertising experiences", there wasn't a big corporation controlling the computer and no government was accessing the computer's information.
A truly user-aligned AI assistant would be great. Ideally in the future these things will not be tied to indirect business models, but rather will be something you buy and all data/services will be under your control.
Captain Kirk was a government employee. It's implied the government/Starfleet could access the ship's logs.
In the Star Trek world they had no advertising because they were a communist society. Everyone dressed the same or slightly differently based on rank. It's interesting how the new movies play over that.
In Star Trek you couldn't choose your AI. In our world you can. At the start of their development most of them are targeted at selling you stuff - but the industry is young and who knows where it will go.
It would be more accurate to say that most of Star Trek depicts life in Starfleet, a pseudo-military institution (Roddenberry insisted that it isn't, while war isn't Starfleet's main purpose it shares enough traits with modern armed services to suffice for this discussion). There is always a lesser expectation of privacy for military personnel, even in the US armed services.
While there are elements of communism depicted in the Federation as a whole, those are facets of a global post-scarcity society that has somehow evolved beyond the less "progressive" bits of human behavior. I'd argue that the biggest fantasy of Star Trek isn't warp drive, but the notion that humans are somehow less violent than Klingons.
Indeed, their efforts to run the "Klingon culture is bad" thread got comically awkward in TNG, where they also really hammered on the multiculturalist "all cultures are equally valid" theme. All cultures... except for Klingons, whose barbaric and violent customs are just obviously inferior.
Yes. Also, running the AI on a local computer made sense because there is no incentive to run it in the cloud, since nothing is gained. Star Trek is set in a post-scarcity economy. We will get there once basic income becomes the norm in all countries.
Does Star Trek even have a currency? It always feels like a socialist/communist thing to me.
You cannot compare the world's biggest seller of advertisement space with the ST universe. The motivations aren't aligned: Google/Alphabet want to make sales based on my information.
I agree that I found these oh so clever AI fantasies interesting in my youth, still do to a degree. But I always pictured the data being held inaccessible to humans in general ("Where's my wife right now?") and not in the hands of a golden few with no oversight.
Star Trek's world seems to be a 'utopia' of scientific-military governance. Most of the key players have a military rank, wear color-coded uniforms, and appear to be under 24/7 surveillance (which is OK, since this is a very nice and progressive scientific-military government, and you know, War On <scary-alien-species> and all that. :)
I don't remember seeing any Star Trek episodes that showed people under surveillance in their private quarters or in bathrooms.
The public areas which were under surveillance on Star Trek tended to be only on military ships and star bases. I don't remember seeing much surveillance in public areas on planets. There certainly wasn't the sense that everyone was under surveillance on every street and in every shop, unlike most people in major metropolitan areas on Earth today. Nor was there anything on Star Trek like the ever-present spy satellites that can see in great detail anywhere on Earth today.
For the public areas which could be observed through cameras on Star Trek, the surveillance seemed mild compared to today because of Star Trek's lack of massive computers and artificial intelligence analysing what is seen for anomalies, using facial recognition, constantly recording everything and having those recordings instantly available for playback, sophisticated search, and computer analysis.
The reading and viewing habits of Star Trek denizens weren't recorded and analysed, unlike those of many people on Earth. Their positions weren't tracked wherever they went, unlike those of many people on Earth.
The so-called "24/7 surveillance" of Star Trek was very limited and even quaint compared to what we live under on Earth today.
In the original show there wasn't a currency but in order to have aliens who exhibit avarice and the worst part of capitalism they had to include a currency (latinum, I think it was called).
Incorrect, in the original series, there was a currency ("credits") that was explicitly referenced several times; it was also referenced in at least one, and possibly more, early TNG episodes.
Sometime in the TNG era, Roddenberry laid down an edict that money, including the "credits" that had been repeatedly referenced previously, did not exist in the federation, and so they weren't mentioned again.
> Currency was later re-introduced to enable the Ferengi race to be portrayed as greedy merchants.
I don't think that's really accurate; the Ferengi were portrayed as greedy merchants focused on profit starting fairly early in TNG, without direct reference to currency (gold -- not the later "gold-pressed latinum" -- was mentioned, IIRC, as an item of interest, but not in any context which implied it was used as currency); I think gold-pressed latinum was introduced as a currency in DS9 because DS9's role as a commerce hub was central to the theme of the series, and having currency just made telling stories about that a lot more convenient.
It wasn't a currency in the same way that gold isn't a currency. Latinum is supposed to be a substance which cannot be replicated unlike most things so it's valued by, as you say, the avaricious.
> It wasn't a currency in the same way that gold isn't a currency.
Throughout much of history, gold in standardized sizes was a common form of currency. Gold-pressed latinum in standardized "slips", "strips", "bars", and "bricks" is exactly the same thing.
Post-scarcity in some ways, perhaps. In many other ways, they're not.
Star Trek still had merchants who sold various wares. That would not be profitable if nothing was scarce.
They still had planets that lacked necessary medicine, requiring The Enterprise or some other ship to go on mercy missions to deliver the meds.
The Star Trek universe had pleasure planets which had highly desirable things that other planets did not.
There was clearly a shortage of starships and crew, as The Enterprise explored alone and not in a fleet, and couldn't just create a hundred others to help it when it was attacked by some alien enemy.
The Enterprise couldn't even use their on-ship replicators to make themselves some dilithium crystals (fuel) when they ran low.
It doesn't though, does it? The only reason this is a problem is that Google's business is still advertising and they act like our problems can be solved by tools made to sell more ads. The moment I could buy an AI service for like $10 a month (it had to be good), I'd trust them with using my data responsibly.
The Star Trek fantasy is, "Computer, what were the principal historical events on the planet Earth in the year 1987?", and it could totally answer that without sending your entire fucking message history to google for deep AI inspection.
> The Star Trek fantasy is, "Computer, what were the principal historical events on the planet Earth in the year 1987?", and it could totally answer that without sending your entire fucking message history to google for deep AI inspection.
That's part of the Star Trek fantasy. But so is, "Computer, locate Commander Riker" and "Computer, use personal logs and personality profiles from compiled databases to create a personality simulation of Dr. Leah Brahms."
I think people also forget that the Star Trek AI was in a semi-militarized scenario where efficiency and information greatly outweighed individual privacy needs.
I think most fantasies are okay with the anthropomorphic AI assistant knowing everything about us, but don't involve the AI transmitting all of its data back to "the cloud", where advertisers can mine this data or the NSA could listen in with a secret gag-ordered wiretap. Probably wishful thinking, but maybe one day a privacy-first company will dip their toes into this arena.
Google doesn't "spill their beans" to third parties-- what Google actually sells is the opportunity for third parties to be included in the advertisements Google is targeting to their users.
Google has a strong incentive to not allow their aggregated user data to leave Google-- the behavioral data Google collects is the reason why Google is valuable; if they start shipping that data off to third parties, suddenly the third parties don't need Google anymore.
(Same with Facebook-- they're not "selling" your data; they're selling the opportunity to target you based on your data, but the data itself is too valuable to Facebook to sell.)
> Google doesn't "spill their beans" to third parties
Except the government, which gets unfettered access. I'm not into conspiracy theories, but I'm definitely not a fan of this (and it goes for all social media and technology companies).
No, it's safe to say all major tech companies have NSLs and are required to allow the government to search and request data; it would be foolish to think otherwise. [1]
Yes, but this is all we know about now; every day there is a new revelation about what the government has access to that we previously thought they didn't. I think it's naive to assume they don't have access to all of it.
You still have not provided any evidence for your first claim that the government has total access to those companies. And your last link also doesn't say what you think it says.
It really is the big problem at the moment with the cutting edge of AI.
ML relies on large data-sets and if anyone tried to release a personal device it simply wouldn't even work, let alone compete with the mass surveillance google/ms/amazon are bringing to bear.
Unless the state-of-the-art in AI suddenly morphs, we seem to be stuck between giving up our privacy or having vaguely intelligent AI.
I personally fall heavily on the privacy side of stuff, but I can see the intellectual and commercial appeal of pretending it doesn't matter in order to get there.
This is simply a matter of AI not being advanced enough at the moment. Take how a human learns: we acquire experiences and information over a long childhood, and from this "data set" we can make millisecond predictions of future events based on completely new data.
What needs to happen is a company needs to come along and create AI that is trained off of generalized information... some kind of socially accepted public data set... then the trained core is sold as a seed to individuals who then feed it their personal data.
It'll be the equivalent of buying an AI "teenager" and slowly training them to be an "adult".
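One way to picture that seed-then-raise idea is with an incremental learner, sketched here with scikit-learn's partial_fit. The split into a "factory" phase and an "at home" phase is the assumption being illustrated, and the data is random stand-in noise:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    classes = np.array([0, 1])

    # "Factory" phase: the vendor trains the seed on a big public corpus.
    seed = SGDClassifier()
    X_pub, y_pub = rng.normal(size=(1000, 8)), rng.integers(0, 2, 1000)
    seed.partial_fit(X_pub, y_pub, classes=classes)

    # "At home" phase: the same model keeps learning incrementally from
    # personal examples that never leave the device.
    X_me, y_me = rng.normal(size=(50, 8)), rng.integers(0, 2, 50)
    seed.partial_fit(X_me, y_me)

    print(seed.predict(X_me[:3]))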
I think the fallacy here is that a single person can't create a large enough dataset, while I contend they can. Combined with tools like pocketsphinx, I think it's very doable to have a privacy-conscious AI system.
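The pocketsphinx piece is usable offline today. A minimal sketch via the SpeechRecognition wrapper (this assumes `pip install SpeechRecognition pocketsphinx` and a local WAV file; the filename is made up):

    import speech_recognition as sr

    # Fully offline transcription with CMU PocketSphinx; no audio ever
    # leaves the machine.
    r = sr.Recognizer()
    with sr.AudioFile("command.wav") as source:
        audio = r.record(source)

    try:
        print(r.recognize_sphinx(audio))  # runs locally, CPU only
    except sr.UnknownValueError:
        print("Sphinx could not understand the audio")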
As sensors become richer and the data becomes more valuable to the ML, consumers are becoming more aware of their privacy.
That means to get to a 'good seam' in the future instead of trawling through trash, you're going to have to convince millions of people their interests won't be affected.
That means in time there is an opportunity for a Google-killer with a different business model, one not based on using the raw data, or one based on the use of intelligent agents. Google goes down because its stakeholders are dependent on getting at the raw data.
I think we just envisioned a highly anthropomorphized AI: essentially, a very smart and entirely obedient person to serve as the perfect aide. The Star Trek dream emerged before computing technology was very far advanced, and well before the idea of constantly mobile wireless communication. We thought our AIs would be small and physical, easy for a single person to entirely own and unable to remember more details than a human; instead we got unfamiliar algorithms run on machines far away.
> The Star Trek dream emerged before computing technology was very far advanced, and well before the idea of constantly mobile wireless communication. We thought our AIs would be small and physical, easy for a single person to entirely own and unable to remember more details than a human; instead we got unfamiliar algorithms run on machines far away.
Which is kind of funny, because even if it might be accessed by personal mobile devices, the Star Trek "library computer" AI was never "small and physical, easy for a single person to entirely own and unable to remember more details than a human", it was an aspect of a large server (or networked cluster, the actual architecture is somewhat vague) that was part of a capital ship or base, had access in the server/cluster to a library of very nearly all generally available knowledge and extensive personal information about both its users and about people with little direct connection (and could reach out across a galactic network to access additional remote information sources to handle requests).
"Unfamiliar algorithms run on machines far away" is much more like the source of the "Star Trek dream" than "small and physical, easy for a single person to entirely own and unable to remember more details than a human".
I think Star Trek was very prescient here as well: Security is terrible in the Star Trek universe. Nearly every antagonist is able to break into the computer within about a minute.
True! I suppose I was thinking of earlier space-age fantasy more than TNG. It was rather odd how easily people could look up details on crewmembers aboard the Enterprise, but I suppose the situation's different on an organized ship.
> It was rather odd how easily people could look up details on crewmembers aboard the Enterprise, but I suppose the situation's different on an organized ship.
Not just crewmembers: TNG showed some of the broader implications (both useful and creepy) of the convenience-oriented panopticon, e.g., when Geordi used the Enterprise library computer to construct a simulation from data (including personality profiles) of Dr. Leah Brahms, who later meets him (and encounters the simulation.) (S306 "Booby Trap", S416 "Galaxy's Child")
That was a substantial plot hole where she had no idea someone was simulating her.
I suspect a huge part of the panopticon culture would be / is being informed that you're being peeped at. 99% of the time someone asking "Computer locate commander Riker" involved commander Riker knowing all about who's looking for him and why and having a substantial conversation with the requester.
I don't recall any plot along the lines of Deanna getting jealous and spamming the computer all night asking where Riker is and he better not be in that cute ensign's bedroom... Because it seems logical the computer would inform him each time and he would eventually tire of the interruption and nature would take its course, WRT his relationship with Deanna.
A better analogy would be technically I could walk up to the company president's office and stalk him, but culturally that is so not going to fly and I would have a lot of explaining to do. Merely using the computer instead of walking there in flesh isn't a major cultural shift.
However, every single "holodeck creeper" plot line involved the simulated attractive real world woman not knowing she's being simulated until the plot reached maximum spaghetti spilling cringe, which is one of the few Trek panopticon situations where people being spied on did NOT know they were being spied on, which seems very un-trek, although it made for some entertaining stories.
An alternative interpretation: I believe over the course of the series every attractive woman on the ship was simulated on the holodeck at least once by at least one lonely guy, and it's possible that culturally they just got used to it, although I find that unlikely. People do get conditioned to become used to the weirdest things, so it's not out of the realm of possibility. Possibly a what-happens-in-Vegas-stays-in-Vegas culture develops, and it's just the sexism of the TV show that they never showed the women turning the tables on their fellow male crewmen on the holodeck.
The fundamental issue comes down to one of trust and the real question is, do we trust Google to do the right ethical and moral things with our data that they are collecting en masse?
Siri recordings are uploaded to the cloud for transcription, and reviewed by third parties to improve transcription and the AI. It also can arbitrarily decide to query search engines to answer questions.
So, better, but nowhere near "on the device only".
> When you use Siri the things you say will be recorded and sent to Apple to process your requests. Your device will also send Apple other information, such as your name and nickname; the names, nicknames, and relationships (e.g., “my dad”) found in your contacts, song names in your collection, the names of your photo albums, and the names of apps installed on your device (collectively, your “User Data”)
> [...]
> By using Siri, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing, and use of this information, including your voice input and User Data
Until it started happening I always assumed it'd be powered by a central computer I had in a clean area of the attic or something. Not some DC somewhere.
"Knowing" in some sense everything about us is not the same as owning that information or trading in it. There are many other possible approaches to applying ML to our personal needs and data, so it is worth being careful about not conflating issues with a particular implementation and issues with the area as a whole.
> in order for it to do everything for us, it has to know everything about us, too
But does it? Does it have to know your birthday? (Let alone the fact that birthdays are somehow part of a superkey for your identity.)
Why should it know my residence, my spouse, or my CC# (with Apple's TouchID maybe it won't need to)?
Google's concept of AI is too creepy for me. It can be useful without being creepy. They're not even trying to make it less creepy.
Overlay on this the subtext that the NSA and other TLAs are monitoring all this (let alone other countries). While I may trust Google, I don't trust them not to be forced to collude with the government.
Not to sound glib, but the idea of persistent data acquisition and aggregation has been pretty well known to be on the path for anyone seriously researching AGI or other human-level AI systems.
I have to admit this is true -- I used to dream of a Star Trek-like computer where you can just speak to it, but I never imagined that such a system would be rife with privacy and security issues.
You could have realized that from watching Star Trek, as the computer in the Enterprise can always tell the captain where every crew member is, whether there are strangers on board, etc.
Bingo. The movie Her was awesome. But rewatch it. In every scene where the AI does something cool, think about what permissions it would need and what data about you it would have to access to accomplish the task. It gets scary pretty quickly.
So, at the risk of making myself ridiculous and branded a Luddite:
I've totally passed on the 'mobile revolution', I do have a cell phone but I use it to make calls and to be reachable.
This already leaks more data about me and my activities than I'm strictly speaking comfortable with.
So far this has not hindered me much, I know how to use a map, have a 'regular' navigation device for my car, read my email when I'm behind my computer and in general get through life just fine without having access 24x7 to email and the web. Maybe I spend a few more seconds planning my evening or a trip but on the whole I don't feel like I'm missing out on anything.
To have the 'snitch in my pocket' blab to google (or any other provider) about my every move feels like it just isn't worth it to me. Oh and my 'crappy dumb phone' gets 5 days of battery life to boot. I'll definitely miss it when it finally dies, I should probably stock up on a couple for the long term.
I'm not sure how much longer I'll continue with the mobile revolution. Pretty much everything I've seen so far that's being branded as AI and the future of mobile computing is just something that saves you from opening up an app. Instead of opening up Google Maps and searching for directions home, the directions now sometimes appear automatically. Instead of searching for a nearby restaurant, one is displayed for you. I don't need to enter my flights in my calendar anymore. This isn't nearly as drastic a change as the original innovations allowed by smartphones. I'm not sure it's worth the trade-off anymore.
I'm even more Luddite than you are. I don't have a phone at all (landline or otherwise). You wanna reach me, you email me. The people in my life that care about me have come to accept this. For other things, I read paper maps, plan appointments ahead of time[1], memorise routes, and look up stuff online from my laptop when I find a place to sit down and wifi.
Reading stories like this makes me want to carry a personal tracking device even less.
---
[1] People tend to have fewer emergency reasons to cancel when they can't reach you 5 minutes before the appointment.
People always say that. What about the real emergencies? There's a lot of anxiety and dread that carrying a phone seems to alleviate.
I live in a big city. There's always a phone nearby, including landlines and payphones (they're still there precisely for emergency reasons). There are also passersby who can help me. The risk of being all alone, having an urgent need to call 911, and being unable to do so is much too small to warrant carrying a phone around at all times.
Hate to be the bearer of bad news, but your "crappy dumb phone" is already telling someone about your every move with a degree of accuracy ranging from an area around the closest cell tower to within a few feet.
The courts are still deciding when/whether that information requires a warrant.
Yes, I'm well aware of that. That's how Dudayev got himself killed by a missile.
But short of anybody wanting to aim a missile at me I figure that I'm better off with the courts in my country where such information does require a warrant at present (and without any indication that this will change), and without the company controlling those assets trying to 'mine' my profile in order to advertise to me more efficiently.
Not really a Luddite, I'd think you're more of a Pragmatist when it comes to the Personal vs. Espoused benefits of certain devices or whatnot.
A close friend is a longtime professional software developer, always interested in mobile. We used to have extensive discussions about why I preferred carrying a small flip-top notepad and a pen vs. a phone or tablet or whatever with a stylus (many have come and gone over the years). In the use-case scenarios I put forward (small lists, secure disposal, privacy, 'battery life'), my little notebook frequently was the best approach for me. He disagreed, but that was the point of chatting about our views.
It is a question of how much new technology can add to your life, rather than how much old technology hinders you. Your old phone will function as before. It has the same benefits as before. The new stuff is not going to change that.
The big change is that the new stuff offers the ability to do things in a more efficient way. While it seems to offer very little benefit for individual tasks, some people will see a dramatic benefit while using it for the multitude of tasks that clutter their life. Other people will benefit simply because it enables them to do things that they would not have done before.
None of this is meant to dismiss your points. Personally, I find all of this data mining creepy even when I am confident that they are collecting the data for my benefit and that they won't use the data to my detriment when they are using it for their own benefit. Yet many people don't share that world view. Those people will benefit from Google's services, while nothing is being introduced to hinder the lives of those who don't use those services.
I feel like an old fool fighting against his time, but to me all those new appliances are scary not because of privacy (take my data, I couldn't care less), but because of how they shape our world.
Most of the coolest memories I have were the product of something spontaneous, or mistakes, that become close to impossible with a computer and internet in your pocket 24/7.
Assessing what's around you, talking to strangers, actively looking for something without it instantly popping in suggestions after you've typed 4 characters, all those things have been a great source of circumstance-based, little everyday life adventures.
This is the difference between risking buying a random book, or browsing reviews and picking a 5 star one to download.
This is the difference between discovering a place you'd never thought existed while waiting for someone and poking your nose around, instead of standing there, frantically watching their dot on the map get closer to you.
This is the difference between the mesmerizing feeling of playing the first expansions of World of Warcraft versus the tiring experience of the super-streamlined versions that followed. Yes, they are less frustrating, but they don't bring a tear to your eye when you think about them; they just feel averagely satisfying.
A few minutes ago I got up to open the door for my cat, and in a few minutes she'll be back and I'll be interrupted again. I feel like those interruptions are precious. They keep you connected to reality. I could install an RFID cat door, hell I could make a voice activated one in a couple weekends, and I would not be annoyed anymore. I would also never have seen all the things I witness every time I get to that damn door.
If the twilight zone taught me anything, it's that humanity will always have a rebel. If you make life so safe and easy that free will is no longer necessary someone will demand free will.
That reminds me of the TNG episode where two sides of a war long ago switched to simulating attacks. When there's a strike the casualties are required to report to termination "centers" to comply with the simulation.
For consumers this will be a choice between keeping their data private and having intelligent systems that perform better.
So far I haven't seen much, but based on my limited experience I believe customers are going to continue handing over their data to Google and Facebook in exchange for personalised services.
The truth is, the only times my smartphone has actually felt smart is when Google has been mining my information from various services (mainly Gmail and Calendar) and presented it to me at correct time, enhanced with other information they have gathered from web.
I don't think there will be any major backlash from consumers. The old comparison about the boiling frog applies here.
There are 50 million domestic workers in the world: living, breathing, naturally intelligent, autonomous human beings that people welcome into their homes. Not surprising, given that during almost the entirety of our evolutionary experience, everyone one knew knew everything about one. The notion that people would object to a company knowing some things about one in order to place slightly more relevant ads is silly. It just doesn't seem that way in a message-board echo chamber because there are rewards for self-righteous indignation, and not for making the commonplace observation that people willingly make tradeoffs without being victims of "false consciousness".
Almost everybody screens their cleaners and babysitters in some way. Either they are connected through well known friends and family, or for the rich they get checked out for criminality/dangerousness by professional services.
Except afaik, those workers aren't coordinated and reporting back to some central agency. If a housekeeper in New York happens to be snooping on their client, it doesn't mean one in Los Angeles is acting the same way. The damage is limited.
Sadly, I think you are right. I'm not sure what it would take to get more people to really care about privacy when they can have convenience instead. The Snowden leaks didn't do it in the US, even though it showed a government that was willing to break the rule of law to collect data on its citizens.
It's simple: regular people are going to get picked off by hackers, lawyers and other predators until, much like with the steadily rising smog around cities in the industrial revolution, we realize 'oh shit, this is an actual problem, because the wind doesn't always blow this shit away'.
Meanwhile actual geeks and hackers will be fine, because we'll have used our intuitions about these things to choose privacy conscious alternatives to mainstream technology.
In addition to which it is increasingly the case that 'privacy' is regarded as an elite thing, and thus will ultimately be sought after by less educated classes. Like how green lawns used to be for the rich to show off that they didn't need to grow crops to survive and now everybody has them and doesn't know why.
Remember Hillary Clinton and the emails. Remember Colin Powell and 'why can't I use my pda in this highly secure area'. These people are the dinosaurs, and in the business world if you're not hack-resistant you're going to go bust.
> I'm not sure what it would take to get more people to really care
Their interests get attacked or violated. That is what.
I have an open-ended question, mostly born out of ignorance: why is this a bad thing? Isn't an artificial assistant that not only knows and understands us but anticipates our needs incredibly useful? In the process, sure they'll collect your info for better advertising, but short of Totalitarian Surveillance or Data Breach Concerns (the former is a bit of a reach if you live in the west, and they can surveil you anyway if they really want to; the latter also seems somewhat unlikely), what's the issue here? Genuinely asking because I'm trying to understand.
Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you? Personally, my circle of trust is not that large.
Totalitarian Surveillance is here. In the west. Secure document releases aside, it's too easy to do to imagine a state actor not doing it.
Data breaches of differing severities occur every day, at nearly every company. I would have thought Yahoo was big enough and smart enough to avoid it; but no. Not Yahoo, not Sony, not security contractors, not credit bureaus, not Apple (à la the celebrity photo leaks), not Google (stories abound of individual GMail accounts being hacked).
>Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you? Personally, my circle of trust is not that large.
(Have worked at Google in the past, may in the future, am not currently.) You say this as though anyone at Google (or Microsoft or whatever) can go in and search for 'falcolas' and look through your GPS history.
I'm honestly not sure there is a single individual at the company who has that power. I honestly think the best thing Google could do is publicize their internal training and documents on personal information, because those regulations made me a lot more comfortable with giving Google, the amorphous entity, my data: no person is going to be looking at that data.
>, not Google (stories abound of individual GMail accounts being hacked).
One of these is not like the others, unless you're talking about something I'm not aware of. Hacking an individual GMail account requires guessing/taking someone's password, which is not an attack on Google's infrastructure (unlike the Yahoo, Sony, Apple, etc. examples); it's an attack on a bad password.
How about the government? Isn't this exactly the access that Snowden (a contractor) had? And there are/were countless tales of people using the system to track ex-girlfriends/celebrities. Now imagine that not only do they have phone/email access, but every action the person takes in their home and potentially every single thing they say in their home (the microphone is always on).
In what way is this not exactly the nightmare scenario in 1984? You can argue you don't need to install this, but 10 years ago you didn't "need" a cellphone either. The risk is the consolidation of information and the potential for misuse/control. And not so much potential, but the inevitability.
Even if Google is perfectly secure from bad-actors today, they might not be tomorrow. And if they themselves suddenly switch to being a bad-actor, they aren't going to throw all that data away and start from scratch first.
> [...] which is not an attack on Google's infrastructure
This strikes me as a matter of semantics; does it really matter if I'm targeted whether they hacked my account or hacked Google?
> I'm honestly not sure there is a single individual at the company who has that power.
Think harder. Who has the root access to the servers holding the data? Could the existing infrastructure and data segregation ever change? How many external checks and balances are in play that can't be manipulated by internal forces (i.e. is there anything stopping Google, or holding Google accountable if their data protection policies change)?
>This strikes me as a matter of semantics; does it really matter if I'm targeted whether they hacked my account or hacked Google?
I think it is incredibly important. If your information is put at risk due to bad practices by Google/Yahoo/Apple/Facebook/whomever, that's a problem to be taken up with the company. If you use insecure passwords and someone is able to access your information that way, then the problem is with your passwords, not with the platform.
>Think harder. Who has the root access to the servers holding the data?
As far as I'm aware, no one. Like I said, from my experience, accessing personal data and user information as an engineer required a lot of red tape and approval from 'the powers that be', and violating those rules would get you fired faster than anything else.
>Could the existing infrastructure and data segregation ever change? How many external checks and balances are in play that can't be manipulated by internal forces (i.e. is there anything stopping Google, or holding Google accountable if their data protection policies change)?
Here I agree with you, probably not (or very little). They obviously have public privacy policies, but you have no proof that they abide by those, and I don't know (and doubt that) they get audited or whatnot to make sure that those policies are followed. Which is why being an employee made me more comfortable. If nothing else, it meant I'd know ;)
I'm sorry but if you think that far ahead, then how do you do anything?
Do you go out in public? Because if you do, some company could be recording you on CCTV, and the company that makes the CCTV equipment could sell the business to Google, who could update it to use the CCTV footage in AI learning, which means that someone could eventually look up your face and see you were at a smut store 6 years ago.
At some point you need to draw the line, there is no perfect privacy.
You are, of course, correct. Especially in this day and age, perfect privacy is nearly impossible.
That said, you can limit your exposure. Adding all of these Google implements creates a far greater surface to lose privacy through than not using all of these Google implements.
People routinely underestimate how much can be gleaned about you from correlating such "incidental" data. Thus I feel it's important to remind them of what it can cost them.
Is the benefit worth the cost? To some, yes. To me, no. And that's why I posted this, an explanation of why I don't find this level of information gathering and correlation by a private and profit driven company acceptable.
> Who has the root access to the servers holding the data?
I'd be surprised if such a thing existed in any large ‘cloud’ system. A data center machine is a small and fungible unit of computation and/or storage, and there's no reason for anyone to be able to log in to one.
I agree with you. To help convince people, since we often imagine benevolent leadership, it helps to give an example such as: "Imagine if you were a Muslim or an illegal immigrant and Donald Trump were elected president. What could he do with your data?" E.g. find you, search your residence based on your purchasing and travel habits, and send you home.
E.g. Wakes up at 5:30 am, travels to a construction site, lives in a house with a large number of people -> signals possible immigrant. Or this:
Detecting Islamic Calendar Effects on U.S. Meat Consumption: Is the Muslim Population Larger than Widely Assumed?
We have to think about data not just in terms of our relative safety, but in terms of what could happen in adverse circumstances. And not even just in terms of our own government, but foreign governments.
Sure, there are some trust issues, but just regarding your two first points:
A very limited number of Google employees have access to private user data (only when it's vital to their work) and they have strict policies in place (data does not leave the data centers etc.).
Which third parties are you referring to? As far as I know, Google does not give their users' private data to a third party.
Lots of references to a user's private data - but what is private? Are my zipcode, gender, and birthdate private? Those three factors can be used to uniquely identify greater than 80% of the US population. Are the GPS locations I visit private? If so, why does information about them show up on lock screens?
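For a sense of scale, here's a rough back-of-envelope version of that uniqueness claim (the round numbers for population, ZIP codes, and lifespan are my assumptions, not census figures):

    # Why (zipcode, gender, birthdate) is nearly unique, roughly.
    # Assumed round numbers: ~325M people, ~40k ZIP codes, birthdates
    # spread over ~100 years.
    us_population = 325_000_000
    zip_codes = 40_000
    birthdates = 365 * 100   # day/month/year combinations in a century
    genders = 2

    people_per_zip = us_population / zip_codes   # ~8,125 people
    buckets_per_zip = birthdates * genders       # 73,000 combinations

    # There are ~9x more (gender, birthdate) buckets than people in a
    # typical ZIP, so most buckets hold at most one person.
    print(f"{people_per_zip:.0f} people vs {buckets_per_zip} buckets per ZIP")
    print(f"average bucket occupancy: {people_per_zip / buckets_per_zip:.2f}")

An average occupancy around 0.1 means a combination that occurs at all usually picks out exactly one person, which is how studies arrive at such high uniqueness figures.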
Third parties get my voice recordings for "improving the voice recognition service" - what if my name is mentioned in the background of one of those recordings? What if I'm not a savvy user and add private data to those recordings?
You're also talking about what's in place today. If I give Google my data, that data is probably going to stay with Google as long as they are a business (and potentially after, if Google were ever liquidated and their assets sold off). What measures are in place to protect me then?
Yes, if data can be used to potentially locate somebody, like a combination of zipcode, birthdate and first name, it is considered PII (Personally identifiable information) and those strict policies would apply.
I'm responding to a comment that said trusting Google == trusting ALL Google employees, which is not true. Trusting Google with your data is believing that having some convenience (a mail service like Gmail, an intelligent assistant, etc.) is worth the risks you are talking about: Google drastically changing their policy, or going bankrupt and being acquired by less scrupulous owners, etc.
Let's not just act like anybody at Google can look at your data and play with it, or a disgruntled employee will suddenly click a button and release all users' data on pastebin...
I think the strongest guarantee is that the sustainability of their business very much depends on that. There are billions of dollars of incentive to make sure not a single ex-employee is able to say "I managed to hack my way to user personal data".
That's not much of a guarantee. First, you're relying on everyone acting rationally. I hope they would, but humans often act irrationally, especially if grudges or money is involved.
More important is your assumption that the decision would even be made by Google. Outside forces such as governments may force Google's hand.
> able to say
It doesn't matter what is said. If Google had sufficient deniability (perhaps an NSL gag order? or a sufficiently high purchase price?), they could say user personal data is secure while sending it outside their control.
--
The only guarantee that would be believable is if they indemnified their users against any future damages derived from their data collection, and there is no way Google (or any company) would willingly accept that kind of liability.
> was talking about were about employees' (lack of) access to user data
Which we have to take their word on and hope that never changes in the future, even though Google might not be the party with the authority to make that decision. Even when they are, business plans change and a pile of potentially profitable user data is a very powerful temptation towards moral hazard. Only a fool would claim that this wasn't a risk.
> that's the case for any person and business.
Only if you deliberately ignore the entire point that the data shouldn't be stored at all by 3rd parties. A business that sold a real product (instead of a service masquerading as a product) would run locally and no data would be put at risk.
If a judge orders me personally to reveal something, they probably need a warrant and there is a process by which I can challenge that order. If, however, that data is stored on Google's servers then I don't have standing to challenge any interaction between Google and the government.
> Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you?
You forgot: every single state which Google is subject to.
Yup. If any of my data is stored on servers in, say, Canada, what is to stop the Canadian government from seizing Google's servers in an effort to stop my maple syrup smuggling ring?
There are three levels of discomfort some people feel with this situation:
1. Concern that a single, third-party entity (Google, in this case) might peer into every aspect of our lives, and/or reverse-engineer an exhaustive catalog of our entire lives, by virtue of data collection.
2. Concern that many consumers will unwittingly opt into such control, unaware of the privacy they're relinquishing, and unable to make informed decisions about the possible applications and consequences of the tradeoff.
3. Concern that the custodian of all this personal data (Google) might use, sell, transmit, or turn over the data in ways we had not anticipated or believed we'd consented to.
Personally speaking, I understand these concerns but also understand the potential upside. I'm not 100% sure where I stand just yet. The aforementioned bullet points are presented without editorial comment; just trying my best to articulate what I believe to be the crux of people's concerns here.
The way you phrased the question sounds to me like "what are the shortcomings, leaving aside these terrible shortcomings?"
Having said that: Google is not in the business of making your life easier, but in the business of selling you ads. The data that Google collects about you is incredibly powerful, allowing them to go from "simple" manipulation to sell you stuff you wouldn't otherwise buy, to full-scale blackmailing you if they see fit (not saying that this is happening, but if they wanted to, who would stop them?).
It's putting too much power in the hands of a single, amoral entity (like all corporations). That's not good.
Not an ignorant question, but a reasonable one. I think the skeptics believe that society overall does not yet understand the tradeoff. And I agree with them there. I think on a deeper level, people are creeped out that data paints such a deterministic picture of our lives. Sure, I leave home the same time every day, and I leave work about the same time every day. That Google knows where I live and work without me ever explicitly saying shouldn't be a surprise, but not everyone enjoys seeing how their daily lives are so easily circumscribed, even with just passively-collected data.
I don't want a company that employs tens of thousands of people, along with a government in a foreign country, along with all the governments on the data route in between, and their employees, civil servants and assorted snoopers of all shades, to have access to the artificial assistant's communications and thoughts relating to me.
All these organisations are made out of people. People with power are inherently untrustworthy; they need enforcement mechanisms to be kept in line, and enforcement mechanisms need to be activated every now and then to stay in working order. That is, occasional abuses are required to keep abuse in line. The thin blue line wavers like a pendulum: it's how we know it's working.
Part of what I fear is that Totalitarian Surveillance is only a "bit of a reach in the west" because we put such a high value on privacy (and personal liberty) that we're willing to defend it. When that goes away then the "reach" will be far easier.
Sure it would be useful. Sell the assistant as a locally-installed app that guarantees personal data never leaves the LAN, and it will sell.
> sure they'll collect your info
Only if you let them. Demand better behavior from their software and business practices.
> Totalitarian Surveillance or Data Breach Concerns
What you seem to be missing is that the concern isn't about today's level of surveillance or today's data breach risk. Data generally persists indefinitely once it makes its way into a database or logfile.
To claim that these are low risk requires believing that surveillance will never increase and data breaches never become more common; that the company won't run into financial trouble and need to sell your data; that a breach won't be forced by a government (not necessarily yours or Google's); that your data won't be aggregated into other databases, increasing the "predictive" power and attack surface; and that none of the other unknown ways your data could be used will ever come to pass.
Humans are already known to be terrible at assessing risk, especially when there is a very large separation between cause and effect. Smoking today giving you cancer many years later is a traditional example. We already know data breaches happen, well-meaning employees make mistakes or succumb to corruption, and external powers such as governments or organized crime occasionally take away your agency. Do you really want to claim that none of these risks will ever materialize? Because that's the actual wager you're making when you use Google's products.
For most of history, most humans lived under tyranny or domination if not outright slavery. It's only been the past few hundred years that this mostly stopped in some places.
Maybe we've turned a corner and will never go back to that. But I don't have confidence yet.
>Isn't an artificial assistant that not only knows and understands us but anticipates our needs incredibly useful?
No, not really. Restaurant recommendations and traffic reports are simply not that hard for me to find on Yelp or Waze myself. The "anticipation" here doesn't really help me in any material way.
Here's why I am afraid of Google. Google could have the best intentions, but the NSA, the wife Google occasionally sleeps with, doesn't. Everything you say to Google Home could possibly be recorded. Storage and computing power are cheap for Google. They can record everything you say in your home. Their algorithms can connect all sorts of information about you. If Trump wants to create the next Muslim holocaust, Google and FB have the perfect information.
This is what Elon means when he says AI is like inviting the devil. We have this algorithm in our mushy brains. It takes about 20 years to train and lives for about 80 years. Its communication bitrate is pretty low (mostly blabbering through the mouth) and it doesn't retain much information. Only patterns.
Now imagine this algorithm from the mushy brain run on a silicon chip, with a gigabit bitrate, retaining almost everything indefinitely, and able to learn from the entire history of humanity.
That algorithm would just need to deceive us until it was powerful enough to wipe us out in one sweep.
Google already manipulates humans psychologically to click on their ads en-masse. Giving them more of your personal data is just feeding the devil.
The story of how Target discovers and targets recently pregnant women is a good example [1].
An interesting quote: “we found out that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. She just assumes that everyone else on her block got the same mailer for diapers and cribs. As long as we don’t spook her, it works.”
There's no reason to believe Google isn't doing the same thing. And I strongly suggest reading the original article[2], if only for the first two or three paragraphs.
"AI" is incredibly overhyped. Most of the features and applications I've seen can be relegated into the "that's neat" category, before they are turned off and never used again.
Google recently started telling me how heavy the traffic is on my commute because they've figured out I do it every day, and when I'm doing it. That's nice, but I don't care. I could already get that information from my car's GPS by seeing how red the roads were.
I wonder how much infrastructure, fancy-pants machine learning and effort went into just creating those useless alerts?
Google, as a company, has already solved the problem they were created to solve: searching the Internet. Now they need to find something for all those twiddling thumbs to do, so we get braindead features that tell me what I already know.
> That's nice, but I don't care. I could already get that information from my car's GPS by seeing how red the roads were.
I guess people have different experiences. Personally, I know how to get home from work, so I don't feel the need to turn on my GPS every time I drive home. So I appreciate getting notified when there are notable variances in drive times, without me having to look for it every day.
How often is it correct? I use Waze infrequently (mostly it's turned off, for privacy) and it often gets things wrong (though it's better than Apple Maps or Google Maps).
I need to get home on time to pick up the kids, but mostly I just leave a bit early...
I agree with you both. "Turning on GPS" could just mean that while driving one has the view of the roads on to see which ones have traffic, not necessarily getting turn-by-turn directions to and from work every day.
Imagine you're something like a Muslim in the US, and someone like Trump is elected 5 years from now. You've been here all your life, you have a job, you pay your taxes, you're just a person who happens to be in the wrong place at the wrong time. Much like a Jewish person in Poland in the 1940s. Even back then, it was not easy to escape persecution... but it was possible. In a Google world, though, there's nothing preventing a corrupt government, or even a corrupt corporate governance taking over, from leveraging this data to your disadvantage. Perhaps your car recognizes you and locks you in until police come; perhaps you felt safe enough to go to a bar, and that data was forwarded.
Perhaps an exaggeration, but the point is: even if you trust Google today, there's no guarantee that data will always be held by the people who are Google today. We know for a fact the NSA had access to all Google data up until at least the Snowden leaks. To me that's the concern about privacy: you have no idea how it can be used AGAINST you in the future.
Every time someone responds with "I have nothing to hide" I reply along the same lines as you. But I thought about it a bit more, and I think the data they have on you is only a small problem when you look at how technology has progressed since WWII. Imagine how much easier it would have been today for the SS to extract Jews from their hiding places. We have satellites, helicopters with IR cameras, hell, even radars that see through walls.
Bad actors exist. Hence why with every new technology comes new responsibility to use it properly - from nuclear fission to robotics to neural networks. That doesn't mean we need to reference how WW2 Nazis would use it every time a new technology is developed.
"What would the Nazis do with this?" is not wholly inappropriate. It's basically a way of looking at the worst case failure mode, like asking "what would happen to this nuclear reactor if all cooling and backup systems fail at once?" or "what happens to people in the capsule if the rocket explodes during max-Q?"
In this case it's "what's the danger of all this lazily deployed insecure ubiquitous surveillance gear in a political worst case scenario like a descent into totalitarianism, mafia statism, etc.?" That's not an unlikely thing. Complex societies undergo bouts of collective insanity or descents into pervasive corruption with disturbing regularity on historical time scales.
Personally I think the USA is one 9/11 scale (or worse) terrorist attack or one seriously painful economic crash away from an American Putin or Chavez (or worse). Which we get depends on which side manages to field the most charismatic demagogue. If that happens all this total surveillance stuff will be mobilized against dissenters on an industrial scale and with a significant amount of public support.
You limit things like surveillance to limit moral hazard. Future generations are likely to look back on the wanton deployment of all this stuff and say "what were they thinking!??!?"
>You limit things like surveillance to limit moral hazard.
Not quite sure how you got that out of "with every new technology comes new responsibility". That's neither singling out surveillance nor limiting to moral hazard.
From the article-
"In other words, your daily business is Google’s business."
From Google-
"Google's mission is to organize the world's information and make it universally accessible and useful."
One thing that drives me mad about Google is how they say "the world's information", then ignore 99.9% of the world's information, and then expect their consumers to give them a pass and not call them to account for how they privatize user information.
Looking at the information that Google organizes and makes accessible and useful I don't see things like "species extinction", "oceanic water temperature history", or say "dolphin linguistic data", equally represented when compared to "my browsing history", "my location history", "my search history", "an archive of my voice searches", "when I leave or return home via Nest", "who I associate with via Google's communication suite". Google is organizing exactly that data which Google can monetize, which is not the world's data. Not a lot of people want to buy data on deforestation so it's much more difficult to get Google to put resources into that. How many people chew pieces of gum until 100% of the flavor is gone? I'll never know, and Google isn't going to help me, because it isn't a profitable data set.
Simply stated, Google needs to stop acting benevolent and start fessing up to attempting to be omniscient about its users, not about "the world's data".
First off let me put this out there, people that send me "lmgtfy" links annoy the f#ck-sh#t out of me. I'm proficient at utilizing search engines, thanks pal.
More directly to the point, I was (clearly) comparing the relative resources Google invests in some data sets vs others. Are you arguing that Google invests comparable resources in this type of data compared to the resources it invests in understanding Google's users' data sets?
Google Scholar, Google Books, and Google Earth data for researchers existed before the Google Assistant was even an idea.
Not sure what your point is, either: do you want to get a notification in the morning saying "try to leave early today, as an accident has caused increased traffic," along with another one saying "remember to save to buy an electric car"?
Not talking directly to the awesomeness of each of their individual products, of which, I would be the first to admit, they have many. And not speaking directly to their Assistant product. Before Google Assistant there indeed were those other products, though I honestly don't get why you chose these ones; they aren't exactly great counterarguments. My central point was: Google puts its resources where there is the best return. I did not say Google never created anything which makes non-monetized information more freely available. With this being Google's central philosophy and guiding corporate light, it's a hard sell to continue to paint it as a benefactor to all of humanity.
Gmail was initially a product started by one guy at Google, and was not a project born out of Google corporate philosophy or business strategy. --> https://en.wikipedia.org/wiki/History_of_Gmail
Re: Google Earth... This one is fully being leveraged for monetization, especially with mobile's commercial possibilities finally being realized. From Wikipedia: "Google Earth is a virtual globe, map and geographical information program that was originally called EarthViewer 3D created by Keyhole, Inc, a Central Intelligence Agency (CIA) funded company acquired by Google in 2004 (see In-Q-Tel)."
Again, what's your point? That it's bad that Google tries to make a business out of organizing the world's information? It's the only way to keep doing it. You say:
> One thing that drives me mad about Google is how they say "the world's information", then ignore 99.9% of the world's information
One wonders what's that 99.9% that you miss. You mention:
> I don't see things like "species extinction", "oceanic water temperature history", or say "dolphin linguistic data", equally represented
What equal representation do you want? A notification when you arrive at home telling you "this is some new discovery on dolphin linguistics"? For what it's worth, even that I'd bet you can get, by letting Google Now know of your interest in the topic, or subscribing to a science news channel in YouTube.
> How many people chew pieces of gum until 100% of the flavor is gone? I'll never know, and Google isn't going to help me, because it isn't a profitable data set.
Is it even known? Google's certainly not going to do the research; research isn't organizing. Would such an investigation even get funding from anyone, to pay the researcher? But supposing it's done, and it's published in some paper or some book, what's your best chance at finding it? Google Search, Scholar, or Books.
> I was (clearly) comparing the relative resources Google invests in some data sets vs others. Are you arguing that Google invests comparable resources in this type of data compared to the resources it invests in understanding Google's users' data sets?
> Before Google Assistant there indeed were those other products, though I honestly don't get why you chose these ones; they aren't exactly great counterarguments.
Because before Google was investing a dollar in any of:
> "my browsing history", "my location history", "my search history", "an archive of my voice searches", "when I leave or return home via Nest", "who I associate with via Google's communication suite"
it was already investing plenty of resources in those products I mentioned.
I'm confused as to why, in this thread, there's very little contrastive commentary on the different stances taken by Apple and Google on privacy.
Apple has made preserving user privacy a paramount goal, investing in research and technology to achieve it with minimal loss (however much it is) of (intelligent) functionality.
I find that a very strong point for the Cupertino based company.
Well, the core of the different stance is just marketing.
Both Apple and Google comply with federal warrants, etc... That's obvious.
Neither of them have any intention of ever letting a 3rd party access user data, that should hopefully also be obvious. As in, neither company sells your information. It'd be a PR disaster, and in the case of Google it would be a massive loss of revenue as it would undermine their entire ad business.
So the only difference is what data they actually have access to, and it's not actually that different. The big difference is that iMessage has end-to-end encryption by default so long as you're talking to another iMessage user. That's sorta it, though, and it gets largely neutered by the fact that the messages are then immediately backed up to iCloud anyway, and that end-to-end encryption is lost in the process (otherwise you couldn't restore to a new device). Google now offers that too, via Allo's off-the-record mode. Everything else is pretty much the same between Apple & Google with regards to meaningful privacy.
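To make the "lost in the process" point concrete, here's a toy sketch of the difference (mine, not either company's actual protocol; it assumes Python's `cryptography` package is installed):

    # Toy model: end-to-end encryption vs provider-readable backup.
    from cryptography.fernet import Fernet

    message = b"see you at 8"

    # End-to-end: only the chatting devices hold this key, so the
    # provider relays ciphertext it cannot decrypt.
    device_key = Fernet.generate_key()
    in_transit = Fernet(device_key).encrypt(message)
    assert Fernet(device_key).decrypt(in_transit) == message

    # Cloud backup: to restore history onto a brand-new device, the
    # backup must be readable under a key the provider holds...
    provider_key = Fernet.generate_key()
    backup = Fernet(provider_key).encrypt(message)

    # ...which means the provider (or anyone who compels it) can read it.
    assert Fernet(provider_key).decrypt(backup) == message

The design tension is exactly that parenthetical: a key that never leaves your devices protects the message but dies with them, while a provider-held key enables restore at the cost of making the provider a readable middleman.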
Apple wants (or wanted?) control for themselves. DRM everything; control the products after sale; wall the garden. They became the thing they hated back in 1984.
After Snowden made Apple's collaboration with the government economically untenable, Apple may now be willing to let users have control. They fought the FBI to protect privacy in a transparently political charade. They've built hardware and software key protection into the iPhone.
But this is a change, and maybe it's a lie, or maybe it will change back, or maybe they just won't succeed. I'm not rushing out to buy a MacBook.
I am sorry, but you still believe Apple is safe? I mean, I'd love to believe that there is still some part of tech left that will not be the reason for reprisals.
Just one data point: after owning 4 Nexus phones and being an 'Android fanboy', I just bought an iPhone 7. I've decided that, while still not that great, Apple's take on user privacy is much more closely aligned with my own. I doubt many other people will make purchasing decisions based on that, but I still think Apple will be 'right' in the long run. I think the resulting applications from all this data collection will never make it past the gimmicky stage for a lot of people. The general public may not care that much about privacy, but they also don't care that much about nearly everything Google displayed. I think Apple knows this, and it's one of the reasons they can take their stance on privacy.
The idea behind so called checks and balances in the political arena is now needed in the tech arena - more specifically the megacorp arena.
People say, competition will ultimately take care of it. Yet, there really isn't a serious competitor for Google's search engine. And don't even get me started about social networking with respect to your private lives, where the only player is FB as far as I can see.
People say they don't want the government involved, and often for good reason. But if there is no expectation that these tech giants will self-police when it comes to privacy, and people don't want these organizations to be policed by the government either, then how exactly does this play out? How far is too far before we start demanding more respect for our rights from these organizations?
Another thing to think about: when dealing with tangible goods, the creative destruction of capitalism is somewhat reasonable to justify because it is usually easy to see. How does it work with information? Suppose FB just completely blew it for a few quarters in a row, and starts tottering towards its demise, what happens to the "defensible barrier" called data? Does it belong to FB to do as it sees fit, like the assets of a company about to be liquidated? Or is FB going to "return" it to the people from whom it got it? If some other company now got possession of its assets, including data, what is the expectation around what are reasonable uses for such info? Or, is FB, with its trove of data about every single person who has held government office, now just too big to fail?
And all this can be asked just of the data that FB collects from you directly by asking you to fill it in. What about the stuff that it "infers" behind the scenes? What about the "connections" it adds to its social graph without your permission in order to provide a "local marketplace" which apparently gets rid of the "private information" challenge? [1] Not that Google is any better in this regard, of course.
I think the time has come for some serious thinking about checks and balances in the privacy arena.
This is literally the perfect end-game for an advertising company: total awareness of need under the guise of 'optimization' or 'AI enhancement'. They can see what you're searching for, where you're going, when you run out of mayo in your GoogleFridgeAppAssistant. What better way to offer ads than EVERY time you have a want? It's an advertising utopia!
Is the market really so bad that Google needs to invade people's privacy to this extent in order to grow?
I bet Google's CEO will not use the products himself. Google is almost behaving like a pusher, promising people comfort at the expense of their livelihood (the chilling effect).
Perhaps this should simply be illegal. If people want a personalized AI assistant, why not train the AI on the user's device? I seriously doubt that it has to know everything about everybody's behavior in order to know some things about the user's behavior.
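For what it's worth, the "train it on the user's device" idea is easy to sketch. Here's a minimal, purely illustrative toy (all names are hypothetical, not any real product's API): a preference model that lives in a local file and is updated and queried without a single network call.

    # Toy local-only "assistant": preferences never leave the machine.
    import json
    from collections import Counter
    from pathlib import Path

    MODEL_PATH = Path.home() / ".assistant_prefs.json"  # local file only

    def load_prefs() -> Counter:
        if MODEL_PATH.exists():
            return Counter(json.loads(MODEL_PATH.read_text()))
        return Counter()

    def observe(topic: str) -> None:
        """Update the local model when the user engages with a topic."""
        prefs = load_prefs()
        prefs[topic] += 1
        MODEL_PATH.write_text(json.dumps(prefs))  # no upload anywhere

    def recommend(n: int = 3) -> list[str]:
        """Suggest topics using only locally stored history."""
        return [topic for topic, _ in load_prefs().most_common(n)]

A real assistant needs far heavier models than a counter, which is the usual counter-argument, but the architectural point stands: personalization and centralized data collection are separable.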
You really have to wonder who truly needs or wants AI in their lives? It's really just being pushed on us. Google should be careful not to make themselves irrelevant.
I've been experimenting with spending less time with my devices, and it's hard because I'm addicted, but life is more fun when it's being lived without having to even think about technology; leaving devices of all kinds at home and just sitting in a park is a real luxury.
You just answered your own question. A truly awesome AI assistant would be as minimally present as you prefer. If all you do is talk to AI you don't need to interact with a tablet screen, for instance. If you figure out how to make people productive with that then you may have an idea of the next big thing.
Yes, but the way it seems to have panned out, we need to constantly feed the machine?
Thing is, I just don't need AI for everyday things. AI to help solve big engineering, medical problems, great, but to help me schedule my life, not really.
Even when travelling, things like Google Maps and Translate just isolate and distract me in some ways. Asking a local about something is really helpful, and you can get more out of the interaction than just directions. It's rare that language truly is a barrier, I find.
I'm even questioning how much I really need a smartphone; it is mostly just a distraction. I remember actually being socially pressured into owning one when I was 18. I never actually stopped and asked myself if I needed one; it's just something that has become a "must have".
I know what you're saying about AI being out of the way, though, and it would be excellent if it were completely my data and it worked for me and not for a third party. For example, keeping my data private for me while paying my bills, that might be nice. But basically it would work to keep me focused on the real world, which could be achieved without tech?
I might sound a bit anti-progress here, but it would be nice to just see the right progress.
Well, for a single data point, myself, most of my family, and most of my friends want AI in their lives.
Hell, that's the whole purpose behind devices like Google Home and Amazon's Echo. People aren't buying them because they look nice; they are buying them because they are a simple voice-operated AI that can answer questions and do things for them.
You might not want or like it, and that's okay, but don't act like nobody is asking for this. People have been begging for it since computers were a thing.
>What do you do with it? Like what is a typical use case / interaction? Do you ask it to put on a movie or tell you the time?
Control the lights in the house, make sure the garage door and front door are closed and locked at night, adjust the temperature, play some music (specific band, song, or genre) on one of the TVs or on my computer, read me things like the weather, calendar entries, and emails, ask it how long it'll take me to get to work right now, ask it about conversions or math-ey things I need done, have it make notes (specifically notes that can alert me when I get to work, or the next time I'm at a supermarket, etc...), set alarms, create calendar entries, send emails/instant messages (almost always short and sweet, but still useful). Have it look up "knowledge graph" kinds of things like what time a store closes, when a music album goes on sale, what kind of reviews [movie] got, when [movie] comes out.
And that's just the stuff I've used it for in the last month or so.
On the phone, it can do even more:
Have it navigate me places, and ask it how long until the next turn while on a motorcycle (through a Bluetooth setup in the helmet). It will alert me when I need to leave for work using my usual route; it can infer where I am going when I'm not navigating there, and can alert me about traffic incidents on the route and suggest an alternate route (this one is fucking cool when it happens). It gives me severe weather alerts for my location, notifies me of things like price drops or new releases of things I'm interested in, and shows me almost an "RSS feed" kind of thing of news articles that I'm probably interested in (this one is hit or miss, but I'd say every time I look at least one of them is something I wanted to know about). Just today it gave me a notification that I told it to remember. What I said was "remind me to call my doctor tomorrow afternoon", and about an hour ago it put a notification on my phone saying "call my doctor" with a "call" button on it. When clicking the call button, it started dialing my doctor's office. That's what I want from an AI, and it's working great so far.
>I totally disagree people wanted all their personal information fed to private corporations in the "cloud" BTW. Completely disagree. It's Orwellian.
Well that's a strawman... It's like saying that "people wouldn't want to hand over hundreds of thousands of dollars to get some wood and cement" when talking about buying a house.
People aren't dumb, they know these devices aren't magic. They know that if you ask what the weather is, obviously the device needs to know your location. If you ask it to play some music you like, it obviously needs to know your preferences. If you ask it how long it'll take to drive to work, it needs to know where you work. In most cases people don't want to spend hours on hours setting up every little setting to tell the system all this information; they just want it to work, so it just works. It infers information, it remembers preferences, it figures out connections that you didn't even know were there. And in return you get a wonderful device that can help you in your life. If you don't want that, it's fine. You can not use it, you can have Google delete all information associated with you, and you can disable all tracking and gathering.
But let's not pretend that people don't want the outcome that handing over information can provide. They want the AI, and in order to get that, the AI needs to know them. These devices are being sold as being able to learn about you the fastest and use it the most; it's not like they are being shady here.
In my experience, a lot of the aging population feels that it's desirable to have such devices, that these could help them remember tasks or do them automatically (e.g. closing the garage door). There are many positives if it's implemented with ethical considerations.
> People aren't dumb, they know these devices aren't magic.
Yes, they know that they aren't magic, but for the most part they don't know how they work - so they're essentially black boxes with deceiving trade-offs, and almost no one reads the TOS.
I haven't really noticed any one demographic using it more than others; most people tend to "get it" pretty quickly. Unless you are perfect, you need somewhere to jot down notes, or reminders to do things. Hell, there are times when my wife and I will be lying in bed and she will ask "did I forget to close the garage door?", and I can just ask the Echo to close the garage door and lock the front door just to make sure.
>they're essentially black boxes with deceiving trade-offs, and almost no one reads the TOS.
See, people keep telling me that they are "deceiving", but I just fail to see how. Nobody is saying that these work without your personal information. Nobody is saying that they aren't using your history to "teach" the service. Nobody is hiding the fact that they learn your preferences over time to get better. Why do you think they are deceiving?
To me it's quite the opposite (as this article shows). People are asking for more learning, more automation, more "AI", and the companies are putting out headlines like "Our AI can learn about you and your wants and needs FASTER than our competitors can!"
You're giving up a lot of personal information, and potentially freedom of choice, to have "AI" (not sure that's the right word for it anyway) do a bunch of these tasks for what you perceive as convenience.
For example, having content fed to you is potentially unhealthy. "Google, read me today's news": are you telling me you just want to be fed any kind of information based on some kind of "preferences"?
As you said it's a choice but don't pretend people totally know what's being done with the data.
I hope there are age restrictions placed on this kind of thing.
> You're giving up a lot of personal information, and potentially freedom of choice, to have "AI" (not sure that's the right word for it anyway) do a bunch of these tasks for what you perceive as convenience.
Come on now, you can't handwave away what to me are very real benefits as "perceived convenience", and just because it seems like a lot of personal information to you doesn't mean it is for me.
Yes, I'm letting them see a lot of personal information, but that's not a bad thing. I get tangible benefits from it (not just this "AI", but many, many other services), and I'm actually asking for more. Right now it only learns my music preferences from when I play stuff with Google Music; I'd love to feed my SoundCloud history into it to give me a more well-rounded set of preferences. I'd also like to feed my Netflix watch history into it to let them give me better TV/movie recommendations. This isn't a "mistake" by me; this is a conscious decision I am making to improve my life by giving them more information, just like how it's not a "mistake" that someone pays money for a service they want/need (even if you personally don't want or need that service).
Also, I'm not so sure that "AI" is the right thing to call it either, but it's the term that was chosen, so it's what I'll call it. I read somewhere once that "AI" stops being "AI" when we understand it, and just starts being "programming" at that point, and it makes a lot of sense. As we get better at making programs that feel "natural", it's less "magical AI that can do anything" and more "well understood programming techniques".
>For example, having content fed to you is potentially unhealthy. "Google, read me today's news": are you telling me you just want to be fed any kind of information based on some kind of "preferences"?
Come on now, are we going to have an actual discussion here, or are you just going to build up strawmen to kick down? First off, it's not my only source of news. I'm not having them "feed" me anything. Second, it's more that it reads headlines that I might be interested in. For example, this morning it showed me 5 headlines:
* A new XKCD comic is out
* "The Macbook pro 2016 October release date confirmed" from the University Herald
* A story from TechCrunch saying that the Boeing CEO says he's gonna beat SpaceX at something
* An article from PCWorld titled "Happy 25th once again to Linux"
* And (funnily enough) this story from TechCrunch titled "Not OK, Google"
I'm not being "brainwashed" here, i'm not letting google determine what i'm interested in, i'm not taking anything there blindly at face value, it's just a list of headlines that I can either click to view the article, or lookup at my own will (or in many cases lookup on HN or Reddit for some discussion about it). Every one of those i'm interested in in some fashion. I personally find it funny that you think it's unhealthy to have an "AI" "feed" you information, while most traditional news networks are much more of a "feed", but they don't tailor to any kind of personal preferences (what fox news decides to air, is what fox news watchers are going to watch). That to me is much more dangerous! At least in this case I can tell the system that I don't like this story (because it's blogspam, or it's incorrect, or it's just done in bad taste), and to not show me stories like this again.
>As you said it's a choice but don't pretend people totally know what's being done with the data.
No, and I don't pretend to know what is being done with the data; that's the point. I give them that data, and they do what they want with it, and in return I get all of the benefits I get. There's nothing stopping them from selling it, there's nothing stopping them from releasing it to the public, there's nothing stopping them from looking through it personally to find "bad" things. But I have "faith" (if you can call it that) that they won't. Because if they do, I'm done with them. And a lot more people would be as well.
>I hope there are age restrictions placed on this kind of thing.
There are; as with most things online, it's "under 13 needs adult supervision". Funnily enough, I've read that toddlers LOVE these things. It's much easier for a child to tell the TV to play Thomas the Tank Engine than it is for them to fumble around with a remote or get access to a phone. It's actually becoming a really good way to let little kids be involved with computers and technology at a younger age, which I believe will be a major benefit in their lives (the jury is still out on that, though).
I guess, reading through these comments, that I'm the only one who wants this future? Yes, sign me up, Google. I'll give you more of my information, if you can take it. Can I wear an implant that measures my heart rate, body fat content, blood pressure, glucose level, and brain activity as well? Because as soon as I can, I will be the first in line for it.
What some of you don't seem to realize, (and this happens in EVERY SINGLE ONE of these threads) is that:
1) AI is not magic. Yes, we call it "AI", but you use words like "know" as if there is a conscious entity that "knows" something about you. The AI doesn't "know" anything. It's a computer.
2) Yes, actually, you can opt out if you want to. Get a flip phone, don't use Google services, use an adblocker, block JavaScript you don't like, don't send emails to gmail addresses, etc. Just don't use their services if you don't want them. Yeah, this might be harder. It might feel like you are living in the 1990s/1980s, but it sounds like that is what some of you want.
I, however, want a future where an AI can tell me things like "Flights to Shenzhen are really cheap right now, and you have the discretionary income to afford a trip there. Here is a possible itinerary for you based on the types of things I know you are interested in. You could leave this Saturday and there is nothing on your calendar that you need to be at for the week."
Or
"I noticed that you have been bicycling a lot lately, and based on the patterns of where you go, I think that the following bike trail would be interesting to you. The route is loaded up on your phone already."
The other thing: google is an advertising company. Yes, because I know this, I am able to take this into account when listening to google's suggestions. But here's the thing: I like being [well] advertised to. I have discretionary income, that is WHY I HAVE A JOB. I am going to spend that money on things. If there is an AI that is helping me find the perfect nexus of things I want and things that I can afford, that is a GOOD thing. That is helping me more efficiently spend the money that I got.
Yes this stuff is subtle. Yes this stuff is pervasive. No we don't need yet another "2edgyforme" "if you aren't the customer you're the PRODUCT" articles about google.
I think there are two problems with this suite of crap from Google: the privacy issues and the fact that Google is putting corporate objectives ahead of creating useful things.
It's clear Google wants to "own the home" and all their products were built to further this goal (rather than to be useful in themselves). This is why Google bought Nest for 12 jillion dollars. And it's why the Apple Watch failed and Google Glass failed - right now, these are niche products that barely have a purpose.
Now this stuff may become integral to our lives, as depicted in so many sci-fi stories, but if they become embedded in our lives and are wholly owned by one huge company, that should be terrifying to everyone.
Here are some real-world reasons why: a virus is installed on your Google box through your wifi - now house robbers know everything about your schedule and habits. Your parent goes through your every personal action to make sure you aren't getting in trouble. A spouse uses the system to track your every movement and make sure you aren't cheating. And of course, the gov't has access to all of this data by default. Imagine being a famous celebrity with every action in your house known and accessible to any gov't peon with access and a bit of curiosity. This isn't some conspiracy theory; this is exactly the access Snowden had (and he was a contractor).
It isn't what these products are, it's the direction they represent: complete surveillance of every personal action, stored and owned by one monolithic corporation and the government. And not only is this is sort of where we are heading, it's Google's clearly stated objective.
It reminds me of the 50s when plastics were going to revolutionize everything... which they did, but we melted off the ozone layer before realizing the consequences of slapping new technology across the world. Especially when the benefits are so minimal and the threats are so real - imagine McCarthy with the type of access and control these devices would provide if Google succeeds in pushing this across 80% of homes.
When IR remotes hit college campuses, the game was to shut off someone else's TV through an open door. There's even a one button remote from that era that shuts off any TV in sight [0]. Voice control is like IR on steroids.
Guest at house party: "Ok google, show naked pictures of [host's ex-girlfriend]"
It happens. I've been rickrolled by an acquaintance yelling "Alexa, play Never Gonna Give You Up by Rick Astley" through a screen window. It's gonna be a widespread cultural thing once Hollywood inevitably uses it in a movie or TV show as a joke. I don't watch that stuff; maybe that's where he got the idea it would be "funny" to team up with Alexa to rickroll me.
I find privacy anxiety to be much like electric-car range anxiety. Once you have the product it's not an issue, it drops to zero, but debate on the internet is extremely hot and heavy right before widespread adoption kicks off. Enormous amounts of toxic anxiety and paranoia bleed out all over stuff that, in practice, after deployment, just doesn't matter. In other online venues I've been worried about causing heart attacks by suggesting my next car will be electric, and this topic is about the same here.
Does anyone else find the tone of this article off-putting? I mean, I agree with the author, but the presentation feels like fear-mongering. Maybe this is what we need to get people to pay attention to the details, but I instinctively mistrust things I perceive as trying to appeal to fear at a base level, and this triggers that fairly heavily.
I have very conflicted feelings about this article.
It reads very heavy on the tinfoil-hattery. Particularly with things like Google Home's mute button. The single advertised feature of the device is that it's always listening and the author is somehow shocked or confused by the fact that you have to push a button to have it not listen? So much so that they thought it was worth cramming in some stupid sounding snide remark about "true intentions"? That tone immediately undermines the entire rest of the piece.
Many readers are skeptical about the usefulness of personal AI assistants. This reminds me of what Jeff Bezos said about disruptive technologies [1], which I think resonates well among many tech company executives. You (they) need to be willing to be doubted for a very long time.
Any time you do something big, that’s disruptive — Kindle, AWS — there will be critics. And there will be at least two kinds of critics. There will be well-meaning critics who genuinely misunderstand what you are doing or genuinely have a different opinion. And there will be the self-interested critics that have a vested interest in not liking what you are doing and they will have reason to misunderstand. And you have to be willing to ignore both types of critics. You listen to them, because you want to see, always testing, is it possible they are right?
The reaction to this seems awkwardly negative when contrasted with the praise that gushed for Amazon's Alexa products. I am having a hard time understanding why folks seem to feel so differently about Google and Amazon having similar access to personal information.
People have a very different perception of their relationship with Amazon (and, similarly, Apple) and their relationship with Google.
Amazon sells you stuff in exchange for money.* This is a type of transaction that people clearly understand.
Google gives you stuff in exchange for being able to "sell your eyeballs" to third parties. And, most people actually believe that Google sells your data to those third parties, even though that's not the case, which gets at the fact that this model is not as well-understood.
As far as hardware design:
Echo provides a clear and prominently-placed button (right on top), with an LED light indicating when it's muting the microphone; when the button is active, the entire indicator ring also lights up. This button has equal prominence with the button that can be used to manually activate Alexa. The existence of this button was highlighted when Amazon introduced Echo.
Google Home places the button on the back side of the device, when there is clearly a "front," as defined by the tilt of the top surface. The existence of this button was not highlighted in the introduction at I/O.
* Yes, Amazon also runs an advertising network. Most people don't really know this. And it's a very, very small part of their business.
Amazon sells you things. Google sells your information indirectly to other people who want to sell you things. There's a lot more cultural acceptance around the first business model because more people interact directly with those who do it.
I'm not worried about what the engineers who built this will know about me. I'm not worried that Google will centralize this data such that someone who hacked it could drill through it and find me.
I'm worried about who Google wants to sell this information to and what they want to do with it. I'm worried about Google working with intelligence agencies to try and target me politically, feed me propaganda, or put me on some list of undesirables.
We can have an ultra-smart AI that does everything for me without worrying about these things. I don't want to pay with my personal information, I want to pay with money. I want Google to stay out of my life.
I am more worried about subpoenas on steroids than individual hackers. Big-budget crackers are a different thing.
> I'm worried about Google working with intelligence agencies to try and target me politically, feed me propaganda, or put me on some list of undesirables.
If 100 people do A after doing B and you are the 101st, who does not want to do A, is it your fault or the algorithm's? This is no "artificial intelligence"; it is a fitness function, a mutation function, and an evolution function, without any true randomness or "chaos" prediction (predicting the future...), but with advertising hidden between your recommendations. It's like they can't build flying cars, so they reinvent the hovercraft. All of it aimed at the Internet of Things, which will be the biggest tech bubble known to mankind. Sorry, I have to vent that somewhere, but all my communication platforms have already been shut down.
I am not trying to inject my opinion. I am looking for genuine discussion. The idea of privacy has always been kind of vague to me. Where do we draw the line of acceptable information sharing and not? If a company is collecting data on our behavior and changing their practices accordingly to maximize their profits, is that immoral? Isn't that essentially A/B testing? The practice of using that data in a potentially manipulative way might be immoral, but is the best way to prevent that really to completely not share anything?
There is no line saying how much is ok. "Privacy" as a concept is the embodiment of personal choice. And everyone may choose differently, in different situations. The only "line" there is, is about whether that choice is voluntary and informed.
It won't be used for A/B testing only. Governments tend to get data and oppress people based on it; where it comes from rarely matters at that level. You may live in a country that doesn't do this right now, but historically, and currently in other parts of the world, it's a very ongoing theme.
So you cannot say "world hunger doesn't exist because I just had a large sandwich"; the issue is more global.
We are just not that evolved yet, and if nearly 100% of people aren't where you expect them to be, then you can't assume anything else.
Television was ok, too. I used to watch. All the time. TV was the glue that kept us together. Now it's the acid that tears us apart. I no longer use a television.
Google. I love your maps. Your directions. Your free storage. And earning a living never requires me to use you, Google. Just like my TV.
I do expect Google to become something that I no longer desire. Just like the TV. And I think Google won't be able to control or predict it either. Just like TV.
The prerequisite to a personal AI worth a damn is data about you worth a damn.
It'd seem Windows 10 is setting Microsoft up for this. Google is following suit with its own hardware.
The central task is to infer what you want and help you achieve it; beyond that, your AI can ask you questions too, to work out all sorts of things subtly.
I think eventually we'll think of personal information as a commodity or "raw material" and regulate its extraction and trade as such.
My problem isn't with the AI having my data; it's with the company that created the AI having my data. That might be necessary to make an AI assistant practical using current technology, but it's not an inherent requirement.
Arguably, at scale, it is cheaper for every separate device to process its own voice-to-text and automate things.
But doing it offline holds no incentive for advertising companies.
Universities, home-automation companies, etc. would probably be more incentivized to develop that.
So any advances in AI, if they come in "always online" form, will come with strings attached. It is not your AI optimization software; it's somebody else's.
Where is the border between inferring what I want and deciding (for me) what I want? How about an artificially created tilt toward certain consumer or political brands in the process of inferring what I want?
> I think eventually we'll think of personal information as a commodity or "raw material" and regulate its extraction and trade as such.
I don't think there is any eventuality here. We must fight hard now to change our society. We live in an era that determines what the future will look like. Politics is path-dependent, and wrong choices now can have consequences that last centuries.
Economic information asymmetry [1] benefits the few large corporations that can leverage it over consumers and competitors. They will fight tooth and nail to prevent consumer-protection regulation. Governments have additional agendas of their own.
It's not just a matter of personal preference. It's well known that mass surveillance is a powerful tool of oppression.
Oppression is not a theoretical idea, and not only a historical problem: government mass murder in the Philippines, the oppression of Muslims in Europe, of a large religious group in Turkey, of Tatars by Russia (in Ukraine's Crimean province), of so many people in Syria, of populations in all the oppressive countries in the world. The U.S. election could result in oppression of Muslims, Latinos, blacks and others; some U.S. cities already use "predictive policing" to identify and harass private citizens. What will happen if Muslims become an open target? And don't forget anyone who has any interaction with Muslims. Such things have been going on since the dawn of humanity and unfortunately will continue.
The idea that Google and other commercial mass surveillance will not be used for these purposes is a dangerous, irresponsible fantasy; it's lazy, head-in-the-sand thinking, akin to climate-change denial: "we haven't died yet" is the only argument. These systems are not and will not be kept out of government hands: government already has broad access, as is well known (National Security Letters, NSA spying, Yahoo's recent revelation, etc.). Laws can be made at any time giving government more access, and they will be in a climate of oppression. Many obtain illicit access, as we know, from the NSA to foreign criminals to antagonistic nation-states. And the fantasy assumes that the companies even want to deny access; inevitably, some CEO of AllYourDataCorp will support government surveillance and be prejudiced against Muslims or immigrants or blacks. Likely, at least one already is.
IMHO, while it disrupts our plans for IT and wealth, it's absurd to think otherwise.
We are in the mainframe/terminal era of AI right now. Just like with early computing we don't yet have the local resources to do AI locally, so we're using terminals accessing the cloud. The consequence is our data also lives there. It will inevitably change and a personal AI and relevant data will live in your pocket instead (for those that would prefer a limited but more private version).
Anyone with a passing acquaintance with history and the insidiousness of surveillance will not be blasé about privacy or casually trade it in for trivial conveniences that hardly merit the word AI.
You may not 'personally' need privacy or freedom at this point in your life, but to casually dismiss it out of hand and fail to consider its import for a functioning democratic society is beyond reckless. It's just one of those things you don't need until you do.
And thankfully individuals aren't in a position to trade that away unless they can write a new constitution and convince everyone to get on board.
All surveillance does is compromise your society in a fundamental way, and in this case just to pad Google's bottom line and ramp up Google's creepiness factor even more. That's a bad deal.
Here's the thing - we're starting to get into territory where we can actually add real value for people again in terms of helping them plan their day, and actually have real AI assistants, computers from Star Trek, what have you. The value here is pretty easy to understand. The problem is that they're all being pushed by advertising companies who make all their money by learning and selling every single bit of information about you.
I want a startup that provides services like this but treats your personal location, correspondence, and behaviors like tax returns and credit card numbers. If we can achieve a good measure of safety and privacy in our messaging apps, we can do it for this sort of data.
If it constantly needs data in order to calculate predictions, then it is no artificial intelligence. It is plain software, like a calculator. The only reason it is called artificial intelligence is that the label is the only reason anyone would adopt this technology.
Again, it might be neat having a computer like the one on Star Trek, but what if you oppose your government? What if you oppose anything, and suddenly your toaster burns down your house, locks you out, locks you in, reports your every move?
Look at Manning, look at Snowden, look at Assange. They opposed, and now they get terrorized by the government and by the software they once happily used. Look at how I will be treated right here by others.
I seem to remember having a lot of the same feelings during the Apple event last month. The Nike+ Apple Watch will "helpfully suggest gear" for your workouts.
Ads have become utterly pervasive, and avoiding using Google's AI isn't going to protect you from them. My Samsung "Smart" TV has ads for Hulu built right into the operating system (despite my being a Hulu subscriber at the time). Windows 10 is basically one big advertisement (at least the consumer edition).
If I have to have ads blasted in my face all the time, I'll take Google's AI-driven ones that at least stand a chance of being less annoying.
Nobody is seeing the obvious benefits here. Wouldn't it be great if we didn't have to really think anymore? Google can just tell me what to eat, what to wear, when to pee, etc. I for one welcome our AI overlords.
People will come to value their privacy more if/when they or their loved ones become victims of harassment, stalking, blackmail, or identity theft as a result of their data being abused, leaked, or stolen.
What I think is really interesting about this conversation is the total lack of actual conversation. Any and all data collection is either completely harmless (the corporate narrative) or the end of all liberty and privacy (the EFF narrative and its leech-like tech-rag clickbait headlines; sup TechCrunch, you're still the problem!).
There is no concept of even discussing that this might be a tradeoff or a shift in what is perceived as private. There is no consideration given to how we might still do these things that people want while protecting their data. There's no consideration for how people's lives are changed in different ways by this tech.
Nope. It's either a total gain or a total loss.
And that is the real problem here. People are applying their political bad habits to what should be a reasonable and sensitive discussion about the varying levels of tradeoffs we should be willing to give and what the net good we can extract from this technology.
A great example is Street View. Street View ultimately has enabled extremely detailed and powerful navigation, complete with a ton of ways to do real-time traffic detection. Most people using apps that benefit from this data would say that's a net good, and in general, as the tech evolves and traffic distributes more efficiently, urban environments see a similar positive effect.
Of course, the tradeoff is that I can scan a snapshot of your street, and if you were there playing football with your kid, walking your dog, or publicly exposing yourself, then, minus your face, I'm going to be able to see all of that.
What makes these kinds of issues even less clear is that Street View enables self-driving-car technology (we need the detailed and constantly updated nav systems for them). Self-driving-car technology has the potential to totally transform some neighborhoods, has massive potential for assisting disabled people, and can completely change the way we ship goods, thus preserving oil and energy resources for generations to come. But it also has the potential to be a new way for the upper and rich classes of the world to completely cut out service industries and further alienate the economic middle and lower classes.
Why is this meaningful? Because if we don't talk about them then we can't help shape them. If we understand the implications as a society and demand commensurate good from these private industries then it can be an incredible boon to our societies. If we don't, then one of these extremist sides will win and all options for a middle ground where we get benefits and have tradeoffs will be excluded.
The scope of Alphabet’s ambition for the Google brand is clear: it wants Google’s information organizing brain to be embedded right at the domestic center — i.e. where it’s all but impossible for consumers not to feed it with a steady stream of highly personal data.
Unless, that is, you never buy any of that junk in the first place -- because like, who needs most, if any of it, anyway? -- and keep going on with your life. Which was humming along just fine before the IoT came along, after all.
God I am loving the justified hatred in this thread :D
I agree that in many respects the current corporate push for the Internet-Of-Things is mainly a wild-west style landgrab for self-serving integration into our daily lives.
However I don't think the IoT has to be this way.
I want all my IoT devices to communicate only with my home gateway, which would run open-source drivers for each device to provide the networking functionality. Problem solved. I don't know why this approach isn't getting more focus!
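To make that concrete, here's a minimal sketch of a local-only gateway, assuming the devices speak MQTT to a broker (e.g. Mosquitto) reachable only on the LAN. The broker address and topic layout are my own illustrative assumptions, not any product's actual API:

    # Local-only IoT gateway sketch using the real paho-mqtt client library.
    # Broker address and topic names are hypothetical examples.
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # All device traffic terminates here; nothing is forwarded upstream.
        print(f"{msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("192.168.1.2", 1883)   # the gateway on the LAN, never a cloud host
    client.subscribe("home/+/telemetry")  # e.g. home/thermostat/telemetry
    client.loop_forever()

Pair something like this with a firewall rule that blocks the devices' WAN access, and the "phone home" problem mostly disappears.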
It is wrong even to ask our permission to let Google (and Apple, too) monitor the microphone all the time. If that becomes the norm, there has to be a hardware switch to disable the microphone.
To remain competitive, people will adopt new technologies: Google Assistant/cloud, self-driving cars, CRISPR. Consider what people gain and lose with each new technology, such as the ability to drive, a bio-engineered kill switch, or control of their own hardware (Windows 10).
All new technologies can be compromised. The ability to process the extreme amounts of data we are generating is already at previously unimaginable levels. Political dissidents or those who interfere with corporate interests can be identified and silenced with false evidence (pedophilia!); media control; and personally targeted DoS of finances, cloud services, etc.
This is the ability to control the world. The corporate world is disincentivized from doing anything about it, and governments don't really get it as evidenced by their hoarding of zero-days [0].
There's a war going on right now. It's terrifying, and awesome. Throw in some global climate change and our next 50 years are going to get interesting.
When the end comes I'll be that crotchety old guy who knows how to DRIVE A CAR and use a general purpose computer.
Something to think about, assuming that in the future a big chunk of things currently done with search engines and forms are done by voice command:
Ok Google, order new toilet paper -- order is routed to any ecommerce provider which outbids everyone else to fulfill the order.
Alexa, order new toilet paper -- order copies previous toilet paper order and goes to merchant with lowest advertised price that reports to have that specific product.
Siri supports a limited version of this today. "Siri, get me a ride" will show which ride apps are available (say, Uber, Lyft and Curb, if you have those installed)
It's interesting you assume that Google will route you to the lowest bidder. When I search for "new toilet paper" on Google right now, the top six results are advertisements. Why would they want to anger their customers and allow you to ignore them?
Imagine the storage implications. I mean, they already have a hard time storing every click we make with the mouse; now they will have to store every noise and breath we make as well. I see a data store the size of Canada.
Also, how do they plan to make money beyond the initial cost of the gadget? Can they push ads while you're driving? That would be too intrusive. Or is this supposed to be based on a monthly payment, or a tax? Google for government! Wall-E might be needed to clean up the mess after them.
I didn't know that Sting said in 1983 that his song is really a nasty song about surveillance; at least they have an anthem for promotion purposes.
http://www.songfacts.com/detail.php?id=548
Now I really don't think that personal assistants are going to be a success. They do descriptive modelling based on what you do, and there is no way to evaluate whether the suggestions are any good. Without such an evaluation they can't do reinforcement learning. Also, they might suck in too much data, which would make it harder to make meaningful suggestions.
I'm always confused by articles bemoaning the AI and tech revolutions, written in magazines that expound new tech revolutions.
I understand Apple and the EFF are staunchly against merging product databases involving the same user's data, but for me this is an essential feature of the Google ecosystem. I can ask for traffic and have directions appear on my phone while driving, play movies on the TV that I am looking at instead of my phone, get audible alerts for meetings while at home or work, and turn the lights on and off in whichever place I am.
I don't think they are misleading people; the mute button pretty strongly implies the duality that you can't hear and un-hear things after the fact. In addition, they don't hide the fact that you are talking to a computer at a company by obfuscating it with some quasi-futuristically named caricature.
As is often the case with these articles, "always listening" is far more misleading: an embedded keyword processor is listening for keywords, and only if they match the phrase "ok google" is audio sent to Google's servers. Otherwise it just sits there, sharing nothing.
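In rough sketch form, the loop that describes looks something like the code below. Every function here is a hypothetical stand-in (this is not Google's actual code or any real API); the point is only that the network call sits behind the keyword match:

    # Hedged sketch of on-device wake-word gating; all helpers are hypothetical stubs.

    def capture_audio_frame():
        """Hypothetical: read a short frame from the mic into a ring buffer."""
        ...

    def wake_word_detected(frame):
        """Hypothetical: tiny local model that only answers yes/no."""
        ...

    def record_utterance():
        """Hypothetical: record until silence after the wake word fires."""
        ...

    def send_to_server(audio):
        """The only network call; never reached for non-matching audio."""
        ...

    while True:
        frame = capture_audio_frame()
        if wake_word_detected(frame):
            send_to_server(record_utterance())
        # Non-matching frames are dropped on-device and never leave the room.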
I'll note that my iPhone can do all those things but without my data leaving my own hardware (or at least not unencrypted). You don't _have_ to ship all your data offsite for the functions you enjoy. It's just that Google would rather examine all your data in-house, presumably for their own benefit and not for ours.
With faster multi-core CPUs, much more RAM and SSD, smartphones are probably powerful enough to run a privacy respecting AI assistant locally: watch your web access, phone access, location, and do all processing locally with no data leakage. Ideally it would be open source, or at least from a trusted company that made money only from selling the app, and not off of our personal data.
So I see both sides of this privacy/AI debate going on and am wondering, what is the solution? Is it technically feasible to create a useful future AI without compromising privacy? Because if not, I fear for our future. While I agree with what everyone on the privacy side is saying, I believe in the long run the consumer value AI will win, leading us down this path.
Ok, so no personal assistants and high-tech homes for anyone. Done. There, you happy now? Because if Google isn't being fed all that data, it can't provide those things. You either have it or you don't.
Edit: I've made this point before, but your data is a currency. Spend it wisely (or even not at all). It's up to you.
It's fascinating to think about how exciting this technology is -- passive monitoring by helpful AI that can drastically increase convenience and efficiency.
But we've seen that Google is happy to turn over massive amounts of customer data to government without a warrant and without alerting customers to the practice, which makes the technology seem ominous.
First the GPS, then the microphone, then the camera, accelerometers, 3D touch sensors, etc. Gait, affect, and all sorts of factors will be able to predict criminal behavior before it happens.
Let's hope the next generation of tech giants will take customer privacy and freedom seriously and avoid the dark patterns and privacy violations of the current era.
Only now, when it's likely too late, can we actually get a glimpse of the sort of Orwellian dystopia that so many have warned about in decades past.
All data generated by a user is encrypted and stored in the cloud, with the decryption keys on the user's device. This way, the service provider (e.g. Google) can't read your data. The major advantage of this approach is that the user is in complete control of the data. The drawback is that service providers and AI systems will be starved of the data that enables targeted ads/recommendations.
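As a concrete illustration, here is a minimal sketch of that model using the real Python `cryptography` library; `upload_to_cloud` is a hypothetical stand-in for whatever sync API the provider exposes:

    # Client-side encryption sketch: the key never leaves the device.
    from cryptography.fernet import Fernet

    def upload_to_cloud(blob):
        """Hypothetical stand-in: the provider stores only ciphertext."""
        ...

    key = Fernet.generate_key()   # generated and stored on the device only
    box = Fernet(key)

    ciphertext = box.encrypt(b"calendar entries, location history, ...")
    upload_to_cloud(ciphertext)   # the provider sees only random-looking bytes

    # Later, on a device that holds the key:
    plaintext = box.decrypt(ciphertext)

The trade-off mentioned above falls out directly: anything the provider can't decrypt, it also can't mine for recommendations.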
A good middle ground might be to offer users an option: 1) give us your data, or 2) pay up and we will not collect any data.
Thinking deeply about the state of the internet, I think we have to move towards a model where users pay for services they use if privacy is a concern. As it stands, a lot of services offer free services in exchange for our data which is monetized through ads.
What I'd like to see is some good open source software that can compete with Google Assistant, Siri, and Alexa. It looks like there are some promising projects, but nothing turn-key yet. I'd like to be able to simply apt-get a package, and have voice recognition on my box.
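We're not at apt-get-and-done yet, but pieces exist. For example (assuming a Linux box with a working microphone, and treating the package choice as illustrative rather than a recommendation), the Python SpeechRecognition package can drive CMU PocketSphinx fully offline:

    # Offline speech-to-text sketch; assumed setup:
    #   pip install SpeechRecognition pocketsphinx pyaudio
    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.Microphone() as source:   # PyAudio provides microphone access
        print("Say something...")
        audio = r.listen(source)

    # recognize_sphinx() runs locally; no audio leaves the machine.
    print(r.recognize_sphinx(audio))

Accuracy lags the cloud services, which is exactly the gap a good open-source project would need to close.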
I think it's right for people in the know to be concerned. The direction has all kinds of possible disastrous consequences. But it also has lots of amazing dual-use possibilities for making our lives more fluid and technology more magical. You can't deny either side of that.
Trying to take some big stand against it I don't believe will work. Look at all those who took a stand against the 2nd Iraq war -- they were drowned out. And now everyone thinks the opposite. Culture/society always pushes a particular direction until there's a very big disastrous reason to think otherwise. Until something clearly really really bad happens this is the track we're on, like it or not.
Reading this brings to mind Kahlil Gibran's book The Prophet. The section "On Houses" reads:
...
Or have you only comfort, and the lust for comfort, that stealthy thing that enters the house a guest, and becomes a host, and then a master?
Ay, and it becomes a tamer, and with hook and scourge makes puppets of your larger desires.
Though its hands are silken, its heart is of iron.
It lulls you to sleep only to stand by your bed and jeer at the dignity of the flesh.
It makes mock of your sound senses, and lays them in thistledown like fragile vessels.
Verily the lust for comfort murders the passion of the soul, and then walks grinning in the funeral.
I was really against getting updates about where I usually travel and other notifications based on constant tracking. But after a while I've come to really like some of the tracking and notifications. Now I love it when Google reminds me to leave for an appointment and gives me directions; it really saves time.
I do hate being tracked, but I have slowly started to like the convenience of it. With all my information on Facebook, LinkedIn, Instagram, and everything else, my privacy has really gone down anyway. If my Nexus 5X can save me time, I will sacrifice some privacy.
> We are excited about building a personal Google for everyone, everywhere
Not a problem if it ran on an appliance in my home, disconnected from all the other appliances in other people's homes. That would be a truly personal Google.
The trade-off here is technological advancement in AI for loss of privacy. Everyone can have their own opinion on this, but in reality consumers are the best source for collecting data. The great thing, in the grand scheme of it all, is that if you don't like the privacy you're giving up, you don't have to use it. There's always going to be the lesser-known alternative that doesn't track any of your data. I don't see why articles like this arise, as it's clear as day that without tracking and analytics, AI won't improve.
From the article: So the actual price for building a “personal Google for everyone, everywhere” would in fact be zero privacy for everyone, everywhere.
That has no basis. It is completely possible to do what they are doing by keeping everyone's individual privacy intact. And if I go by Google's privacy policy, that's exactly what they are doing. And I think it's in their best interest to keep it that way, because the day it comes out in public that our privacy is not safe with them, everyone will stop feeding them more data.
"Zero privacy" here means privacy from Google. Based on your reading of their ToS and your beliefs about their best interest, you may trust them to do no wrong and you may therefore be comfortable with Google knowing everything about your life, but not everyone feels the same.
Yes, sure, I know a lot of people who aren't comfortable with sharing everything with Google, and I can understand that. Unfortunately, most of them are sharing their private data with some third party if not Google, one that wouldn't be any more careful about their privacy if it could mine the data as Google can, and that is probably technically less capable of preventing external breaches. In the end, it comes down to trust in one third party or another, as people aren't managing everything themselves.
There's also a difference between having data here and there among many different third parties or sending everything to a single entity that can then collect in one place more data about you than what you yourself have access to.
> It is completely possible to do what they are doing by keeping everyone's individual privacy intact. And if I go by Google's ToS, that's exactly what they are doing.
Could you give examples of how that can be achieved and how the ToS goes about stating that they are doing this?
We don't know for sure whether they are violating their privacy policies. But as I said, if they are doing it and it ever comes out, by any means, it will be the dead end of their business. So it's in their best interest not to do that and only to build on users' trust. So yes, there's a certain level of trust I put in them, just like the trust I put in big food companies to respect the ingredients list and maintain a certain level of quality and hygiene. Any of them can break my trust, but it will be detrimental for them if it comes out.
> if they are doing it ... it will be the dead end of their business.
I wish I could share your confidence about that. It seems to me we've seen corporate data leaks, deceptive practices, huge hack attacks, etc., and all that happens is the corporations get a temporary PR black eye -- and then they spring back into action as powerfully as before.
In the case of Google, if it came out that, say, Russian hackers had breached a bunch of Gmail accounts, how many customers would just up and walk away from Google? And what company could even come close to filling the void and replacing Google for all of us consumers?
Like I say, I wanna believe what you believe. But it just doesn't seem like that's what happens in the real marketplace.
There are two things here. One is Google secretively going against their Privacy Policy and selling my data/sharing it with other previously undisclosed parties. The other is a data breach in which Google is attacked by some very skilled hackers. If the former ever happens, I'll delete all my data (whatever they allow) from their servers, and leave them forever. For the latter though, I trust them even more than most other companies to protect my data because of their technical know-how and experience. But a data breach is always a possibility with any online company, and that will be my bad luck - so I'm taking a calculated risk here.
This article is right to raise concerns about privacy. But it's just repeating and rehashing the same things. Google has been devouring data for a long time.
I've long since given up hope that I can achieve any sort of privacy. Even were I to do my utmost I will still be on the periphery of others using tech, visible through their actions. The only thing I really want is for the content of my messages to be private. This can be achieved through end-to-end encryption. I push everyone around me to install and use Signal instead of Messenger or Google Talk.
I would love to have personal devices that collect my private data and apply AI for my personal benefit, but only if all the data stays inside the device under my exclusive supervision. As soon as this data is sent to some cloud service belonging to a company in the business of generating wealth and power, I totally lose my appetite.
I'm considering purchasing a Home. I already have an Alexa, which I really like, but based on what I know of Google's data and AI (and their service APIs, which I assume/hope will be extended to Home), I can't imagine it not being significantly better than Alexa.
That said, I agree with the OP's takeaway: people should be asking questions. I mean, people should have been asking these questions long ago, even as just search users questioning how Google manages to return such geospatially relevant results. But most people don't even stop to think about it, as that kind of thing is just taken for granted as the thing computers just do.
Maybe Google's data and AI in the form of a physical, listening bot (I don't know many people who use OK Google on their phones) will be the thing that clues people in. I'm mostly comfortable with Google's role in my life (though not comfortable enough to switch to Android just yet), but I'm aware of what it knows about me. If AI is to have a trusted role in our lives and society, people in general need to at least reach the awareness that the OP evinces, if not her skepticism.
One thing that I think is lost in a lot of the comments here is that, to a large extent, privacy is experienced, not factual. That is, in many cases, the breach of privacy is the act of mentioning something that should be private, not whether or not the system (or the person) knows that thing. This is something we tend to intuitively understand in our human relationships, but one that somehow seems to be forgotten in the design of these systems (or, at least, the conversations about them). We need good ways to tell the Google Assistant that something is private (or for it to figure it out for itself) -- even if it still possesses the underlying data.
(There are, of course, situations in which the actual existence or not of specific data is what matters, but I think those are less relevant to the success of something like Google Assistant than the perception of privacy -- and that perception is important, regardless of the underlying data.)
Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don’t really have any rights left. - Marshall McLuhan
I seriously wonder if this article would have a different tone if all the same products and vision were presented at an Apple Keynote with Apple branded devices.
Would it be different? My gut says this article would have had a different title.
In Waze, they went ahead and dropped the "while using the app" option for GPS privacy. It has become rude, actually, so now I turn off location services entirely instead of micro-managing the unmanageable.
I will never put an always on Internet connected microphone run by an advertising company in my house. I don't care if it literally spits out cash and cures cancer.
This is the predominant reason I use an iPhone. I don't like a lot of Apple's policies (mostly closed source, although that's where stock android has been gradually headed too) or their silly politics (Getting rid of the gun emoji? Really?), and I would rather go with Google on those grounds, but Apple's incentives with regard to privacy actually align with mine. In particular, they are a product company, not an advertising company. I appreciate their use of high-quality security technology in their phones.
What makes Google an advertising company and Apple a product company, since both make products and sell advertising?
Obviously advertising is more important to Google than it is to Apple but at what point does Apple become an advertising company and Google a product company? Is there a revenue threshold? A branding threshold?
Edit: I'm not saying Google isn't an advertising company. I'm asking what makes Google (but not Apple) an advertising company.
Apple shut down iAd. That article isn't about Apple's ad business; certainly some developers have in-app ads, but now they're moving away from iAd.
They're looking at search ads for apps in the app store, but that's pretty limited to just the store.
Is Apple really in the ad business, or do they have an ad division because that's just something that every ecosystem operator is expected to provide these days?
Probably if 99% of their revenue comes from advertising you can call them an advertising company, and if 99% of their revenue comes from hardware you can call them a hardware company.
Is Rooster Teeth (one of my favorite creator groups) an advertising company? They make a large percentage of their revenue from selling advertising but they also make video content.
The constraint is going to kill Google and Apple, because a future competitor is going to manage to offer the service without requiring the data in raw form; an example would be agent-based computation in, say, 15 years' time. Because of that, it will become the standard. Google and Apple will be stuck because their stakeholders include parties that require the raw form even if Google/Apple did not. That opens up a good line of attack on a previously impenetrable business model.
The legal implications of visiting somebody's house should be considered. Did I sign anything saying Google or Apple could use my data? No. This opens the door for class-action lawsuits, in some US states at least. Unless they are smart enough to dodge this somehow, perhaps with a contextual local filter for new voices and a permission request made of them, but I doubt it; they probably don't have the incentive to look at the ugly side.
You appear to assume that companies don't change in 15 years. Considering the past 15 years, in which Apple was almost dead and Microsoft could never have been imagined as a company contributing to and open-sourcing many things, I'm not sure that premise holds.
Apple's approach to this is already different from Google's and has (arguably) hampered them a bit, but that might pay off long term and give them an opportunity to evolve, change, and adapt in other ways too.
I see no particular reason either company couldn't change tactics, regardless of shareholders. Granted, evolving with technology is not necessarily easy or without its challenges, but if staying with your current model is going to lose you customers, and with them data and therefore income, why would you stick with it? If no one cares enough to stop sending them their data in the face of better, more privacy-sensible alternatives, then it seems their model would still work.
> I see no particular reason either company couldn't change their tactics, regardless of shareholders.
I was talking about the government putting restrictions on the kinds of technology Google and Apple can invest in because some innovations do not suit their interests.
> Did I sign anything saying Google or Apple could use my data? No. This opens the door for class-action lawsuits, in some US states at least.
I'm pretty sure that if this was going to happen, it'd already have happened. Next time you walk into any store, take a look around for discreet black plastic bubbles.
What about the potential risks? What if thieves/criminals get hold of that data? "Oh, he goes to the pool every day from 2-3 and lives alone with no home alarms."
I'd be amused at the concept of a hacker who also likes to do breaking and entering. Once I arrived home and discovered all of my privacy-invading electronics stolen, I'd probably pull out my phone and say "Ok Google, call the cops."
Ideally, the police would also have access to the data.
Can you imagine going to burglarize somebody only to find the cops waiting at the house because they determined there was a high probability of that house being robbed? Maybe they'll even wait inside the house, so as soon as you kick the door down, you find yourself in a living room full of cops, all with body cameras recording your break-in and guns trained at the door.
And speaking of cameras... even without the cops being physically present, if you have a tight enough network of cameras, all a victim would have to do is report being robbed and the police can simply trace the burglar's movements across the camera network all the way back to the front door of their hideout.
So because Google makes phones and hardware (as it has for quite some time already), there's somehow an even greater threat now to our privacy? I don't buy it.
I've come to believe that there is nothing that humans can do to resist the privacy encroachments of our machine overlords. Don't blame Google - it's a universal inevitability.
I'd be completely fine with sharing most of my data with Google, really. The personal and the private interlock a lot, and if Google wants to give me a personal experience, it's going to have to use some of my private experience.