I'm sorry, but that just sounds like blind fear-mongering. What you're saying is vague and doesn't really mean much.
It's like saying we shouldn't use prescription glasses, or medication, or cars, because it's not "us".
Humans invent all these tools and systems to improve and optimize our lives. Make our vision better. Make our health better. Make us move around faster. In the case of AI, help us perform certain things more efficiently.
Imagine it wasn't actually a computer. Imagine it was a personal secretary you had that gave you the EXACT same information. Gave you your flight information, turned on the light when you asked, gave you the weather and your schedule. Would you think that was wrong? That this isn't "you"? No, it's just optimizing your life, but now available to a wider population rather than just rich people.
"Imagine it was a personal secretary you had that gave you the EXACT same information. Gave you your flight information, turned on the light when you asked, gave you the weather and your schedule."
In your metaphor, you are implicitly paying the secretary, so the secretary is incentivized to maintain your interests.
How much have you paid Google for its free services?
Your metaphor is inapplicable. You don't have a secretary telling you these things; you have a salesman trying to sell you things, and the salesman is getting smarter every day while you aren't. Not the same thing at all.
That's why I called them a salesman. They sell things. Their interests are not simply your own.
It seems to be a theme here today... a company can't serve both advertisers and customers. In the end, one of them has to win, and given the monetary flows, it's not even remotely a contest which it will be. https://news.ycombinator.com/item?id=12644507
They don't sell things. They forward you towards people who do sell things which you may be interested in. You're free to ignore it, and if you're not interested in what they're showing you, that means they failed at their job.
It's funny how bad a stigma ads have gotten, but at their core, if you think about it, they're not necessarily a bad thing. Think of a friend recommending you a restaurant, a new game to play, or a movie to go watch. In that case you'll be super interested, but now, if this AI, which probably knows your taste better than your friend does, suggests something to you, you are instantly turned off and annoyed.
I think the root cause of this is that there are so many mediocre ads out there that they ruin it for everyone. Your mind just blindly blocks all ads now.
> Google is selling you, to advertisers, quite literally.
No, that would be slavery, which is illegal.
Google is selling advertising space to advertisers; the space is on various channels that you provide in exchange for Google services.
> When you aren't paying anything for something of value, YOU are the product.
No, when you aren't paying money for something of value, you are probably paying something else of value for it; often, something that the person with whom you are trading is then selling for money, making you a supplier of an input to the good or service they are selling for money.
What he's saying is that this is not "humanity inventing something to make life better". It's a company inventing something to make money.
And it's not a simple product like glasses where you pay with money and then they improve your vision. It's a product which goes far beyond your understanding and for which you don't pay money.
Google isn't interested in making your life better. What they are interested in is getting you to believe that they want to make your life better and to then recommend going to that bar, because the bar owner has given Google money to advertise for the bar.
Yes, you might actually like that bar, but Google isn't going to recommend going there at intervals which are beneficial to you. They'd rather have you go there a few too many times, because that's what makes them money. It isn't improving your life that makes them money. Their AI will always work against you whenever it can without you noticing.
Imagine that you were trying to quit smoking and your electronic secretary kept updating you on the cheapest place to find your favorite cigarettes? With no way to tell it not to do that.
So your issue is your secretary doing its job poorly?
First, there is a way to tell it to not do that. With Google Now, you simply tap the menu and say "No more notification like this". With the assistant, you will probably be able to ask directly.
Second, let's be honest, humans fail pretty often too, so that's just a weak argument.
Lastly, I think it's unfair to dismiss a new technology just because it could maybe fail, without having even tried it.
How about if the system is working exceptionally well, you're a depressed person, and the next ad you see is auctioned off between a therapist, a pharma company, and a noose supply store in the 100ms it takes to render your MyFaceGram profile?
The awful success cases are far more interesting than the awful failure cases.
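(To make the "auctioned off ... in the 100ms" mechanism concrete, here's a minimal, purely illustrative sketch of how such a real-time bid might be resolved. The bidder names, bid amounts, and the second-price rule are assumptions for illustration, not how any real ad exchange necessarily works.)

    # Toy sketch: pick which advertiser gets this one impression while the page renders.
    # Bidders and amounts are made up; the second-price rule is an assumed, common choice.
    bids = {
        "therapist_network": 1.40,   # dollars offered for this impression
        "pharma_brand": 2.10,
        "rope_supplier": 0.95,
    }

    # Highest bidder wins; under a second-price rule they pay the runner-up's bid.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price_paid = ranked[1][1]

    print(f"serve ad from {winner}, charge ${price_paid:.2f}")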
I have no problem with ads for therapists or pharma companies competing for advertising space in front of me because they have algorithmically determined that I am a qualified lead. That actually sounds great from a mental health perspective.
I think that algorithms, and AI specifically, are perfectly able to learn what distinguishes those. Maybe even better than someone who might not be in their best state of mind.
Because the whole of Google's ad business stands on people wanting to click on the ads shown, and buy the products offered through them. That's why they spend resources on detecting misleading or fraudulent ads, which by your reasoning they wouldn't care about as long as they paid. PR is very important for this business to be sustainable: If the goal was for every user to click through one ad, and then never again, that might not even pay one engineer's salary.
What's misleading or fraudulent about those ads? Maybe you mean "morally reprehensible," in which case I ask where you draw the line between the morally reprehensible (auctioning off the method of suicide to a depressed person) and the morally questionable (say, auctioning off the final bankrupting car purchase to a financially irresponsible person)?
Detecting misleading and fraudulent ads is just an example of something they wouldn't spend resources on if your reasoning of "short-term money is the only thing they care about" actually held.
There's not only the "morally reprehensible" metric ("Don't be evil"); there's also the "absolute PR catastrophe" metric that printing such an ad for a rope would mean.
> So your issue is your secretary doing its job poorly?
I think the real issue is the casual deception which you just fell for: It isn't "your" electronic secretary, and the thing it just did might actually be a "good job" from the perspective of those who control it.
I'm not saying we shouldn't use AIs. We should, however, think about how we use them.
To build on your example, what are the dangers of having a personal secretary on the payroll of anyone but you?
What I am expecting from this is a super devious filter bubble - because that's how you make money. Google's old slogan "Don't be evil" is long gone. "For a greater good" might be more on point.
> In the case of AI, help us perform certain things more efficiently.
What does the Google Assistant help me do more efficiently? In all honesty, I can't figure it out. I don't need or want a secretary, and I can do written planning for myself.
I need less paperwork and fewer web forms and identities, but the Google Assistant only promises more of that crap.
I'm never buying one. It's a sacrifice of privacy for zero to marginal gains in convenience.
Ignoring your derisive tone, the statement "most people get through their daily lives just fine without it" applies to every new technology. Yet here we are, typing away on the internet.