I think AI researchers and engineers tend to get too carried away with decision making, when the more valuable service is the communication of refined knowledge, which, if I'm not mistaken, is exactly your point. The problem has nothing to do with "how can a machine guess the right answer" and everything to do with "how can a machine refine all the options based on the intentions expressed so far".
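To make that concrete, here's a toy sketch in Python (all names and structure are mine, purely illustrative): instead of guessing one right answer, the machine only narrows the pool of options as intentions get expressed.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Option:
        name: str
        tags: frozenset[str]

    @dataclass
    class Conversation:
        """Accumulates intentions expressed so far, e.g. 'cheap', 'thai'."""
        intentions: set[str] = field(default_factory=set)

        def express(self, intention: str) -> None:
            self.intentions.add(intention)

        def refine(self, options: list[Option]) -> list[Option]:
            # Keep every option still consistent with the stated intentions;
            # never collapse to a single "right answer" on the user's behalf.
            return [o for o in options if self.intentions <= o.tags]

    places = [
        Option("Thai Garden", frozenset({"thai", "cheap", "nearby"})),
        Option("Le Bernardin", frozenset({"french", "fancy"})),
        Option("Noodle Cart", frozenset({"thai", "cheap"})),
    ]

    convo = Conversation()
    convo.express("thai")
    convo.express("cheap")
    print([o.name for o in convo.refine(places)])  # both Thai options survive

The point of the sketch: nothing here decides. Every expressed intention just shrinks the space, and the conversation can stop whenever the human has seen enough.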
Anecdotally, if we ask a real person "where is a good place to eat", the chance we'd go there without more information is slim. And if we don't even trust people, trusting Siri will take a while.
What we're really doing with these questions is making our hunger known, and starting a conversation. We actually don't care that much about other people's thoughts, and we may not even have anything in mind yet as far as where to eat. We do care about how people feel if they are someone we care about, but the thinking part we love to do ourselves.
So to offer a service that "thinks" is rather misguided, and may even constitute a disservice. We already rejected the talking paperclip in 1996 [0]. Its failure wasn't its intelligence, but the value proposition itself. To have a paperclip presume to know better and tell you what to do was not tempting. Its failure was its existence.
Is it a glitch in the Matrix or is their pitch for Cortana identical?
> What is Cortana? Cortana is your clever new personal assistant.[1]
If I ask someone I know what's a good place to eat, the odds are actually quite high that I'll give it a try. I wouldn't have asked otherwise.
The issue here is one of trust, which is built on an individualized relationship over time. When I ask someone I know for a recommendation, I'm doing so because I already have a sense of their judgment. That's the real key here: build a history of reliable judgment. That's the goal.
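A toy sketch of what "building a history of reliable judgment" could mean in code (entirely hypothetical, just to pin the idea down): track how past recommendations turned out, per topic, and let trust be nothing more than that track record.

    from collections import defaultdict

    class TrackRecord:
        """Toy trust model: reliability is the fraction of past
        recommendations in a topic that actually worked out."""

        def __init__(self):
            self.outcomes = defaultdict(list)  # topic -> [True/False, ...]

        def record(self, topic: str, worked_out: bool) -> None:
            self.outcomes[topic].append(worked_out)

        def reliability(self, topic: str) -> float:
            history = self.outcomes[topic]
            if not history:
                return 0.0  # no history yet, no earned trust
            return sum(history) / len(history)

    trust = TrackRecord()
    trust.record("restaurants", True)
    trust.record("restaurants", True)
    trust.record("restaurants", False)
    print(trust.reliability("restaurants"))  # 0.666..., earned over time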
Right. This is certainly one path, the path most seem to be on, and exactly the one that needs to be challenged. The key intuition is that a judgment, which is a decision, is not the answer to a logical problem. A decision entails a will, and when our personal will is overridden by an animated paperclip, we close said program. Decision != Answer.
People don't necessarily want decisions made for them; rather, they want assistance in making their own, or better yet, reasons to justify the decisions they've already tentatively made. "Reliable judgment" is the complete opposite of "a resource of intelligence". Certainly all of these assistants feature a little of both, but I keep sensing the urge towards the former. Worse yet, a decision is often treated as an abstraction that somehow justifies hiding everything that went into it, even though there is immense value in actually being told why. People have entire conversations over why to eat at some place as part of the process of sharing the decision to go there.
Even when used only as a resource, if only these robots would stop trying to read our minds or insisting on telling us what to do. Maybe a handful of people will accept a robot's choice, but everyone loves more information.
Maybe we shouldn't be looking for some secret sauce that enables robots to make better generalized, rational decisions than humans. Maybe we should instead be building robots capable of helping humans make their own personal, irrational decisions better?
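For instance, a sketch of such an assistant (hypothetical API, my own invention, not any shipping product): it returns every viable option with the reasons attached, ranked but never pre-decided, so the "why" stays part of the conversation.

    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        option: str
        reasons: list[str]  # the "why" stays visible, not hidden behind a decision

    def suggest(options: dict[str, set[str]], preferences: set[str]) -> list[Suggestion]:
        suggestions = []
        for name, attrs in options.items():
            reasons = sorted(f"you said you wanted {p}" for p in preferences & attrs)
            if reasons:
                suggestions.append(Suggestion(name, reasons))
        # Rank by how many stated preferences each option satisfies,
        # but return all of them: the human makes the final call.
        return sorted(suggestions, key=lambda s: len(s.reasons), reverse=True)

    picks = suggest(
        {"Thai Garden": {"thai", "cheap", "patio"}, "Noodle Cart": {"thai", "cheap"}},
        preferences={"thai", "cheap", "patio"},
    )
    for s in picks:
        print(s.option, "-", "; ".join(s.reasons))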
> Is it a glitch in the Matrix or is their pitch for Cortana identical?
No, it's not a glitch that Siri, Google Now, Cortana, and now M have essentially the same pitch -- they are direct competitors intended to attach people to their respective platforms.
--
[0] https://en.wikipedia.org/wiki/Office_Assistant
[1] http://windows.microsoft.com/en-us/windows-10/getstarted-wha...