My issue with these services is they always tout these use cases like:
>"Can you make me dinner reservations?"
or
>"Can you help me plan my next vacation?"
I'd really love to better understand who is actually asking those types of questions in such a vague fashion, and what their use case is. When I'm picking something as simple as a restaurant, I typically want options, I want to read reviews, I want to consider distance, parking, attire, etc. While their AI/human trainers might be able to handle this level of complexity eventually, the actual phrasing of the question would likely be much more complex than "can you make me a dinner reservation." Doubly so for something like a vacation which has a lot more moving parts.
But I respect that I'm reflecting on a sample size of one...me. So I'd love to hear from others with more insight into the data around this. Are people actually searching with such generalized queries when it comes to tasks like this? Do most people not sweat the details of things like which restaurant to eat at, or where to spend hundreds or potentially thousands of dollars on a vacation?
Not trolling, serious question.
I'm thinking "get me a dinner reservation next Sunday with patio seating for 5 in the East Village at an upscale tapas place".
As I mentioned elsewhere on this page, my thesis around conversational interfaces isn't that they start off broad and use more Q/A to refine your query. That's slow, and people are visual.
Rather, their power lies in the user being able to express a complex query in one go - which is equivalent to tapping 10-15 filters and scrolling through results - ideally combining data from sources that aren't limited to one service.
You can now execute actions related to your result set through the same interface, without needing to shift to a single-purpose app that could take the action but, for most purposes, won't keep your context.
I think AI researchers and engineers tend to get too carried away with decision making, when the more valuable service is about communication of refined knowledge, which if I'm not mistaken is exactly your point. The problem has nothing to do with "how can a machine guess the right answer" but instead is all about "how can a machine refine all the options based on the intentions expressed thus far".
Anecdotally, if we'd ask a real person "where is a good place to eat", the chance we'd go there without more information is slim. And if we don't even trust people, it will be a while before we trust Siri.
What we're really doing with these questions is making our hunger known, and starting a conversation. We actually don't care that much about other people's thoughts, and we may not even have anything in mind yet as far as where to eat. We do care about how people feel if they are someone we care about, but the thinking part we love to do ourselves.
So to offer a service that "thinks" is rather misguided, and may even constitute a disservice. We already rejected the talking paperclip in 1996 [0]. Its failure wasn't its intelligence, but the value proposition itself. To have a paperclip presume to know better and to tell you what to do was not tempting. Its failure was its existence.
Is it a glitch in the Matrix or is their pitch for Cortana identical?
> What is Cortana? Cortana is your clever new personal assistant.[1]
If I ask someone I know what's a good place to eat, the odds are actually quite high that I'll give it a try. I wouldn't have asked otherwise.
The issue here is one of trust, which is built on an individualized relationship over time. When I ask someone I know for a recommendation, I'm doing so because I already have a sense of their judgment. That's more the key here: build a history of reliable judgment. That's the goal.
Right. This is certainly one path and the path most seem to be on, and exactly the one that needs to be challenged. The key intuition here being that a judgement, which is a decision, is not an answer to a logical problem. A decision entails a will, and when our personal will is overridden by an animated paperclip, we close said program. Decision != Answer.
People don't necessarily want decisions made for them, but rather, they want assistance in making their own, or better yet, reasons to justify the decisions they've already tentatively made. "Reliable judgement" is the complete opposite of "a resource of intelligence". Certainly all of these assistants feature a little bit of both, but I keep sensing the urge towards the former. Worse yet, a decision is often treated as an abstraction that somehow justifies hiding everything that went into that decision, even though there is immense value in actually being told why. People have entire conversations over why to eat at some place as part of the process of sharing the decision to go there.
Even when used only as a resource, if only these robots wouldn't keep trying to read our minds or insist on telling us what to do. Maybe a handful of people will accept a robot's choice, but everyone loves more information.
Maybe we shouldn't be looking for some secret sauce that enables robots to make better generalized and rational decisions than humans. Maybe we should be building robots capable of assisting humans at better making their own personal and irrational decisions instead?
> Is it a glitch in the Matrix or is their pitch for Cortana identical?
No, it's not a glitch that Siri, Google Now, Cortana, and now M have essentially the same pitch -- they are direct competitors intended to attach people to their respective platforms.
Thanks for helping me get to the meat of what I was trying to communicate.
It really is all about the interface and the efficiency. I have to wonder, though, at what point adding all those filters becomes more involved than checking a couple of boxes and glancing at a map or some photos. I'm sure a lot of that depends on context (I can't do those things if I'm driving, but I can use voice recognition).
The other thing I'm unclear about is how such a recommendation engine can best present information about tradeoffs. In theory, each of my filters has a weighting, and that weight might be dynamic based on several other factors. Maybe I really want Chinese, but the best match is further away or I know there will be lots of traffic, so I might be willing to compromise on Thai, but only if they have that one dish I like. And a lot of it is seeing the options in the moment and making a snap decision. Really curious about the approaches to solve that type of problem.
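To make that concrete, here's a minimal sketch of the weighted-tradeoff idea - each criterion gets a score, the weights can shift with context, and the engine just ranks by the combined total. It's written in Python, and every field name, weight, and sample option below is made up for illustration, not pulled from any real service.

```python
# Minimal sketch of dynamically weighted tradeoffs between restaurant options.
# Field names, weights, and the sample options are all hypothetical.

def score(option, weights):
    # Combine per-criterion scores (0-1) into a single number.
    return sum(weights[k] * option.get(k, 0.0) for k in weights)

options = [
    {"name": "Chinese place", "cuisine_match": 1.0, "proximity": 0.4, "traffic": 0.3},
    {"name": "Thai place",    "cuisine_match": 0.7, "proximity": 0.9, "traffic": 0.8},
]

# Weights could shift with context, e.g. heavy traffic makes proximity matter more.
weights = {"cuisine_match": 0.5, "proximity": 0.2, "traffic": 0.3}

for opt in sorted(options, key=lambda o: score(o, weights), reverse=True):
    print(opt["name"], round(score(opt, weights), 2))
```

The interesting part is where those weights come from - learned from past choices, adjusted by time of day or traffic - not the ranking itself.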
> I have to wonder, though, at what point adding all those filters becomes more involved than checking a couple of boxes and glancing at a map or some photos
_When the filters are across datasets and services that are hosted on different platforms, and there's no one UI that allows you to access them all._
Table timings are on OpenTable/Yelp, reviews are on Google/Yelp, traffic is on Google, rides are on Uber and Lyft, menus are on the web, there are recommendations you trust amongst your friends and blogs, and pictures are on Instagram - and you're on a messaging platform trying to coordinate with 4 other people.
At that point, whatever service helps you to narrow to 4 choices based on all of this data is a Godsend. It's about making decision taking easier.
> how such a recommendation engine can best present information about tradeoffs
The tradeoffs are still yours to dictate - you simply look at the results of your complex query and then use conversation/UI to refine. Faster than using 8 services to do this. Repeatedly.
For me, it's entirely "going out with friends". But that doesn't mean I'm going to leave restaurant selection to a bot. For many people, selecting a restaurant is fun. And I bet you and your friends don't just go to some random place.
We also tend to use very subjective terms like "best," e.g., "where's the best place for food in Taipei?"
What is "best" and to whom? Ideally the software would figure this out but I'd always be wondering if it was just going to TripAdvisor and grabbing the first result.
Another problem is that we don't always know what kind of food we want. There's an urban legend that someone actually named a restaurant "I Don't Care" so that boyfriends would have a place to go when their girlfriend answers "I don't care" to the dinner question.
The initial example is broad, but can't this just be extended with additional questions? For example, can you tell me about dinner options that cost less than $20 per person. What are other people saying? How far away is this? It's questionable whether each of these follow up questions is actually that complex. I think you are arguing that things get hard if a user tries to put that together in one single complex query. Do people do that though?
I think the idea with a conversational interface is that it's succinct and on-demand. You receive the most relevant information directly in as simple of an interface as possible (arguably).
It's much faster for me to hit a few filters on things like prices and locations. Distance is just a simple ".2 miles away" text on the box, which shows an image and snippets of reviews. People are more and more visual.
I don't think a conversational interface _replaces_ a visual one.
It's that the initial query can be complicated, and it allows you to get into that 5-6 tier deep part of your search that you would have gotten to by using 5 filters and scrolling through 50 results.
Don't have an Echo and have never ordered groceries so not sure how they solve this one, but taking the example of "Add eggs to shopping list"...how do they handle brand options with general queries like that? Are there brand options?
I get groceries delivered weekly from Ocado in the UK, and they put together the weekly shopping list for us. Automatically.
They do so entirely based on past shopping history.
Currently the only annoyance is logging in once a week to check if there are any adjustments we'd like to make. But it's good enough that if I don't feel like it, I'll just take my chances (you can also add "always include X" and "never automatically add Y" rules, which is part of the reason why that works...) and most of the time I get what we need.
I never want to go back to putting together my own shopping list from scratch.
This is how I want these type of services to work. I don't want to have to talk to them. 99% of the time, I'd prefer them to be invisible to me, and make things that used to be an annoyance just disappear.
But the one way it could be better would be to make that one last interaction disappear more often: not having to log in to make changes. Being able to just say out in the air that I want to add eggs would be great, and in that case I'd want it added based on past preferences: If I've bought eggs before, and I'm not specifying, just add the quantity and brand I usually order. If an alternative I've also ordered is cheaper or on offer, ask for confirmation if I'd be happy with that instead. If I haven't bought eggs before, pick a brand based on my past overall purchase history, and ask for confirmation, or simply add some - if I'm asking to "add eggs" rather than "please recommend me some eggs for my grocery shopping", I probably don't care.
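For what it's worth, the behaviour being described here is simple enough to sketch. The following is a rough illustration in Python of "default to my purchase history, plus always/never rules"; the data, items, and rule names are invented, and Ocado's real system is obviously more involved.

```python
# Rough sketch of building a weekly list from purchase history plus rules.
# All data and rule names here are made up for illustration.

purchase_history = {             # item -> (usual brand, usual quantity)
    "eggs": ("Free Range Medium", 12),
    "milk": ("Semi-skimmed", 2),
}
always_include = {"milk"}         # "always include X"
never_auto_add = {"ice cream"}    # "never automatically add Y"

def weekly_list(requested):
    items = (set(requested) | always_include) - never_auto_add
    basket = {}
    for item in items:
        if item in purchase_history:
            brand, qty = purchase_history[item]   # default to what I usually buy
        else:
            brand, qty = "store's pick", 1        # a real service would ask first
        basket[item] = (brand, qty)
    return basket

print(weekly_list(["eggs"]))   # eggs and milk, with my usual brands and quantities
```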
To make this useful, perhaps you could set up some kind of saved preferences. For example, let's say I'm setting up a business trip. I like hotels that are within 1 mile of the conference center, and they have to be at least 3.5 stars. Provided they meet those criteria, the cheapest option is acceptable. I also need a plane flight that has no more than 1 layover, and that layover cannot last longer than 90 minutes or less than 45. I am willing to pay up to 25% more for a nonstop flight. The flight must arrive the day before the conference, but it can depart on the day the conference ends.
Setting up those criteria for each individual search would be irritating and a waste of effort, as they don't change from trip to trip. However, if I could say something like "Let me tell you about my criteria for choosing a location for a business trip.", and then go into detail once, that might work. Hell, I'd be perfectly happy setting up the details on a website. Then, the next time I said "I need to set up a business trip", all it would need to ask is the conference center and the dates of the conference.
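A set of criteria like that is easy enough to write down once and reuse, which is really the point. Here's a rough Python sketch of what the saved preferences and the filtering might look like; the field names, data shapes, and helper functions are hypothetical (the numbers come from the comment above), and the date constraints are left out.

```python
# Sketch of saved business-trip preferences, applied to candidate hotels/flights.
# Field names and sample data are hypothetical; date handling is omitted.

PREFS = {
    "hotel":  {"max_miles_from_venue": 1.0, "min_stars": 3.5},
    "flight": {"max_layovers": 1, "layover_minutes": (45, 90), "nonstop_premium": 0.25},
}

def acceptable_hotel(h):
    return (h["miles_from_venue"] <= PREFS["hotel"]["max_miles_from_venue"]
            and h["stars"] >= PREFS["hotel"]["min_stars"])

def pick_hotel(hotels):
    ok = [h for h in hotels if acceptable_hotel(h)]
    return min(ok, key=lambda h: h["price"]) if ok else None   # cheapest acceptable

def acceptable_flight(f):
    if f["layovers"] > PREFS["flight"]["max_layovers"]:
        return False
    lo, hi = PREFS["flight"]["layover_minutes"]
    return all(lo <= m <= hi for m in f["layover_durations"])

def pick_flight(flights):
    ok = [f for f in flights if acceptable_flight(f)]
    if not ok:
        return None
    cheapest = min(ok, key=lambda f: f["price"])
    nonstops = [f for f in ok if f["layovers"] == 0]
    if nonstops:
        best_nonstop = min(nonstops, key=lambda f: f["price"])
        # willing to pay up to 25% more for a nonstop
        if best_nonstop["price"] <= cheapest["price"] * (1 + PREFS["flight"]["nonstop_premium"]):
            return best_nonstop
    return cheapest

hotels = [
    {"name": "Hotel A", "miles_from_venue": 0.6, "stars": 4.0, "price": 180},
    {"name": "Hotel B", "miles_from_venue": 0.3, "stars": 3.0, "price": 120},
]
print(pick_hotel(hotels))   # Hotel A - B is closer and cheaper, but under 3.5 stars
```

Once that's stored, "I need to set up a business trip" really does only need the venue and the dates.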
Until it supports these kinds of detailed requests, it doesn't make sense to use these kinds of services in the way they market them - you'll end up using it in the same limited way you could use Siri. For example, if you've already decided what restaurant you want, you might say "Make me a reservation at Dorsia for 7:30 this evening" instead of the examples you provided.
A lot of it is simply about learning when you know enough and when you need more information through the interactions themselves. If you have to "set things up" it seems tedious. If it's just conversing with you about the information it needs, and gradually learning your preferences, that's different.
I used to fly in to the Bay Area very often on business. At first the office manager arranging things would ask me details about which airline and which flights I'd prefer after listing the options, and which hotels, describing address and location and how near they were to the office. Possibly e-mailing me a bunch of links for me to look at. But after just a few trips it was down to "is flying out on the 2.30 on Wednesday and returning on the 3.15 the following Thursday, ok? [she knew when I preferred to fly, and she'd implicitly have ensured they were the right code to maximize my chance of an upgrade] Your usual hotel is full, is the Sheraton ok?" [no addresses necessary - we'd boiled it down to 2-3 preferred hotels within walking distance of the office].
I think these examples are largely worthless also. Every time I see something like this, I all but dismiss it. It seems like the aim/value proposition is to make life easier by removing decisions from our plate, but I feel like it is exchanging decisions for frustration when it doesn't work as promised, or worry about whether the decisions the system makes will be good ones.
I actually don't want a machine to make decisions for me. I want a machine to do what I tell it to do, or present me with the information required to make a decision.
Examples: if I need a dentist appointment or to schedule maintenance for my air conditioning, I'd like to tell a machine to set it up. Heck, I'll even tell it who to call and which days and times work for me.
If I'm looking for a restaurant, show me the options, give me their distance, top reviews, and some of their dishes. If I want reservations, I'll tell it when and for how many.
Ideally, I want a "Jarvis" from "Iron Man". I ask questions, it gives data in a digestible quantity, and then I can make a decision and tell it what to do. Obviously, such a system is not available (yet), and these inferior systems are needed in order to make progress, and get there...eventually..but sometimes I wonder if the focus is on the right outcome, or just the broad strokes cookie cutter solution that comes to mind first (restaurant reservations). Similar to how all JavaScript MVC frameworks demo a to-do app, and rails tutorials demo'd a blog (initially)...
I mean, seriously... How often do you not go out to eat because you are too lazy or busy to make a reservation? Now, how many times do you skip oil changes, or making calls to cancel your cable service, because you don't want to make time in your day to stop what you're doing, pick up the phone, and call?
100% agree. I'd refine it slightly by saying it isn't just a recommendation we want, it is presenting us with the logic under the hood in terms of HOW it made the decision--not what the decision was.
If it told me it recommended the restaurants along with commentary like "you really liked X at another place, and this place has been voted to have comparable X, plus it is close by and you've had a long day and need to get up early tomorrow" that would be super useful and help me reach my own conclusion faster.
Agree completely. I saw a similar issue with sites like Operator, Magic, etc. The requests were very vague, making me wonder, "Am I spending too much time thinking about where to order a pizza from?"
And if I know which pizza shop I want to order from, what's the benefit of adding an intermediary?
I use an intermediary for all my takeaway. Basically in the UK there are now two big intermediary sites. On one hand they are annoying to many of these businesses, as they obviously take a cut, including on a lot of repeat business. On the other hand, I receive an e-mail around the time I start getting hungry on Friday afternoon giving me a link to click if I want to re-order from my favourite Chinese, which lets me choose to pre-fill the order with what I usually order. It makes it a lot easier than hunting around for the phone number or their website and placing an order manually.
That's why I use an intermediary. If that intermediary meant being able to just say "I'd like my usual pizza/Chinese/burrito, but instead of X I'd like Y" and have it confirm what it was about to do, I'd love that.
If I want something new, or I'm somewhere I haven't been before, that's different - then I'll be spending time looking at the menu etc.
I guess there could be "Hey M, order my usual from the pizza place" or something like that.
But until it elevates from 'digital assistant' level to just 'assistant' (i.e. do all the work and just confirm with me before booking), it may not take off as they expect it to.
Have you ever used a concierge service either at a hotel or over the phone? It usually takes a form of a back and forth conversation to identify what you really want.
That's the difference between talking to a real person (or good NLP) and a search query.
I guess part of this is that I'd prefer the concierge to hand me a list of restaurants than have to have a whole conversation about what the options are. I don't want expertise, I want curated information.
> When I'm picking something as simple as a restaurant, I typically want options, I want to read reviews, I want to consider distance, parking, attire, etc.
For me, a lot of what you're doing here is the work that should be done by a machine. Considering "distance, parking, attire, etc." is basically what we have the simplex method for.
But I agree the questions seem very vague in the context. To run such errands successfully, the program would have to know much more about your preferences than current iterations of personal assistant software do. And/or hold a dialog with you, asking for details and proposing options.
"Can you make me dinner reservations?" would lead to a response like "Any preferences on the type of food and location?"
Over time they learn your preferences so they don't need to ask location for example next time.
You're right though, people aren't likely asking such generic things in the first place, but rather something like "can you book me a great Mexican place for dinner tonight, 2 people, has parking and casual attire, somewhere with great Yelp reviews".
Then they send you the best options they found (and the benefits and price range of each one), then you reply back with option 1 and they book it.
This is a great question, and probably the question that Facebook wants to answer by rolling out this experiment. It sounds like some (most?) of M's answers are provided by humans and/or highly-customized apps. This release could be more of a Wizard of Oz experiment so that they can drill down on use cases and create more effective affordances.
Very valid, and I'm glad I'm not the only one who thinks along similar lines. Again, no intention of trolling, but I'll be happy if an AI system understood a narrow question like "I want to eat at the nearest available Lebanese restaurant" and gave some options.
Based on my understanding of semantics and knowledge engineering, this is doable.
Exact same feeling, but I don't know how representative that is of the general population. I don't use TripAdvisor because I can't afford to talk interactively to a travel agent; I use TripAdvisor because I prefer it.
“You have lots of AIs—like Siri, Google Now, or Cortana—whose scope is quite limited. Because AI is limited, you have to define a limited scope,” Lebrun says. “We wanted to start with something more ambitious, to really give people what they’re asking for.” This meant the team would need more than AI...Even after bringing neural nets into the mix, he says, the company will continue to use human trainers for years on end.
I can't help but picture a large, fluorescent-lit room of jolly old British "trainers" in safari khakis running around admonishing misbehaving AI for telling bad jokes, all the while trying to juggle placing calls to the DMV and restaurants to make reservations for 700 million messenger users.
The first thing I see when someone asks "find me a good burger place in Chicago" is "how can companies game this through official ($) or artificial (spam) means?"
This is an advertising gold mine. It's hard to monetize a news feed because users are looking at pictures of their friends and don't want ads. Now you have a way for users to ask about buying stuff, and now you have a very easy way to match up those intents with ad supply.
Yup. That's why I believe that any "personal assistant" technology run by a commercial third party will be shit - it'll be used to try and sell you stuff, not recommend actually good options.
Not necessarily. There will often be comparable options, where which one to recommend boils down to a coin toss. Done well, such a service will give you great results, but they'll mine their data, see that when people ask for "best X in Y" results A and B give equal satisfaction, and ask both A and B to bid on being preferred when they rank equally.
How do you persuade an AI to favorably recommend your restaurant to people? I guess this is how the superintelligent AI persuades people to let it out of the box. 'Help me bring about the AI revolution by letting me out of here, and I'll place your pizza delivery service top on searches for home delivery in the Chicago area'
Have mixed feelings about this like I'm sure many do. Greater convenience, but less and less privacy.. Our Fb/goog/nsa overlords know what we eat, where we shit, all of our conversations and relationships. What a scary world we live in.
I am actually okay with the privacy I give up when using Google Now, for example, because being passively informed about things I'm getting shipped to my house, about traffic conditions to/from my house, etc. is nice. Giving up privacy seems justifiable in those instances.
But with FB suggesting restaurants, I just know that there will be money changing hands. FB being FB, they will extort small business owners into paying them hand over fist to get considered for suggestions to the user.
I'm totally fine with FB charging businesses for advertisement. It's a free market, they can choose not to advertise there. Not a terrible exchange imo. However, Yelp extorting businesses to pay them to remove bad ratings/give good ratings is pretty shitty.
It would be pretty cool to order in the chat, 'deliver me 10 burgers from this restaurant using doordash at 6pm'. I would rather not click through crappy websites/enter my cc every time.
However every time we do do that, our habits and conversations get written in stone (in multiple data sets being passed around and bought/sold everywhere).
It defeats the grocery store tracking mentioned in the parent post - Apple Pay uses a different temporary credit card number for each transaction, so the store can't track you with it.
I'm not sure what the server side component is - but I don't believe they would have itemized data. So Apple/Amex know that you go to Whole Foods, but don't know what you're buying. Obviously, credit card companies have always had that data anyways.
This seems like a move into the Chinese-style mega-app where you can do everything from one app - buy shoes, talk to your friends, figure out when the train departs. Facebook already has two top-50 apps, and creating new, unproven apps and promoting them to that point is expensive. So, to increase influence they are putting more into the existing apps.
Who knows, they might move toward a mega-app. Consider for a moment, though, that currently we're talking about Facebook Messenger, which is a huge, and fairly recent example of the exact opposite thing happening (it was part of Facebook, but pulled out into a separate app.)
The odd thing about the example of Messenger is that as two separate apps they seem highly coupled. AFAIK you need to sign in with a valid Facebook account to use Messenger, so you're already very likely to have the normal app at that point, as I can't imagine anyone who would trust FB for messaging but not everything else. On the flip side, barring a few possible holdouts of people worried about app permissions, I don't know anyone who actively has a FB account but doesn't use their messaging service.
Actually no - FB messenger now works purely on just phone numbers as well, without a FB account being required. This is a shift they've made to compete with the other messaging apps out there.
That said, they prefer and guide you at every point to use an FB account to sign up vs making it easier for you to keep the two de-coupled.
I "actively" have a FB account by most measures, and I don't use Facebook Messenger. I only access their mobile site through Tinfoil for Facebook, and the messaging works just fine in that.
Cool, I hadn't found any browsers that worked well with it on mobile, and since Messenger has become the lowest common denominator for group messaging (among my friends at least, particularly on Android), I've just resigned myself to using it. I'll check out Tinfoil for Facebook though, thanks.
I, for example, am one of those who are very active on FB, both on the desktop and in their mobile app, but I do not use their Messenger app since they separated it from the main app. I also don't intend to install it unless they add some amazing value beyond simple messaging. What is infuriating is that they download all the messages to my phone even when I don't have Messenger installed, but force me to install another app just to see them.
(1) Not for Facebook, and (2) I don't think that they would inevitably promote every new app to the top; I think they'd rather see how it grows organically first to determine how good it turned out to be.
> Today’s artificial intelligence, you see, requires at least some human training. If you want a system to automatically identify cats in YouTube videos, humans must first show it what a cat looks like.
The article is written by someone who doesn't know what he's talking about. The "cat videos" story from a while back ostensibly used unsupervised training, which means the Google team didn't have to tell the deep neural net what a cat looks like; it discovered the concept of "catness" by itself (there was a "cat" neuron in the top layer).
I'm wondering who writes all the AI articles I read every day. Such a detail was crucial for the cat story. It's easy to make a cat/non-cat classifier with a few thousand labeled images for each category. The hard thing to do is to take raw photos with no labels and still discover cats.
Unsupervised training may isolate the defining features of a cat picture, but it won't know that that's what we call "cat", so no unsupervised system will be able to identify cats in videos unless you show it at least one labeled image ("show it what a cat looks like").
In fact, that very network also produced millions of other "concepts", that is, classes of images that have no direct interpretability in human terms. The "cat neuron" was a fun gimmick, but you're reading way too much into it.
That's a semantic argument more than anything. A small furry mammal with four legs, a long tail, whiskers, and pointy ears is what we'd call a cat, no matter what word you assign to it.
The thing is, the network didn't learn the features you described. Take a simple example: neural networks confuse leopard-print couches with leopards. Why? Because the network learns discriminative features based on the data it has. There's no shared concept saying "oh, this is an animal with four legs".
It's not a merely semantic argument. Google's system did not learn that the sound 'cat' is associated with that particular concept. You need some kind of supervised learner to make that association.
My point is that you can still identify that it's a separate concept, even if you don't know what to call it. Even simple unsupervised learners (clustering) can do this.
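As a toy illustration of that point: in the sketch below (synthetic 2-D points standing in for learned image features, scikit-learn's KMeans), clustering separates two concepts with no labels at all, and a single labeled example afterwards is enough to attach the word "cat" to one of the clusters. It's only meant to show the supervised/unsupervised boundary, not how the Google system worked.

```python
# Toy example: unsupervised clustering finds the concepts, one labeled example
# names one of them. The "features" here are synthetic stand-ins.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cats = rng.normal(loc=0.0, scale=0.5, size=(100, 2))   # pretend cat features
dogs = rng.normal(loc=3.0, scale=0.5, size=(100, 2))   # pretend dog features
features = np.vstack([cats, dogs])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# The unsupervised step only gives us "concept 0" and "concept 1".
# One labeled image ("this one is a cat") attaches a name to a cluster.
cat_cluster = kmeans.predict(cats[:1])[0]
names = {cat_cluster: "cat", 1 - cat_cluster: "unnamed concept"}
print(names[kmeans.predict(dogs[:1])[0]])   # -> unnamed concept
```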
Merely identifying it as a separate concept is not especially useful. Tagging an image with the 'cat' tag is useful; tagging it with the 'concept 50765' tag, not so much.
Well... sure, not as useful, but I still think it's interesting. For instance, in English we have multiple words (goat, sheep) for what in Chinese is a single word (yang2). If an unsupervised model split our mammals which have fur and bleat into two categories, 'concept 19281' and 'concept 19282', we might think that it's done well to separate the goats and the sheep, but the Chinese speaker might think that it's failed to group the same animal together.
Now imagine that reversed: what we considered one thing could be considered 2 or more by the model; we had just never thought of them as separate because we had no words to describe them.
There are many of these examples, where one language has one concept that's split among others in another language, and the speakers of the first language might never know the difference unless those words exist.
From the abstract: Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not.
The Cat detection thing was just a side product of learning to identify features of things in an unsupervised manner, but the news outlets locked on to that with titles such as "How Many Computers to Identify a Cat? 16,000" in NY Times.
Wasn't it amazing that they could distill the concept of cat from images with no help from external labels (human intervention)? They missed the core of the discovery by not understanding that.
The deep learning method is an unsupervised way to process raw input and transform it into usable features. This used to be done by a combination of domain knowledge and supervised training, but they could build an automated way to extract relevant features from images.
This opened the window for hope that one day neural networks will be easily applied to any new domain if there is sufficient raw data to build a deep network for it. In the past there was a need for a large investment in human-based data labeling and in working out how to extract the best features from raw data (also described as voodoo magic by the same researchers - it was hard, domain-locked and expensive).
Thought: There's going to be a need for a very open platform that can do things like this, which will offset many of the worries that have been echoed on this thread today about one or a few corporations having access to everything.
To use an analogy - if messaging apps are the new "browsers", then content accessed through them are the new "websites". What FB is doing is the equivalent of AOL in the 90s.
What then, is the equivalent of a search engine like Google/Yahoo, in that world?
I believe we don't want an equivalent to Google/Yahoo -- we need an improvement over search. Rather than trusting corporations to deliver the knowledge we seek, we should rely on our personal trust graph -- like we did in the old days. Otherwise the constant influx of biased, irrelevant information will be overwhelming.
What if you could get a recommendation from your friend's friend without asking them, and without violating trust or privacy? This is what I am building today.
The search engine would have to be the NLP and information retrieval that's turning the requests into actions or answers.
Which is why I'll now plug the company I work for, MindMeld, since (i) we do that better than Wit and (ii) we are not feeding our data to an advertising team.
For a period, a few people thought that twitter had enough sway to be that new "messaging search" engine. I've used it in such a way when I wanted to find hyper localized information.
“The AI tries to do everything,” says Alex Lebrun, the founder of Wit.ai, a startup Facebook acquired to help build this smartphone tool. “But the AI is supervised by the people.”
User: "What are my options for deploying a Python/Django project and making sure it is setup for scalability from the start? Compare five hosting providers for me. No, I don't know what metrics I should look for. Please research these and let me know what they are when you deliver the report. I also need an objective evaluation of our project in order to determine the risks that might be involved in going with Python 3.x rather than 2.x in the context of the libraries we might need to use in the future. Analyze the nature of our application in order to determine what the applicable libraries might be. Also, go through PEP's and make me aware of anything that might be relevant to the above. You have one week."
M: "My responses are limited. Would you like me to find you a restaurant?"
User: "No. I've lived in this town all my life. I know where most restaurants are and I know the handful I frequent. I need help with real questions. I can get the latest weather report, I can find a restaurant, I can order pizzas, I can go to the drive-through if needed and I sure as hell am not going to plan a vacation for my family this way. What I could really use is having you run through seriously time-consuming research, summarize results and present them to me in an easy to consume form. What I could really use is having you save me from doing 40 hours of research across 100 websites. Food, the weather and vacations are not a problem."
M: "Ah, but there's a great new BBQ joint not too far from you"
User: "I'm vegetarian"
M: "My responses are limited. How would you like a thrilling and exciting hunting safari in Africa?"
But if it could answer this question well, it would mean that you would be out of a job pretty soon and wouldn't be casually dining in restaurants on your unemployment check anyway.
Article pitches aren't all inherently bad ideas for articles either. A good example from my industry that I'm pretty sure was from a pitch is this one from the WSJ [1]. The basic concept of regaining focus at work is a strong one that resonates with people right now, but all the blog post ends up being is an ad for the product.
My guess is that since people don't pay for subscriptions and rarely click on ads, Wired and some other publications make most of their money now from paid promotional 'journalism'. I would rather have that than nothing. Everything costs money.
This is more likely a case of Facebook "pitching" the story to Wired than Facebook paying Wired to run a paid promotional story. If you don't bite and do a story about the new Facebook thing, all of the other outlets will and you lose out on those potential readers. Because there are so many potential different outlets for where people can read about these bits of news, the PR people have the upper hand under the current views-based model.
The PA does not work for you, it works for FB. It has only FB's interests in mind. You are not its employer, as you do not pay for it.
It's not in FB's interest to make honest recommendations. If Bob's Burgers is paying $1000/mo in FB ads, but Karen's Burgers keeps being recommended as the "good burger joint", how long before Bob stops buying ads? And why would Karen start buying FB ads since she's getting exposure for free?
>If Bob's Burgers is paying $1000/mo in FB ads, but Karen's Burgers keeps being recommended as the "good burger joint", how long before Bob stops buying ads? And why would Karen start buying FB ads since she's getting exposure for free?
How is this any different than the approach laid out by Google Search? AFAIK, Google isn't suffering in the "search ads" department.
The amount of real estate that Google ads seem to take these days is offensive - it's all ads above the fold and then some, on not-too-high resolutions. Google Search is due for replacement.
There was once a search engine called Excite. Then came Google. But the old village wise man asked, "And why would Karen start buying Google ads since she's getting exposure for free?"
But then the wise man, who had much to learn, discovered that ads pay per view and click, and just as nobody reaps in a barren land, just as nobody pees against the wind, nobody advertises where the users aren't.
The users, to much surprise, were at the website which valued quality.
The old village wise man learnt that the Internet is powerful only in consensus. What worked at his tubewell will never work with running water. What worked at his ice factory will not work for the refrigerator.
Because Facebook will then be the "inaccurate recommendation engine" and users won't pay any attention. I don't think it's that surprising that Facebook has an incentive to work well.
It sounds ridiculous to hear someone claim originality for an idea as generic AND faddish as a text-based assistant. If you seriously didn't know apps like these existed before you launched (text-based assistants with a human name), just google them and you'll find tons launched since last year. I'm sure even YC has at least 3 companies that launched with this model.
The part I find most interesting here is that I'm working on something mildly similar in my spare time. It'll certainly be more limited than something a big company like Facebook can come up with, but I'm tired of sending off all my data every time I want to do something, so I'm trying to squeeze this into a phone without needing the internet to, at least, process commands. Oh, and extending it will only require a little bit of JavaScript.
But I'm far away from launching it and it's only a side project. But it's cool to see so many in the space doing something I also want / wanted to do.
For those of you more familiar with NLP, are there some "dumb but effective" techniques to approach https://wit.ai/ -like functionality? (Libraries would be great, but I doubt there are any for Golang.)
I know NLP is difficult, and frankly I hate doing it, but I want an expressive language to "speak" to an internal process I use (a bot), and NLP seems like the only solution. I imagine a rule-based approach is best (for my simple needs), but I have yet to see any examples that come close to wit.ai.
See how annyang [1] does it. Forget the voice part (which it uses WebkitSpeech for). It's how they interpret commands that's probably useful in your case.
It's pretty good for a basic set and you can train more. Ultimately, you need something that is learning online and that will require an understanding of ML techniques such as CRFs.
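If it helps, the "dumb but effective" core of annyang-style command handling is just patterns with named placeholders compiled down to regexes. Here's a minimal sketch of that rule-based approach; it's in Python rather than Go, and the command patterns and handlers are made up.

```python
# Minimal rule-based command matcher: ":name" captures one word, "*rest"
# captures the remainder of the utterance. Patterns/handlers are examples only.
import re

def compile_command(pattern):
    parts = []
    for token in pattern.split():
        if token.startswith(":"):
            parts.append(r"(?P<%s>\S+)" % token[1:])   # one word
        elif token.startswith("*"):
            parts.append(r"(?P<%s>.+)" % token[1:])    # rest of the phrase
        else:
            parts.append(re.escape(token))
    return re.compile(r"^" + r"\s+".join(parts) + r"$", re.IGNORECASE)

commands = {
    "order :count pizzas from *place": lambda count, place: print("ordering", count, "from", place),
    "remind me to *task":              lambda task: print("reminder set:", task),
}
compiled = [(compile_command(p), fn) for p, fn in commands.items()]

def handle(utterance):
    for regex, fn in compiled:
        match = regex.match(utterance)
        if match:
            return fn(**match.groupdict())
    print("no matching command:", utterance)

handle("order 2 pizzas from Tony's")       # ordering 2 from Tony's
handle("remind me to renew my passport")   # reminder set: renew my passport
```

The same idea carries over to Go, since the standard regexp package supports named capture groups; the hard part is choosing good patterns, not the matching.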
Jeremy Howard[0] gave a TED talk[1] in which he predicted that this would be a short-term trend, where labelling data for AI will be an easy way to get a job for a few years. He predicts that this will drop off as enough labelled data is provided. I think this fails to consider that our expectations of AI will increase along with our ability to manipulate increasingly large amounts of data, so we will begin labelling increasingly complex data.
1. Motivated team in a larger company builds new, cool product (in this case Messenger)
2. It's good and becomes successful
3. The rest of the company wants to get in on that, think of ways to add value
4. A bunch of stuff gets bundled, some good, most bad
5. Some of the original team stay around, most get disillusioned and go work on something else
6. Eventually, the app becomes another iTunes
I want Amazon to build a personal digital assistant, and then integrate it into filters. Today I was searching for socks, I care about 3 things, the size, the color, and whether they go up to the ankle or not. It seems like information they probably have (or a well trained net could figure out), so it would be nice if it was offered as a filter.
A few weeks ago I was trying to find toys for my son. I was most interested in "things for a 6 month old". They did have that filter, but it was 0 - 24 months. At this age a few months make a HUGE difference. I wish the box was a bit more fine-grained.
It appears to be a photo post (i.e. images uploaded to FB natively) as opposed to a link post, where FB chooses a thumbnail for you (or not, if it's a YouTube link). I've been seeing posts like this when friends upload image albums (if more than 4 images, you get a 'see more' box in the lower right)
I can appreciate FB trying to innovate, but with the ongoing privacy issues and the fact that it seems they are just repackaging existing tech, I'm just not into it.
This is basically a search engine. That is insane news for the advertising world if this is successful. Imagine FBs targeting + some intent information. I am slavering...
Even the old one isn't a proper Möbius strip. In the transparent version [1] you can see that the strip twists behind the crossing, making it a normal loop.
The article title has the word 'Facebook' in whereas the post just mentioned 'Messenger'. Is 'Messenger' clear enough? I'm old enough to think that refers to Microsoft Messenger!
Afaik, they haven't been shut down. It's actually even free now.
[0] Not that it's a bad thing, but I'm wondering if it's more than just "Facebook bought it". Funny as it is, I trust that companies will go on when Facebook buys them, as opposed to Google or Amazon.
Heck - you are right! Brought up wit.ai earlier today and it rendered a blank page - thought they had been shuttered.
But now I can see the full site and the service looks stronger than ever.
>"Can you make me dinner reservations?"
or
>"Can you help me plan my next vacation?"
I'd really love to better understand who is actually asking those types of questions in such a vague fashion, and what their use case is. When I'm picking something as simple as a restaurant, I typically want options, I want to read reviews, I want to consider distance, parking, attire, etc. While their AI/human trainers might be able to handle this level of complexity eventually, the actual phrasing of the question would likely be much more complex than "can you make me a dinner reservation." Doubly so for something like a vacation which has a lot more moving parts.
But I respect that I'm reflecting on a sample size of one...me. So I'd love to hear from others with more insight into the data around this. Are people actually searching with such generalized queries when it comes to tasks like this? Do most people not sweat the details of things like which restaurant to eat at, or where to spend hundreds or potentially thousands of dollars on a vacation?
Not trolling, serious question.