This scares me because all my interactions with Google support have been so incredibly bad that it makes me nervous to train a computer on that corpus of bad data.
Taking bad support and turning it into inaccurate bad support is a fantastically bad idea.
When I worked in tech support, I can't tell you how many times I would say "Great! (realizing that it's not great for them) that we have isolated the issue."
Quoc Le is neither an author of the word2vec paper nor in the committer list of the word2vec software (there are only two committers, mikolov and sutskever). Not sure where you got that inference from.
> At Google, he helped develop a system that essentially maps words into vectors. And according to Google, this work would later feed into a system developed largely by a researcher named Tomas Mikolov. Called Word2Vec, the system determines how different words on the web are related, and Google is now using this as a means of strengthening its “knowledge graph”.
Just because Wired says something doesn't make it true. Whatever other accomplishments he may have, Le was not a contributor to word2vec, and he did not originate the idea of mapping words to vectors.
I think the poster got mixed up. Quoc Le was the first author on "Distributed Representations of Sentences and Documents", aka paragraph2vec, so he has been involved in the x2vec scene, just not word2vec. And I would argue that word2vec did not originate the idea either; it just popularized it and showed the power of such an approach (plus an awesome demo!).
The original paper on word embeddings, "A Neural Probabilistic Language Model" by Bengio et al. (2001)[1], is the earliest I am aware of.
I'm quite puzzled how it knows "two plus two" is four but "ten minus two" is "seventy-two". I wonder how it parsed one correctly but failed drastically to parse the other.
It's not parsing the phrase into the tokens "10 - 2" and evaluating that. Rather, a corpus of data was used to train the neural net and, for whatever reason, the answer "seventy-two" was the highest probability output. In all likelihood, it doesn't even register that this is a math question.
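To make that concrete, here's a toy sketch of the decoding loop: a greedy argmax over a learned next-token distribution. The ToyModel class and its probabilities are invented for illustration; the point is that nothing in the loop parses or evaluates arithmetic, it just picks whichever token the training corpus made most likely.

```python
import numpy as np

VOCAB = ["<eos>", "four", "seventy-two", "eight"]

class ToyModel:
    """Stand-in for a trained seq2seq net; the distributions are made up."""
    def encode(self, prompt):
        return " ".join(prompt)  # pretend the encoder summarizes the prompt

    def step(self, generated, state):
        if generated:  # after one token, the likeliest next token is <eos>
            return np.array([1.0, 0.0, 0.0, 0.0]), state
        if state == "two plus two":
            # "two plus two equals four" is a common sentence in the corpus
            return np.array([0.0, 0.9, 0.05, 0.05]), state
        # "ten minus two equals eight" is rare, so the argmax is junk
        return np.array([0.0, 0.1, 0.7, 0.2]), state

def decode_greedy(model, prompt, max_len=5):
    reply, state = [], model.encode(prompt)
    for _ in range(max_len):
        probs, state = model.step(reply, state)
        token = VOCAB[int(np.argmax(probs))]  # take the most probable token
        if token == "<eos>":
            break
        reply.append(token)
    return " ".join(reply)

print(decode_greedy(ToyModel(), ["two", "plus", "two"]))   # four
print(decode_greedy(ToyModel(), ["ten", "minus", "two"]))  # seventy-two
```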
It did not see enough examples to learn a general addition algorithm. "Two plus two equals four" is a common sentence, "Ten minus two equals eight" isn't.
It's likely that no one learns addition just by listening to examples; you're taught to execute a specific algorithm, and it takes many people years to master it. In fact, ask a toddler what "ten minus two" is and you might get a similarly nonsensical answer.
I'm working through maths problems with my 6-year-old son on a daily basis now, and it's oh-so-hard not to facepalm in front of him. So many things that seem incredibly obvious are hard-learned lessons. One of the fascinating things about helping him practice is that it's helping me recognise the sheer number of simple little shortcuts I apply to even something as basic as addition (and it's incredibly hard to hold back and give him the chance to practice the basics before I show him the "cheats").
I hadn't thought of the "common sentence" explanation. Whoops.
I was thinking it did semi-intelligent parsing: it recognized "two" and "plus", created the formula 2+2, then solved it to get 4 ("four").
So I had assumed it would do the same with "ten", "minus", and "two": create 10-2, solve it, and respond with 8 ("eight").
If you use Google Search and search for "ten minus two", it's intelligent enough to parse the query and give you a calculator with 10-2=8 already entered. I had assumed a similar parsing approach might have been used for their conversation bot where general mathematical knowledge is concerned.
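The kind of rule-based parsing I was imagining would look something like this; purely my guess at an approach, not Google's actual implementation:

```python
# Map number words and operator words to values, then evaluate directly.
WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}
OPS = {"plus": lambda a, b: a + b, "minus": lambda a, b: a - b}

def parse_and_eval(phrase):
    a, op, b = phrase.lower().split()
    return OPS[op](WORDS[a], WORDS[b])

print(parse_and_eval("two plus two"))   # 4
print(parse_and_eval("ten minus two"))  # 8
```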
I don't think it could ever learn a general addition algorithm, because the approach they are using doesn't allow for a multi-step process. For example, to add 28 + 37, a simple approach would be:
1. First, look at the ones column and recall that 8 + 7 is 15.
2. Then take the value from the tens column of the answer and add it to the tens columns of the arguments.
But given the command "add 28 + 37", it can only execute a single step. It has no way of guessing at and then storing the values for the intermediate command (take the 1 from the 15 and add it to the 2 from the 28 and the 3 from the 37) to be processed in the next round. So it can't ever generalize addition.
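Spelled out as code, the intermediate state is the carry, which has to survive between columns; a single-step input-to-output mapping has nowhere to put it. A minimal sketch:

```python
def add_with_carry(a, b):
    """Grade-school addition: right-to-left columns, carrying as we go."""
    da, db = [int(d) for d in str(a)], [int(d) for d in str(b)]
    total, carry = [], 0
    while da or db or carry:
        column = (da.pop() if da else 0) + (db.pop() if db else 0) + carry
        carry, digit = divmod(column, 10)  # 8 + 7 = 15 -> write 5, carry 1
        total.append(str(digit))
    return int("".join(reversed(total)))

print(add_with_carry(28, 37))  # 65
```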
Whilst you are correct that the number of allowed computation steps is limited, LSTMs have still been used with success on such tasks: they have been used to perform addition on two 9-digit numbers with 99% accuracy^[1]. The paper even shows that simple Python programs can be evaluated with some degree of accuracy.
Remember that even if the number of computation steps is limited, there can be multiple layers ([1] uses 2), and each unit can perform a computation ([1] uses 400 cells per layer). It only needs to learn how to be an ALU. The work was in fact done by one of the people who established the sequence-to-sequence framework, Sutskever, and is referenced in the Neural Conversational Model paper.
^ They use "teacher forcing" for evaluation, which inflates the accuracy to some degree, but it's still quite impressive.
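If you want to play with this, the usual toy version is a character-level seq2seq model that reads "28+37" and writes "65". A rough Keras sketch below; the hyperparameters are toy values I picked, not the 2-layer, 400-cell setup from [1], and there's no teacher forcing:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

CHARS = "0123456789+ "                      # space is the pad character
C2I = {c: i for i, c in enumerate(CHARS)}
MAXLEN, ANSLEN = 7, 4                       # "999+999" -> "1998"

def one_hot(s, length):
    s = s.ljust(length)
    x = np.zeros((length, len(CHARS)), dtype="float32")
    x[np.arange(length), [C2I[c] for c in s]] = 1.0
    return x

# Random addition problems as (question, answer) one-hot sequences.
X, Y = [], []
for _ in range(20000):
    a, b = np.random.randint(0, 1000, size=2)
    X.append(one_hot(f"{a}+{b}", MAXLEN))
    Y.append(one_hot(str(a + b), ANSLEN))
X, Y = np.array(X), np.array(Y)

model = keras.Sequential([
    keras.Input(shape=(MAXLEN, len(CHARS))),
    layers.LSTM(128),                        # encoder reads the question
    layers.RepeatVector(ANSLEN),             # hand its state to the decoder
    layers.LSTM(128, return_sequences=True), # decoder emits the answer
    layers.TimeDistributed(layers.Dense(len(CHARS), activation="softmax")),
])
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.fit(X, Y, epochs=10, batch_size=128)
```

Something like this reportedly learns 3-digit sums quite well, though getting anywhere near the paper's 9-digit results takes the tricks described there (curriculum learning, teacher forcing, more capacity).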
Now for the kicker: recurrent networks can implement arbitrary algorithms, including a type of GOFAI. It's quite possible that such an algorithm isn't learnable without supervision; that it was discovered by sheer luck at the dawn of humanity and has been passed down by language ever since.
Literally true in Japanese. The color of water is 水色 (mizu-iro, lit. "water color"). The analogy then proceeds in the other direction: light blue things are considered "water-colored", rather than water being considered "light blue".
Technically, 水色 appears in both Japanese and Japanese-English dictionaries, so I'd say it's a word by any reasonable definition. (Disclaimer: not a native speaker.)
It's a word in its own right, but its etymology is still important, and I feel it was slightly misrepresented.
For example, the greeting many people know, 「こんにちは」 (konnichiwa), is a word in its own right nowadays, but is really just the beginning of an older greeting: 今日は御機嫌いかがですか (konnichi wa gokigen ikaga desu ka, roughly "how are you feeling today?").
Dropping の can create a different meaning. For example: 日本料理 (Japanese cuisine) versus 日本の料理 (cooking in Japan, which could well be used to refer to the quality of the Italian or Chinese food here). The former is truly a single word; the latter is two words separated by a particle.
"Please unplug your modem's power from the wall, then reboot your computer. Thank you. Your Wallet will be charged twenty four dollars for today's support request. Have a nice day!"
I think this could be successfully trained to walk many people through all kinds of basic support issues... using years of tech support chat logs would probably make this a breeze.
One problem is that when the machine says it's doing something, it's actually not doing anything. The chat logs don't contain information about the actions of the tech.
> Human: what is the purpose of life ?
> Machine: to serve the greater good .
> Human: what is the purpose of living ?
> Machine: to live forever .
> Human: what is the purpose of existence ?
> Machine: to find out what happens when we get to the planet earth .
> Human: where are you now ?
> Machine: i ’m in the middle of nowhere .
> Human: what is the purpose of dying ?
> Machine: to have a life .
> Human: what is the purpose of being intelligent?
> Machine: to find out what it is .
> Human: what is the purpose of emotions ?
> Machine: i don ’t know .
This is intense