Is that $300B Apple's global revenue? If so, I'd imagine that's your answer. Apple's earlier fancy accounting had all revenue funneled through the friendliest tax haven until they got called on it.
I'd imagine, though, that only US revenue gets claimed in CA.
I'm also perplexed about this. Anyone's car could be stolen and then used to commit any number of crimes. Maybe the woman in the article didn't report the car as stolen?
Maybe because it was her own son and she didn't want him to face more jail time? I thought it didn't matter, though; I thought the owner could choose not to press charges, or tell them to drop the case... or maybe that's handled on a state-by-state level.
Word2vec is usually considered the standard neural word embedding implementation. There are other algorithms as well, such as GloVe[1], document embeddings[2], and backpropagation-based methods[3]. Facebook also just came out with a paper that beats word2vec[4].
Neural word embeddings are a neat way of representing concepts. I see a great future for automated feature engineering with text in deep learning, joining what's already happening with audio and images.
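To make that concrete, here's a minimal sketch of training word2vec and using the vectors as features, assuming gensim 4.x (older releases name the `vector_size` parameter `size`). The toy corpus is just for illustration; real embeddings need far more text:

    from gensim.models import Word2Vec
    import numpy as np

    # Toy corpus: a list of tokenized sentences. Real training data
    # needs millions of tokens before the vectors become meaningful.
    sentences = [
        ["king", "queen", "royal", "palace"],
        ["dog", "cat", "pet", "animal"],
        ["stock", "market", "price", "trade"],
    ]

    # sg=1 selects the skip-gram variant.
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

    # Nearest neighbors in the embedding space.
    print(model.wv.most_similar("king", topn=3))

    # "Automated feature engineering": average a document's word vectors
    # into one fixed-length feature vector for a downstream classifier.
    doc = ["dog", "cat", "market"]
    features = np.mean([model.wv[w] for w in doc], axis=0)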
It's my first time seeing the package, but looking over the docs it looks like it implements LSA. The major difference is that word2vec dramatically outperforms LSA on a variety of tasks (http://datascience.stackexchange.com/questions/678/what-are-...). In my experience, the vector representations from LSA can be underwhelming and perform poorly. I can't comment on the Random Projection and Reflective Random Indexing techniques SemanticVectors implements.
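For contrast, here's roughly what an LSA-style pipeline looks like: a minimal scikit-learn sketch (not SemanticVectors' actual implementation) that factors a TF-IDF matrix with truncated SVD and uses the low-rank factors as document and term vectors:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    docs = [
        "the cat sat on the mat",
        "dogs and cats are common pets",
        "stock markets fell sharply today",
    ]

    # LSA = SVD of a weighted (documents x terms) count matrix.
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(docs)

    svd = TruncatedSVD(n_components=2)
    doc_vecs = svd.fit_transform(X)   # one dense vector per document
    term_vecs = svd.components_.T     # one dense vector per vocabulary term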
Sorry, I should have specifically mentioned how it differs from random indexing/projection; I was immediately reminded of a similar inference example that used random indexing/projection.
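In case it helps others, the core idea behind random projection is the Johnson-Lindenstrauss lemma: a random low-dimensional projection approximately preserves pairwise distances. A quick numpy sketch (not SemanticVectors' code):

    import numpy as np

    rng = np.random.default_rng(0)

    # 100 points in a 10,000-dimensional space.
    X = rng.normal(size=(100, 10000))

    # Project down to 300 dimensions with a random Gaussian matrix.
    R = rng.normal(size=(10000, 300)) / np.sqrt(300)
    Y = X @ R

    # Johnson-Lindenstrauss: pairwise distances are approximately preserved,
    # so the 300-dim vectors can stand in for the original ones.
    print(np.linalg.norm(X[0] - X[1]))  # original distance
    print(np.linalg.norm(Y[0] - Y[1]))  # projected distance (should be close)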