I would think translation to other languages would be trivial. The model is just a map from word to vector. Every word is converted to its vector representation; then the query word is compared to the input words using cosine similarity.
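A minimal sketch of that lookup-and-compare step, using made-up toy vectors (real word2vec embeddings are 100+ dimensions; the words and values here are purely illustrative):

```python
import math

# Hypothetical 3-d embeddings, standing in for a real word2vec map.
embeddings = {
    "king":  [0.8, 0.3, 0.1],
    "queen": [0.7, 0.4, 0.2],
    "apple": [0.1, 0.9, 0.5],
}

def cosine_similarity(a, b):
    # cos(a, b) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query, candidates):
    # Look up the query vector, then rank candidates by cosine similarity.
    q = embeddings[query]
    return max(candidates, key=lambda w: cosine_similarity(q, embeddings[w]))

print(most_similar("king", ["queen", "apple"]))  # "queen"
```

The same comparison works regardless of which language the embeddings were trained on, since it only operates on the vectors.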
I assumed you meant computer languages :-) If you mean human languages, then yes, Google publishes word2vec embeddings for many different human languages. I'm not sure, though, how easy they are to download.