And here's the equivalent Ruby program. In Ruby the methods are usually called 'collect' and 'inject' rather than 'map' and 'reduce' (they are aliases in Enumerable).
result = Dir.glob('test?.txt').collect do |file_name|
  File.new(file_name, 'r').read.split(' ').collect do |word|
    word.downcase.tr '.,\'', ''
  end.inject(Hash.new(0)) do |hash, word|
    hash[word] += 1
    hash
  end
end.inject do |all, hash|
  (all.keys + hash.keys).uniq.inject(Hash.new(0)) do |acc, word|
    acc[word] = all[word] + hash[word]
    acc
  end
end
p result
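Since collect/inject and map/reduce are just aliases, either spelling works; a minimal sketch (with a made-up word list) showing they produce identical results:

```ruby
# collect is an alias of map, and inject an alias of reduce, in Ruby's
# Enumerable module. The word list here is hypothetical.
words = %w[the quick the fox]

by_collect = words.collect { |w| w.length }
by_map     = words.map     { |w| w.length }
# both => [3, 5, 3, 3]

counts_inject = words.inject(Hash.new(0)) { |h, w| h[w] += 1; h }
counts_reduce = words.reduce(Hash.new(0)) { |h, w| h[w] += 1; h }
# both => {"the"=>2, "quick"=>1, "fox"=>1}
```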
Edit: The Python example in the article is better because it merges hashes in the reduce step which facilitates parallelisation.
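A minimal sketch of that merge-in-reduce style in Ruby, using Hash#merge with a block (the file contents are made up for illustration):

```ruby
# Per-file counts are built in the map step, then merged pairwise in the
# reduce step. Because the pairwise merge is associative, the reductions
# could run in parallel over sub-groups of hashes.
file_contents = [%w[a b a], %w[b c]]

per_file_counts = file_contents.collect do |words|
  words.inject(Hash.new(0)) { |h, w| h[w] += 1; h }
end

total = per_file_counts.inject do |all, counts|
  all.merge(counts) { |_word, a, b| a + b }
end
# total => {"a"=>2, "b"=>2, "c"=>1}
```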
If I understand correctly, collect is like map from functional programming languages and inject is a fold: one maps a collection of one type to a collection of another type, and the other reduces a collection of one type into something of another type (which could itself be a collection). In that case MapReduce does not quite translate to Ruby's collect and inject. In fancy language, the reduce in MapReduce is not merely a catamorphism, nor is the map actually a collect; the types do not align. I learned of this from a Hacker News post by grav1tas http://news.ycombinator.com/item?id=2477238. In there he links to a paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.104....
If you look at the types implicit in his code, you should see that the essence lies not with collect or inject, which are incidental plumbing, but with the nature of the hash and, in particular, the act of generating the intermediate collection for each key. What is called map is actually more a reverse reduce. The pedagogical emphasis on map and foldr actually belies the true nature and power of the algorithm in parallelizing.
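The per-key intermediate collection can be sketched in Ruby directly: MapReduce's map emits (key, value) pairs, the framework groups values by key (the shuffle), and reduce folds each group independently. The input lines here are hypothetical.

```ruby
# Sketch of the actual MapReduce shape, as opposed to a plain collect/inject:
# map emits pairs, group_by plays the role of the shuffle, and each group is
# reduced on its own (so the per-key reductions could run in parallel).
inputs = ['the cat sat', 'the dog']

pairs = inputs.flat_map { |line| line.split.map { |word| [word, 1] } }
# => [["the", 1], ["cat", 1], ["sat", 1], ["the", 1], ["dog", 1]]

grouped = pairs.group_by { |word, _count| word }

counts = grouped.map { |word, vs| [word, vs.map { |_w, n| n }.inject(:+)] }.to_h
# counts => {"the"=>2, "cat"=>1, "sat"=>1, "dog"=>1}
```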
If you really want to play with MapReduce on a dataset without setting up a Hadoop node (or four), check out CouchDB. It's designed around MapReduce (though not distributed), and you even get to deal with solving re-reduce problems.
I wrote a little shim that allowed me to write Hadoop jobs in Clojure, and had two small test functions that would apply a map / reduce to a test file -- it made development of Hadoop jobs a bit easier. See: https://github.com/brool/hadoop-shim/blob/master/wordcount.c...