I'm not familiar with the VM implementation on Linux, so I don't know if it has changed significantly. I expect it has.
However, a lot of the article seems to center on swapping to disk. Most machines have enough physical memory now that swapping to disk shouldn't happen regularly.
Other things that affect VM design and/or performance on modern systems, but were not much of a concern in 2004: the ubiquity of multi-core CPUs, virtualization, the use of large pages, and 64-bit machines. That said, I'm more familiar with academic research than with the state of the mainline kernel.
Edit: Linux is regularly used on lots of non-x86 architectures nowadays, so the VM design might not be so x86-centric anymore.
I personally wouldn't mess with the VM settings the way the author suggests. I don't remember doing that eight years ago either, even with less RAM on a desktop. On a server you want the working set to fit in RAM anyway, and the page cache is already pretty smart.
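For context, the kind of knob usually being tuned in these articles is a sysctl like vm.swappiness (how eagerly the kernel swaps anonymous pages out versus reclaiming page cache). I'm assuming that's roughly what the author means, since I'm not quoting the article's exact settings. A minimal C sketch that just reads the current value from /proc/sys/vm/swappiness:

    #include <stdio.h>

    /* Read the current vm.swappiness value from procfs (Linux only). */
    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/swappiness", "r");
        int value;

        if (f == NULL) {
            perror("fopen /proc/sys/vm/swappiness");
            return 1;
        }
        if (fscanf(f, "%d", &value) == 1)
            printf("vm.swappiness = %d\n", value);
        fclose(f);
        return 0;
    }

Changing it is just writing a number back to that file as root (or sysctl -w vm.swappiness=N), but as above, on a box with enough RAM the defaults are generally fine.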
In terms of the interaction between memory, the CPU, and virtual memory, "What Every Programmer Should Know About Memory" should be required reading: http://www.akkadia.org/drepper/cpumemory.pdf
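If you want a five-minute taste of the kind of effect that paper measures in depth, here's a toy C sketch: walk the same 2-D array in row order and then in column order. The column walk jumps a full row per access, so it misses the cache (and churns through far more TLB entries) and runs noticeably slower, even though it does exactly the same arithmetic. The 4096x4096 size is an arbitrary choice for illustration.

    #include <stdio.h>
    #include <time.h>

    #define N 4096

    static double a[N][N];   /* ~128 MB, row-major layout */

    static double seconds(struct timespec s, struct timespec e)
    {
        return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
    }

    int main(void)
    {
        struct timespec s, e;
        volatile double sum = 0;  /* volatile so the loops aren't optimized away */

        /* Touch every element once so each page gets real backing memory. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 1.0;

        /* Row-major walk: consecutive elements, cache-line friendly. */
        clock_gettime(CLOCK_MONOTONIC, &s);
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        clock_gettime(CLOCK_MONOTONIC, &e);
        printf("row-major:    %.3f s\n", seconds(s, e));

        /* Column-major walk: each access jumps N*8 bytes, hitting a new
         * cache line (and usually a new page) every time. */
        clock_gettime(CLOCK_MONOTONIC, &s);
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        clock_gettime(CLOCK_MONOTONIC, &e);
        printf("column-major: %.3f s\n", seconds(s, e));

        return 0;
    }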