For those who frequently search paths with a large number of files stored on an SSD or ramdisk/tmpfs, the bottleneck is very much CPU time.
In these cases ag is noticeably faster (orders of magnitude, in some cases), especially if you're searching for a literal string rather than a regex pattern.
The author has done some really cool performance hacks, and written some great blog posts[1] along the way.
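If you want to make sure you're hitting the literal (fixed-string) path rather than the regex engine, a minimal sketch, assuming ag's -Q/--literal flag and GNU grep's -F (the search string here is just an arbitrary example):

# Fixed-string search, no regex engine involved
ag -Q 'aes_set_key(' .
grep -rF 'aes_set_key(' .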
I was a bit surprised by a simple test I just ran on a Linux 3.10 tree:
time rgrep aes . > /dev/null
real 0m0.900s
user 0m0.548s
sys 0m0.340s
# Somewhat similar output to ag:
time rgrep -n --color aes . > /dev/null
real 0m1.177s
user 0m0.876s
sys 0m0.288s
time ag aes > /dev/null
real 0m1.147s
user 0m1.040s
sys 0m0.548s
# Using fixed strings in grep, limiting the search to C files
time rgrep -n --color -F --include='*.c' aes . > /dev/null
real 0m0.936s
user 0m0.720s
sys 0m0.208s
time ag -G \.c aes > /dev/null
real 0m1.130s
user 0m1.140s
sys 0m0.428s
This is on an encrypted volume sitting on top of a low-end SSD, all runs
with a hot cache. The times here are from the ag package in Debian; I also
tried a build from upstream git, with essentially the same times.
I guess ag might make sense under OS X, but there doesn't appear to be
any (speed) advantage under GNU/Linux.
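If anyone wants to reproduce this, a rough sketch of the methodology (the run count is arbitrary; the first pass just warms the page cache so the comparison is CPU-bound rather than I/O-bound):

# Warm the cache once
rgrep aes . > /dev/null
# Then time a few runs of each tool and eyeball the variation
for i in 1 2 3; do time rgrep -n --color aes . > /dev/null; done
for i in 1 2 3; do time ag aes > /dev/null; done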
Thanks, I was looking for this as well. The ag --help output could probably be reworked a bit; --include makes more sense to me than -G / --file-search-regex.
You can limit the files it searches with -G <pattern>.
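One point worth noting: ag's -G takes a regex matched against file paths, while grep's --include takes a shell glob, so roughly equivalent invocations look slightly different (the $ anchor is just to keep the regex from also matching .cpp files):

# ag: regex over file paths
ag -G '\.c$' aes
# grep: glob over file names
rgrep -n --include='*.c' aes .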