Standard grep is much faster on multi-gigabyte files than anything you can figure out how to do in your pet language. By the time you get close to matching grep, you would have reimplemented most of grep, in half-assed fashion at that.
Your delusion is assuming standard command line tools are simple in function because they have a simple interface that Average Joe can use.
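A rough way to see this for yourself (just a sketch; big.txt stands in for some hypothetical multi-gigabyte file):

$ time grep -c verb big.txt
$ time python3 -c 'import sys; print(sum("verb" in line for line in open(sys.argv[1])))' big.txt

Even this naive fixed-string line scan typically loses badly to grep, and it does none of grep's regex matching, locale handling, or binary-file detection.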
ag is often faster when you're using it interactively as a replacement for "grep -r" (in particular in version-controlled dirs). It's also faster in the sense that for interactive use it will often DWYM (do what you mean).
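For instance (illustrative; the point is ag's default filtering, not these exact patterns):

$ grep -rn TODO .    # also wades through .git/, build output, binaries
$ ag TODO            # honors .gitignore and skips binaries by default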
But it has too many weird quirks for it to replace grep for data munging. E.g.
$ ag verb fullform_nn.txt >/tmp/verbs
ERR: Too many matches in fullform_nn.txt. Skipping the rest of this file.
man ag says there's a --max-count option. Let's try that.
$ grep -c verb fullform_nn.txt
206077
$ ag --max-count 206077 verb fullform_nn.txt >/tmp/verbs
ERR: Too many matches in fullform_nn.txt. Skipping the rest of this file.
Wtf? (And running those two commands under "time" gave ag user 0m0.770s while grep had user 0m0.057s.)
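One guess at what's going on (an assumption, not verified against ag's source): grep -c counts matching lines, while ag's limit may count individual matches, so a line containing "verb" more than once would blow past a limit derived from the line count. You can count total matches rather than matching lines with:

$ grep -o verb fullform_nn.txt | wc -l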
I have never used ag, but in most instances where people thought they'd made a faster grep, it's because it doesn't handle multibyte encodings correctly.
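You can see what multibyte handling costs grep itself by forcing the byte-oriented C locale (assuming your default locale is UTF-8):

$ time grep -c verb fullform_nn.txt            # multibyte-aware in a UTF-8 locale
$ time LC_ALL=C grep -c verb fullform_nn.txt   # byte-wise matching, often much faster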