awk is really, really powerful. It is fast, and you can do a lot very efficiently just by playing with the 'FS' variable. And you can find it on all *nix boxes. And it works nicely with other CLI tools such as cut, paste, or datamash (http://www.gnu.org/software/datamash/).
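To make the FS point concrete, here is a small sketch (the input strings are made up for illustration): FS can be a single character given with -F, or a regex set inside the program.

```shell
# FS as a single character: split /etc/passwd on colons and
# print the user name and login shell (fields 1 and 7).
awk -F: '{ print $1, $7 }' /etc/passwd

# FS as a regex: split on runs of commas and/or semicolons.
echo 'a,b;;c' | awk 'BEGIN { FS = "[,;]+" } { print $2 }'   # prints "b"
```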
As soon as it becomes too complex, though, it is better to resort to a real language (be it python, perl, or whatever - my favourite is python + scipy).
I use awk and sed (with tr/sort/uniq doing some heavy lifting) for most of my data analysis work. It's a really great way to play around with data to get a feel for it before formalizing it in a different language.
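A minimal sketch of that kind of exploratory pipeline (the sample sentence is invented): tr breaks text into one word per line, sort/uniq -c do the counting, and a final sort ranks the result.

```shell
# Quick word-frequency feel for a text, no real language needed:
printf 'the cat and the hat\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -nr \
  | head -3
```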
For an interview, I wrote this guy to do a distributed-systems top-ten word count problem. It turned out to be much faster than anything else I wrote when combined with parallel. It's easier to read when split into a bash script :) [0].
time /usr/bin/parallel -Mx -j+0 -S192.168.1.3,: --block 3.5M --pipe --fifo --bg "/usr/bin/numactl -l /usr/bin/mawk -vRS='[^a-zA-Z0-9]+' '{a[tolower(\$1)]+=1} END { for(k in a) { print a[k],k} }'" < ~/A* | /usr/bin/mawk '{a[$2]+=$1} END {for(k in a) {if (a[k] > 1000) print a[k],k}}' | sort -nr | head -10
Awk is great at what it does, but I find myself unable to keep it cached in my brain long enough to reuse it. Using awk usually means a Google search for how to use it, which defeats the point of working quickly at a terminal.
awk is a really productive constraint to hold oneself to. If a problem looks like a text processing problem, use awk and let the idioms set in. It is an awesome tool to have on one's belt.
It is, in a sense, liberating to not have any library support for common problems: no need to learn a library (hey!), and, by the way, what you need in any given problem is an easy subset of what that library would do anyway.
Awk has made me focus more on the data there is to analyze, rather than the framework to analyze it with.
As the idiomatic use of awk is also very succinct, I can hardly imagine working efficiently on the command line without it.
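Two examples of the succinct idioms meant above (inputs are invented for illustration): summing a column, and printing unique lines while preserving their order.

```shell
# Sum the first field of every line.
seq 1 4 | awk '{ s += $1 } END { print s }'    # prints 10

# Order-preserving uniq: print a line only the first time it is seen.
printf 'a\nb\na\n' | awk '!seen[$0]++'         # prints a, then b
```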
I take a different stance. I never use awk, because I can use Perl. In using Perl, I also have access to the many modules that extend its usefulness. One of the core reasons Perl was created was to fill in where awk wasn't as useful as it could be.
I use awk only if it's a trivial one-liner. The only awk I can remember is selecting specific columns in whitespace-separated text. If it's just a single column, I'll try to use cut(1).
For anything more complicated, I'll use Python, because if it's something that is going to have a somewhat long life and that I'll need to feed into plotting, I'd rather use Python. I remember the bad old days of plot(1).
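A sketch of the two column-selection idioms mentioned above (sample lines are made up): awk splits on runs of arbitrary whitespace, while cut needs a single fixed delimiter.

```shell
# awk handles mixed spaces and tabs between fields.
printf 'alpha  beta\tgamma\n' | awk '{ print $2 }'   # prints "beta"

# cut is simpler when there is exactly one delimiter character.
printf 'a:b:c\n' | cut -d: -f2                       # prints "b"
```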
This is a similar argument used by users of languages like J, K and APL. The language is small and concise - rather than learn libraries or frameworks you use the primitives and the emphasis is on the data being manipulated. Common patterns (like +/%# for average) are recognized rather than named.
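For comparison, awk's counterpart to that average idiom is likewise a recognized pattern rather than a named function (a sketch in awk, not actual J/APL syntax):

```shell
# Average of a column: sum the first field, divide by the record
# count NR at the end - the awk analogue of J's +/%# idiom.
seq 1 5 | awk '{ s += $1 } END { print s / NR }'   # prints 3
```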
AWK is a language. I always get upset when people call it a program or a tool. In a very BROAD and general sense all languages are also programs or tools, but AWK is first and foremost a language. Perl isn't called a tool or a program one hundredth as often as AWK. Maybe I am just petty?