It is fast, robust, and often far more performant than modern tools that are overkill for most data manipulation. I use it all the time in our ETL processes, and it always works as advertised.
I use it for a couple of reasons. First, it is installed as a base utility on almost every *nix implementation on the planet, so you can count on having it even in ancient or restrictive environments (which I work in frequently). Second, awk is usually fast enough for most needs, and generally far faster than a number of off-the-shelf "modern" tools. The first reason is what usually leads me to it: its ubiquity and power make it a compelling tool.
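For example, a couple of throwaway one-liners (the log file and field positions are made up for illustration):

    # Sum the 2nd whitespace-delimited field, e.g. response sizes in a log
    awk '{ total += $2 } END { print total }' access.log

    # Print the 1st and 3rd fields of lines where the 3rd field exceeds a threshold
    awk '$3 > 500 { print $1, $3 }' access.log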
I use Perl similarly to awk when I need regex-based field splitting rather than whitespace-delimited fields.
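Something like this, say, where the field separator is a regex instead of plain whitespace (the file name is invented):

    # -F takes a regex field separator; -a autosplits each line into @F
    perl -F'/\s*,\s*/' -lane 'print $F[1]' data.csv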
I think if you know Perl really well and can remember the command-line arguments, particularly -E, -n, -l, and -p, then it’s a good drop-in substitute for grep, sed, awk, cut, bash, etc., when whatever five-minute task you’re working on gets a tiny bit more complex.
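Roughly along these lines (the patterns and file names here are invented):

    # grep-ish: -n loops over input, -l strips/restores newlines
    perl -nle 'print if /ERROR/' app.log

    # sed-ish: -p loops and prints each line after the substitution, -i edits in place
    perl -pi -e 's/foo/bar/g' settings.conf

    # -E enables modern features such as say
    perl -E 'say join ",", map { $_ ** 2 } 1..5'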
Similarly, a decent version of Perl 5 seems to be installed everywhere by default.
I’m curious to know if anyone would say the same about Python or any other programs? I’m not particularly strong at short Python scripting.
I would say Perl’s native support for regular expressions makes it more useful on the CLI than Python, but Python is also very low on my list of preferred languages.
I do, however, use it for JSON pretty-printing in a pipeline: python -m json.tool, IIRC.
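Something like this (the input JSON is just a stand-in):

    echo '{"b": 1, "a": [2, 3]}' | python -m json.tool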