This isn't the philosophy debate you're trying to make it out to be. The output of a lot of tools simply isn't in a machine-friendly format, and writing a parser for it yourself can be a nightmare.
You are misinterpreting the Unix philosophy. It's fine to use a bunch of sed, awk, grep, etc. when you're transforming text or processing already well-structured data. But writing a full-fledged parser for something that only has human-readable output, especially as a shell script, goes squarely against that philosophy. Congratulations, you've pieced together 50 commands in a pipeline and created a monstrosity that has nothing minimalist about it.
In fact, I would argue that by combining `jc` with `jq` you can build parsing pipelines that are much more in line with the Unix philosophy.
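To make that concrete, here is roughly what such a pipeline looks like. The exact output schema depends on the parser and your `jc` version, so treat the field names as something to check against the parser docs rather than gospel:

```
# Pull the answer records for a host without regex-scraping dig's
# human-oriented output. Assumes jc's --dig parser emits an array of
# responses, each with an "answer" array of records carrying a "data" field.
dig example.com | jc --dig | jq -r '.[].answer[].data'
```

One small tool per step, each doing one job and passing structured data to the next; that's the Unix philosophy working as intended.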
Nobody ever said this was designed to improve performance, and I have a hard time believing your claim that it's significantly slower, which isn't backed up by any source. Most likely, eliminating the JSON conversion would be an unnecessary micro-optimization at best. But if your code were truly performance-critical, you wouldn't be piecing it together with shell pipelines that cause a bunch of unnecessary forks; you'd write it in something like C instead.
And the "JSON was designed for the web browser" argument doesn't hold much water either. You're about several decades too late for that, JSON is extremely ubuquitous and used in a lot of non-browser contexts. Sure, some people depending on their needs may use other formats like XML or protobuf, but JSON is still very common.
> These programs do not expect large amounts of memory and many are written with the intent that they may be used to process text line-by-line
Which is only a problem if you're being very silly: not using NDJSON (newline-delimited JSON) and instead shoving 10GB of data into one big `[]` array that the parser has to read in all at once. Almost every JSON library can handle NDJSON already. One of the most heavily used JSON-over-stdio applications is the Language Server Protocol, which streams JSON-RPC 2.0 messages over stdio one message at a time. Same for about 15 different log-yeeting tools. Nobody has ever suggested switching LSP to plain text for performance reasons, only to lower-overhead binary formats that don't throw out everything gained by having structure in the first place.
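To be concrete: `jq` already treats newline-delimited input as a stream of values and applies the filter one object at a time, so memory stays flat no matter how big the file is. The log file and field names below are made up for illustration:

```
# Each line of app.ndjson is one JSON object; jq parses it, applies the
# filter, prints the result, and moves on. "level" and "msg" are
# hypothetical field names.
jq -r 'select(.level == "error") | .msg' app.ndjson
```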
Large memory use isn't something inherent to the JSON encoding that plain text is somehow immune to. Plenty of CLI programs slurp all of stdin into memory, and you don't see plain text getting slammed for exorbitant memory use.
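When JSON processing does blow up memory, it's because the consumer chose to slurp; with `jq` that choice is literally one flag (file and field names again hypothetical):

```
# Streaming: one record in memory at a time.
jq 'select(.status >= 500)' access.ndjson

# Slurping: -s wraps the entire input in a single array before filtering,
# so now you really are holding everything in memory at once, the same
# mistake as piping a huge text file into a tool that buffers all of stdin.
jq -s 'map(select(.status >= 500))' access.ndjson
```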
In the context of the original post and `jc`, we're talking about essentially constant-sized output that's just much easier to parse, so the complaint isn't relevant there at all.
> But if your code were truly performance-critical, you wouldn't be piecing it together with shell pipelines that cause a bunch of unnecessary forks; you'd write it in something like C instead.
You are underestimating the power of Unix tools. A chain of Unix tools can match or exceed the performance of C programs written by average programmers; the classic word-frequency pipeline sketched after the quote below is the standard illustration. That is the true beauty of Unix and part of why it is still relevant today. The author has little idea about performance and doesn't understand how Unix works; otherwise he wouldn't make arrogant claims like:
> With jc, we can make the linux world a better place until the OS and GNU tools join us in the 21st century!
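On the pipelines-versus-C point: the usual illustration is McIlroy's word-frequency one-liner. Every stage is a small, already-optimized C program, and the whole thing streams, with `sort` spilling to temp files when the input doesn't fit in memory:

```
# Print the 10 most frequent words in input.txt:
# split into one word per line, lowercase, group, count, rank, take the top 10.
tr -cs 'A-Za-z' '\n' < input.txt |
  tr 'A-Z' 'a-z' |
  sort |
  uniq -c |
  sort -rn |
  head -n 10
```

Beating this with a quick hand-rolled C program is harder than most people expect.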