Hands down, shell scripting is one of my all-time favorite languages. It gets tons of hate, e.g. "If you have to write more than 10 lines, then use a real language," but I feel like those assertions are more socially-founded opinions than technically-backed arguments.
My basic thesis is that Shell as a programming language---with its dynamic scope, focus on line-oriented text, and pipelines---is simply a different programming paradigm than languages like Perl, Python, whatever.
Obviously, if your mental model is BASIC and you try to write Python, then you encounter lots of friction and it's easy for the latter to feel hacky, bad and ugly. To enjoy and program Python well, it's probably best to shift your mental model. The same goes for Shell.
What is the Shell paradigm? I would argue that it's line-oriented pipelines. There is a ton to unpack in that, but a huge example where I see friction is overuse of variables in scripts. Trying to stuff data inside variables, with shell's paucity of data types, is a recipe for irritation. However, if you instead organize all your data in a format that's sympathetic to line-oriented processing on stdin-stdout, then shell will work with you instead of against you.
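To make that concrete, here's a toy sketch (the file and its "name env ip" fields are made up): keep the records as lines and fields rather than shell variables, and let the pipeline do the work.

grep -v '^#' hosts.txt |
awk '$2 == "prod" { print $1, $3 }' |
sort |
while read -r name ip; do
    printf '%s -> %s\n' "$name" "$ip"
done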
Shell and SQL make you 10x more productive than any alternative.
Nothing even comes close. I've seen people scramble for an hour to write some data munging, then spend another hour running it through a thread pool to utilize those cores, while somebody comfortable in shell writes a parallelized one-liner, rips through GBs of data, and delivers the answer in 15 minutes.
What Python is to Java, Shell is to Python. It speeds you up several times. I've started using inline 'python -c' more often than the Python REPL now, since it stores the command in shell history and it is then one fzf search away.
While neither Shell nor SQL is perfect, there have been many ideas for improving them, and people can't wait for something new like Oil Shell to get production ready and finally get the shell quoting hell right, or for somebody to fix up SQL, bringing old ideas from Datalog and QUEL into it, fixing the goddamn NULL joins, etc.
But honestly, nothing else even comes close to this 10x productivity increase over the next best alternative. No, thank you, I will not rewrite my 10 lines of sh into Python just to explode it into 50 lines of shuffling clunky objects around. I'll instead go and reread that man page on how to write an if expression in Bash, again.
Shameless plug coming, but this has been a pain point for me too. I found the issue with quotes (in most languages, but particularly in Bash et al.) is that the same character is used to close the quote as is used to open it. So in my own shell I added support for using parentheses as quotes in addition to the single and double ASCII quotation marks. This then allows you to nest quotation marks.
You also don’t need to worry about quoting variables as variables are expanded to an argv[] item rather than expanded out to a command line and then any spaces converted into new argv[]s (or in layman’s terms, variables behave like you’d expect variables to behave).
Though Ruby makes it confusing AF because there are two quoting types for both strings and symbols, and they're different. (%Q %q %W %w %i) I can never remember which does which.... the letter choice feels really arbitrary.
This means that you can even quote the delimiter in the string as long as it's balanced.
$X=q( foo() )
Should work if it's balanced. If you choose a different pair like []{} then you can avoid hitting collisions. It also means that you can trivially nest quotations.
I agree that this qualified quotation is really underutilized.
I'm not a fan of Python, however that's down to personal preference rather than objective fact. If Python solves a problem for other people then who am I to judge :)
I really don't think so! If you have experience with any scripting, you can fully grok the fundamentals of awk in 1 hour. You might not memorize all the nuances, but you can establish the fundamentals to a degree that most things you would try to achieve would take just a few minutes of brushing up.
For those that haven't taken the time yet, I think this is a good place to start:
Of course, some people do very advanced things in awk and I absolutely agree that 1 hour of study isn't going to make you a ninja, but it's absolutely enough to learn the awk programming paradigm so that when the need arises you can quickly mobilize the solution you need.
For example: if you're quick to the draw, it can take less time to write an awk one-liner to calculate the average of a column in a CSV than it does to copy the CSV into Excel and highlight the column. It's a massive productivity booster.
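Something along these lines (assuming the numbers are in, say, column 3, with no header row and no quoted commas):

awk -F, '{ sum += $3; n++ } END { if (n) print sum / n }' data.csv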
yeah that's exactly right. it may only take an hour to learn, but every time i need to use awk it seems like i have to spend an hour to re-learn its goofy syntax.
alas this is true, I never correctly recall the order of particular function args as they are fairly random, but it still beats the alternative of having to continually internalize entire fragile ecosystems to achieve the same goal.
What? The awk manual is only 827 highly technical pages[1]. If you can't read and internalize that in an hour, I suspect you're a much worse programmer than the OP.
All you need to do is learn that cmd | awk '{ print $5 }' will print out the 5th word as delimited by one or more whitespace characters. Regexes support this too but are cumbersome to write on the command line.
Doing that, maybe with some inline concatenation to make a new structure, plus the following, is about all I use:
Printing based on another field; this example prints the entries with UID >= 1000:
awk -F: '$3 >= 1000 {print $0}' /etc/passwd
It can do plenty of magic, but knowing how to pull fields, concat them together, and select based on them covers like 99% of the things I hope to do with it.
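For the concatenation part, it's usually just gluing fields back together with a literal in between, e.g. user:home pairs from /etc/passwd:

awk -F: '{ print $1 ":" $6 }' /etc/passwd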
It takes a lot less time to learn to be fairly productive with awk than with, say, vi / vim. Over time I've realized that gluing these text manipulation tools together is only an intermediate step toward learning how to write and use them in a manner that is maintainable across several generations of engineers as well as portable across many different environments, and that's still a mostly unsolved problem IMO for not just shell scripts but programming languages in general. For example, the same shell script that does something as seemingly simple as performing a sha256 checksum on macOS won't work on most Linux distributions. So in the end one winds up writing a lot of utilities all over again in yet another language for the sake of portability which ironically hurts maintainability and readability for sure because it's simply more code that can rot.
The only thing I use AWK for is getting at columns from output (possibly processing or conditionally doing something on each); what would be the next big use case?
For simple scripting tasks, yes. I have had the opposite experience for more critical software engineering tasks (as in, coding integrated over time and people).
Language aside, the ecosystem and culture do not afford enough in the way of testing, dependency management, feature flags, static analysis, legibility, and so on. The reason people say to keep shell programs short is because of these problems: it needs to be possible to rewrite shell programs on a whim. At least then, you can A/B test and deploy at that scope.
awk is great for things that will be used over several decades
(where the hardware / OS you started with no longer exists at the end of a multi-decade project, but the data from start to end still has to be used)
What is used both in case/esac and globbing are "shell patterns." They are also found in variable pattern removal with ${X% and ${X#.
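For example (these are patterns, not regexes):

file=backup.tar.gz
echo "${file%.gz}"     # backup.tar - strip the shortest matching suffix
echo "${file%%.*}"     # backup - strip the longest matching suffix
echo "${file#*.}"      # tar.gz - strip the shortest matching prefix
case $file in *.tar.*) echo "looks like a tarball" ;; esac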
In "The Unix Programming Environment," Kernighan and Pike apologized for these close concepts that are easily mistaken for one another.
"Regular expressions are specified by giving special meaning to certain characters, just like the asterix, etc., used by the shell. There are a few more metacharacters, and, regrettably, differences in meanings." (page 102)
Bash does implement both patterns and regex, which means discerning their difference becomes even more critical. The POSIX shell is easier to keep in your head for this reason, and others.
Can you expand on the parallelism features you use and what shell? In bash I've basically given up managing background jobs because identifying and waiting for them properly is super clunky; throttling them is impossible (pool of workers) and so for that kind of thing I've had to use GNU parallel (which is its own abstruse mini-language thing and obviously nothing to do with shell). Ad-hoc but correct parallelism and first class job management was one of the things that got me to switch away from bash.
I've just started installing ipython on pretty much every python environment I set up on personal laptops, but there is repl history even without ipython: https://stackoverflow.com/a/7008316/1170550
Python is a terrible comparison language here. Of course shell is better than Python for shell stuff; no one should suggest otherwise. Python is extremely verbose, it requires you to be precise with whitespace, and using regex has friction because it's not actually built into the language syntax (unless something has changed very recently).
The comparison should be to perl or Ruby, both of which will fare better than Python for typical shell-type tasks.
If I'm interactively composing something I do very much like pipes and shell commands, but if it's a thing I'm going to be running repeatedly then the improved maintainability of a python script, even if it does a lot of subprocess.run, is preferable to me. "Shuffling clunky objects around" seems more documented and organized than "everything is a bytestring".
I often find that downloading lots of data from S3 using `xargs aws sync`, and then xargs on some crunching pipeline, is much faster than a 100-core Spark cluster.
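Roughly this shape (made-up bucket, and it assumes keys without spaces; -P sets the parallelism):

aws s3 ls s3://my-bucket/input/ | awk '{ print $4 }' |
    xargs -P16 -I{} aws s3 cp "s3://my-bucket/input/{}" ./data/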
That's a hardware management question. The optimized binary used in my shell script still runs orders of magnitude faster and cheaper if you orchestrate 100 machines for it than any Hadoop, Spark, Beam, Snowflake, Redshift, Bigquery or what have you.
That's not to say I'd do everything in shell. Most stuff fits well into SQL, but when it comes to optimizing processing over TB or PB scale, you won't beat shell + massive hw orchestration.
I suppose the Python side is a strawman then - who would do that for a small dataset that fits on a machine? Or have I been using shell for too long :-)
PSQL="psql postgresql://$POSTGRES_USER:$POSTGRES_PASSWORD@$DATABASE_HOST:$DATABASE_PORT/$POSTGRES_DB -t -P pager=off -c "
OLD="CURRENT_DATE - INTERVAL '5 years'"
$PSQL "SELECT id from apt WHERE apt.created_on > $OLD order by apt.created_on asc;" | while
read -r id; do
if [[ $id != "" ]]; then
printf "\n\*\* Do something in the loop with id where newer than \"$OLD\" \*\*\*\n"
# ...
fi
done
A WASM Gawk wrapper script in a web browser, with relevant information about the schema grammar / template file, would allow for alternate display formats beyond CLI text (e.g. HTML, LaTeX, "database report output", CSV, etc.)
> Hands down, shell scripting is one of my all time favorite languages. It gets tons of hate, e.g. "If you have to write more than 10 lines, then use a real language," but I feel like those assertions are more socially-founded opinions than technically-backed arguments.
It is "opinion" based on debugging scripts made by people (which might be "you but few years ago") that don't know the full extent of death-traps that are put in the language. Or really writing anything more complex.
About the only strong side of shell as a language is the pipe character. Everything else is less convenient at best, actively dangerous at worst.
Sure, "how to write something in a limited language" might be fun mental excercise but as someone sitting in ops space for good part of 15 years, it's just a burden.
Hell, I'd rather debug Perl script than Bash one...
Yeah, if it is a few pipes and some minor post-processing I'd use it too (a pipe is the easiest way to do it out of all languages I've seen) but that's about it.
It is nice for writing one-liners on the command line, but the characteristics that make it nice there make it a worse programming language. A bit like Perl in that matter.
You say this as if it wasn't extremely common to find giant python monstrosities that can be replaced by a handful of lines of shell. TBF the shell code often is not just cleaner and easier to follow, but also faster.
It's possible to use the wrong tool for the job in any language - including language choice itself.
Dismissing a programming language because it's not shell and dismissing shell because it's not a programming language are the same thing - a bad idea if that's your only decision criterion.
If I need to run 11 commands in a row, suddenly I need to make sure new tooling is installed in my instance and/or ship a binary?
What if those 11 lines are setting up some networking? Now I need to go write a 400-line Go program to use netlink to accomplish the same task? Or should I condense that to 80 lines of Go that shell out to the commands replicating the 11 lines of simple bash?
There are plenty of reasons to do this, I have done it more than once. None of those reasons are "crossed an arbitrary magic number of 'lines of shell'".
If my bash script is more than 10 lines, I switch to python and if that's more than 10 lines I switch to C! And if that's more than 10 lines I use assembly!
>However, if you instead organize all your data in a format that's sympathetic to line-oriented processing on stdin-stdout, then shell will work with you instead of against.
Not even that is necessary. Just use structured data formats like json. If you are consuming some API that is not json but still structured, use `rq` to convert it to json. Then use `jq` to slice and dice through the data.
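e.g. slicing an API response down to just what you need (endpoint and field names made up):

curl -s https://api.example.com/users |
    jq -r '.[] | select(.active) | [.id, .name] | @tsv'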
dmenu + fzf + jq + curl is my bread and butter in shell scripts.
However, I still haven't managed to find a way to do a bunch of tasks concurrently. No, xargs and parallel don't cut it. Just give me an opinionated way to do this that is easily inspectable, loggable and debuggable. Currently I hack together functions in a `((job_i++ < max_jobs)) || wait -n` spaghetti.
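Spelled out, the spaghetti looks roughly like this (crunch is a stand-in for whatever function does the work; wait -n needs bash 4.3+):

max_jobs=8 job_i=0
for f in inputs/*; do
    ((job_i++ < max_jobs)) || wait -n   # once max_jobs are running, wait for a slot to free up
    crunch "$f" &
done
wait   # wait for the stragglers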
I think this comment points to an even deeper insight: shell is a crappy programming language but with amazing extensibility.
I would argue that once you pull in jq, you're no longer writing in "shell", you're writing in jq, which is a separate and different language. But that's precisely the point! Look at how effortless it is to (literally) shell out to a slew of other languages from shell.
The power of shell isn't in the scripting language itself, it's in how fluidly it lets you embed snippets of tr, sed, awk, jq, and whatever else you need.
And, critically, these languages callable from shell were not all there when shell was designed. The extension interface of spawning processes and communicating with arguments and pipes is just that powerful. That's where shell shines.
Do you have examples of concurrent use-cases that xargs and parallel don't satisfy? I discovered parallel recently and was blown away by how much it improves things. I've only really used it in basic scenarios so far, just wondering where its limitations are.
There is also `jc` (written in Python) with the added benefit that it converts output of many common unix utilities to json. So you would not need to parse `ip` for example.
> "If you have to write more than 10 lines, then use a real language"
I swear, there should be a HN rule against those. It pollutes every single Shell discussion, bringing nothing to it and making it hard for others to discuss the real topic.
There are three numbers in this industry: 0, 1 and infinity. Any other number - especially when stated as a rule, limitation, or law - is highly suspect.
Are you one of those people who take everything literally, so any and all jokes fly far over their heads?
This rule of ten lines or less is clearly meant as an illustrative guideline. Obviously if you have a shell script that has 11 lines, but does what it has to do reliably, nobody will be bothered.
The idea that the rule is trying to convey is "don't write long, complex programs in shell". Arguing about exact numbers or wording here is detracting from the topic at hand.
> What is the Shell paradigm? I would argue that it's line-oriented pipelines.
Which Python can do relatively well, by using the `subprocess` module.
Here is an example, including a useless use of cat (https://porkmail.org/era/unix/award), that finds all title lines in README.md and uppercases them with `tr`.
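Something along these lines (a rough sketch, assuming a markdown README.md with # headings in the current directory):

import subprocess

# useless use of cat | grep for the heading lines | tr to uppercase
cat = subprocess.Popen(["cat", "README.md"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "^#"], stdin=cat.stdout, stdout=subprocess.PIPE)
tr = subprocess.Popen(["tr", "a-z", "A-Z"], stdin=grep.stdout, stdout=subprocess.PIPE)
cat.stdout.close()
grep.stdout.close()
print(tr.communicate()[0].decode(), end="")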
Is this more complicated than doing it in bash? Certainly. But on the other side of that coin, it's a lot easier in Python to do a complex regular expression (maybe depending on a command line argument) on one of those, using the result in an HTTP request via the `requests` module, packing the results into a diagram rendered as a PNG and sending it via email.
Yes, that is a convoluted example, but it illustrates the point I am trying to make. Everything outlined could probably be done in a bash script, but I am pretty certain it would be much harder, and much more difficult to maintain, than doing this in Python.
Bash is absolutely fine up to a point. And with enough effort, bash can do extremely complex things. But as soon as things get more complex than standard unix tools, I'd rather give up on the comfort of having specialized syntax for pipes and filehandles, and write a few more lines handling those, if that means that I can do the more complex stuff easily using the rich module ecosystem of Python.
Now do that again, but this time the regular expression is controlled by 2 command line params, one of which gives it the substitution, the other a boolean switch that tells it whether to ignore case. And the script has to give a good error if the substitution isn't a valid regular expression. It should also give me a help text for its command line options if I ask it with `-h, --help`.
In Python I can use `optparse`/`argparse`, and use the error from `re.compile` to do this.
Of course this is also possible in bash, but how easy is it to code in comparison, and how maintainable is the result?
In the example I gave I wouldn't write that in a script file, so I would just alter the command itself.
If I wanted to parse cli args I would use case on the input to mux out the args. I personally prefer writing cli interfaces this way (when using a scripting language).
while test $# -gt 0; do
    case "$1" in
        -f|--flag) shift; FLAG="$1";;
    esac
    shift
done
> But on the other side of that coin, it's a lot easier in Python to do a complex regular expression (maybe depending on a command line argument) on one of those, using the result in an HTTP request via the `requests` module, packing the results into a diagram rendered as a PNG and sending it via email.
Doesn't sound so bad. A quick argument parser, a call out to grep or sed, pipe to curl, then to graphviz I guess (I don't really know much about image generation tools though), then compose the mail with a heredoc and run sendmail. Sounds like 10 to 15 lines for a quick and dirty solution.
It's certainly possible, but here comes the fun: how readable/maintainable/extendable is the solution? How well does it handle errors, assist the user? Add checking whether all the programs are installed, and useful error messages, into the mix. Then the API does a tiny change and now we need a `jq` between curl and graphviz, and maybe we'd need an option for that case as well, and so on, and so on, ...
Bash scripts have a nasty tendency to grow, sometimes in ways that are disproportional to the bit of extra functionality that is suddenly required. Very quickly, a small quick'n dirty solution can blow up to a compost-heap ... no less dirty, but now instead of a clean-wipe, I'd need a shovel to get through it.
I think my handle speaks for itself as to how much I like bash. But I have had the pleasure of getting handed over bash scripts, hundreds of lines long, with the error description being "it no longer works, could you have a look at it?", and the original author both unreachable and apparently having strong feelings against comments.
And in many of these cases, it took me less time to code a clean solution in Python or Go, than it took me to grok what the hell that script was actually doing.
I would agree, with the caveat that Bourne Shell isn't really a programming language, and has to be seen as such to be loved.
Bourne Shell Scripting is literally a bunch of weird backwards compatible hacks around the first command line prompt from 1970. The intent was to preserve the experience of a human at a command prompt, and add extra functionality for automation.
It's basically a high-powered user interface. It emphasizes what the operator wants for productivity, instead of the designer in her CS ivory tower of perfection. You can be insanely productive on a single line, or paste that line into a file for repeatability. So many programmers fail to grasp that programming adds considerations that the power user doesn't care about. The Shell abstracts away all that unnecessary stuff and just lets you get simple things done quickly.
- each line usually takes a minimum of 10 minutes to debug unless you've done bash scripting for... ten years
- basic constructs like the arg array are broken once you have special chars and spaces and want to pass those args to other commands. and UNICODE? Ha.
- standard library is nil, you're dependent on a hodgepodge of possibly installed programs
- there is no dependency resolution or auto-install of those programs or libraries or shell scripts. since it is so dependent on binary programs, that's a good thing, but also sucks for bash programmers
- horrid rules on type conversions, horrid syntax, space-significant rules
- as TFA shows, basic error checking and other conventions are horrid; yeah, I want a crap 20-line header for everything
- effective bash is a bag of tricks. Bag of tricks programming is shit. You need to do ANYTHING in it for parsing, etc? Copy paste in functions is basically the solution.
- I'm not going to say interpreter errors are worse than C++ errors, but it's certainly not anything good.
Honestly since even effing JAVA added a hashbang ability, I no longer need bash.
Go ahead, write some bash autocompletion scripts in bash. Lord is that awful. Try writing something with a complex options / argument interface and detect/parse errors in the command line. Awful.
Bash is basically software engineering from the 1970s, oh yeah, except take away the word "engineering". Because the language is actively opposed to anything that "engineering" would entail.
> - basic constructs like the arg array are broken once you have special chars and spaces and want to pass those args to other commands. and UNICODE? Ha.
Any example with this? The following works reasonably well for me.
args=(-a --b 'arg with space' "一 二 三")
someprog "${args[@]}"
By the way, nobody uses exclusively bash. When I worked for a cloud provider, it was basically 30% Python (Ansible), 30% Perl, 5 to 10% bash, and a bit of other languages depending on the client needs (mostly Java, but also Julia and R).
There are workloads where shell scripts are the so-called right tool for a job. All too often I see people writing scripts in "proper" languages and calling os.system() on every other line. Shell scripts are good for gluing programs together. It's fine to use them for that.
For me it's that once you make the switch to a "proper" language, you realize how much lifting pipelines do when it comes to chaining external binaries together.
Eh, this is true but I don't think it's because of the programming model of bash. I feel like this is conflating the *nix ecosystem with bash. If every programming language was configured by default and had access to standard unix tools with idiomatic bindings, Shell's advantages would be greatly reduced. You still get a scripting language with some neat tricks but I don't think I would reach for it nearly as often if other things were an option.
And sure sure you can call any process from a language but the assumptions are different. No one wants to call a Java jar that has a dependency on the jq CLI app being available.
This has been tried repeatedly - language idiomatic bindings tend to be clunky compared to (e.g.) a simple | pipeline or a couple of <() io redirections.
Shell is a tool that turns out to be pretty good for some things, particularly composing functionality out of other programs and also doing system configuration/tuning stuff to tailor an environment for other programs. It's also really handy for automating tasks you find yourself repeating.
Programming languages are a tool that are pretty good for other things - making new programs, tricky logic, making the most (or at least more than a shell script launching 1000s of new processes) efficient use of a computer.
Trying to replace one with the other is not really useful - they have different jobs. Learning to use them in conjunction on the other hand... there's a lot of power in that.
By comparison - javascript and html. They don't replace each other - yet they are both computer languages used in the same domain, and both have strengths and weaknesses. They have different jobs. And when you use them in conjunction you get something pretty darn powerful.
I also like Bash - it's a powerful language, especially when combined with a rich ecosystem of external commands that can make your life easier, e.g. GNU Parallel.
Handling binary data can also work in Bash, provided that you just use it as a glue for pipelines between other programs (e.g. feeding video data into ffmpeg).
One time, while working on some computer vision project, I had a need to hack up a video-capture-and-upload program for gathering training data during a certain time of day. It took me about 20 minutes and 50 lines of Bash to set up the whole thing, test it, and be sure it works.
To add to this, it's designed to work in conjunction with small programs. You don't write everything using bash (or whatever shell) built-ins. It will feel like a crappier Perl. If there is some part of your script where you're struggling to use an existing tool (e.g. built-ins, system utils), write your own small program to handle that part of the stream and add it in to your pipe. Since shell is a REPL, you get instant feedback and you'll know if it's working properly.
It's also important to learn your system's environment too. This is your "standard library", and it's why POSIX compatibility is important. You will feel shell is limited if you don't learn how to use the system utilities with shell (or if your target system has common utilities missing).
As an example of flexibility, you can use shell and system utilities in combination with CGI and a basic web server to send and receive text messages on an Android phone with termux. Similar to a KDE Connect or Apple's iMessage.
> I feel like those assertions are more socially-founded opinions than technically-backed arguments
You think the complaints about rickety, unintuitive syntax are "socially founded"? I can't think of another language that has so many pointless syntax issues every time I revisit it. I haven't seen a line of Scheme in over a decade, and I'm still fairly sure I could write a simple if condition with less likelihood of getting it wrong than Bash.
I came at it from the other end, writing complex shell scripts for years because of the intuition that python would be overkill. But there was a moment when I realized how irrational this was: shell languages are enough of a garbage fire that Python was trivially the better choice for my scripts the minute flow control enters the picture.
Line-oriented pipelines are great and have their place, but I'm still sticking to a high-level general purpose programming language (let's abbreviate this as HGPPL) for scripts longer than 10 lines, for the following reasons:
* I like the HGPPL data structures and the convenient library for manipulating them (in my case this is Clojure, which has a great core library). Bash has indexed and associative arrays.
* Libraries for common data formats are also used in a consistent way in the HGPPL. I don't have to remember a DSL for every data format - i.e. how to use jq when dealing with JSON. Similarly for YAML, XML, CSVs; I can also do templating for configuration files for nginx and so on. I've seen way too many naive attempts to piece together valid YAML from strings in bash to know it's just not worth doing.
* I don't want to switch programming language from the main application, and I find it helps "break down silos" when everyone can read and contribute to some code. If a team is just sysadmins - sure, make bash the official language and stick to it.
* I can write scripts without repeating myself using namespaces and higher-order functions, which is my choice of paradigm for abstractions; others write cleanly with classes. You can follow best practices and avoid the use of ENV vars, but that requires extra discipline and it is hard to enforce on others in the type of places where bash is used.
Also, the fact that $() invokes a subparser, which lets you use double quotes inside an already double-quoted expression, is something I miss when using Python f-strings.
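E.g. this just works, because the quoting state resets inside $():

echo "backup of $(basename "$HOME") finished at $(date "+%H:%M")"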
> My basic thesis is that Shell as a programming language---with it's dynamic scope, focus on line-oriented text, and pipelines---is simply a different programming paradigm than languages like Perl, Python, whatever.
This argument is essentially the same as "dynamic typing is just a different programming paradigm than static typing, and not intrinsically better or worse" - but to an even greater extent, because bash isn't really typed at all.
To those who think that static (and optional/gradual) typing brings strong benefits with little downsides over dynamic typing and becomes increasingly important as the size of a program increases, bash is simply unacceptable for any non-trivial program.
Other people (like yourself) that think that static typing isn't that important and "it's just a matter of preference" will be fine with an untyped language like bash.
Unfortunately, it's really hard to find concrete, clear evidence that one typing paradigm is better than the other, so we can't really make a good argument for one or the other using science.
However, I can say that you're conflating different traits of shell languages here. You say "dynamic scope, focus on line-oriented text, and pipelines" - but each of those are very different, and you're missing the most contested one (typing). Shell's untypedness is probably the biggest complaint about it, and the line-oriented text paradigm is really contentious, but most people don't care very much about the scoping, and lots of people like the pipelines feature.
A shell language that was statically-typed, with clear scoping rules, non-cryptic syntax, structured data, and pipelines would likely be popular and relatively non-controversial.
Flatten the damn thing and process it relationally. Linear data scans and copying are so fast on modern hardware that it doesn't matter. It's counterintuitive for people to learn that flattened nested structure with massive duplication still processes faster than that deeply nested beast because you have to chase pointers all over the place. Unfortunately that's what people learn at java schools and they get stuck with that pointer chasing paradigm for the rest of their careers.
If d["a"]["b"] is 42, then how could d["a"]["b"]["c"] also be 42? What you want doesn't make sense semantically. Normally, we'd expect these two statements to be equivalent
I mean you got it, but it's something a lot of people want. The semantic reason for it is so you can look up an arbitrary path on a dict and, if it's not present, get a default, usually None. It can be done by catching KeyError but it has to happen on the caller side, which is annoying. I can't make a real nested mapping that returns None if the keys aren't there.
d = magicdict()
is42 = d["foo"]["bar"]["baz"]
# -> You can read any path and get a default if it doesn't exist.
d["hello"]["world"] = 420
# -> You can set any path and d will then contain { "hello": { "world": 420 } }
People use things like jmespath to do this but the fundamental issue is that __getitem__ isn't None safe when you want nested dicts. It's a godsend when dealing with JSON.
I feel like we're maybe too in the weeds, I should have just said "now have two expressions in your lambda."
In this case you're chaining discrete lookup operations where it sounds like you really want a composite key. You could easily implement this if you accepted the syntax of it as d["a.b.c"] or d["a", "b", "c"] or d.query("a", "b", "c")
Otherwise I'm not sure of a mainstream language that would let you do a.get(x).get(y) == 42 but a.get(x).get(y).get(z) == 42, unless you resorted to monkey patching the number type, as it implies 42.get(z) == 42, which seems.. silly
The biggest issue is that error handling is completely broken in POSIX shell scripting (including Bash). Even errexit doesn't work as any normal language would implement it (one could say it is broken by design).
So if you don't care about error cases everything is fine, but if you do, it gets ugly really fast. And that is the reason why other languages are probably better suited if you want to write something bigger than 10 lines.
However, I have to admit, I don't follow that advice myself...
> The biggest Issue is that error handling is completely broken in POSIX shell scripting (including Bash). Even errexit doesn't work as any normal language would implement it (One could say it is broken by design).
Yes, and my personal favorite: functions can behave differently depending on whether they are being called from a conditional expression or from a normal context. Errexit has no effect if the function is called from a conditional expression.
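A minimal illustration (bash):

set -e
f() {
    false                  # with errexit you'd expect execution to stop here
    echo "still running"
}
if f; then echo "f reported success"; fi   # prints both lines: errexit is suppressed inside the condition
f                                          # called normally: the script exits at `false` and prints nothing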
I sometimes regret I never learned to "really" write shell scripts. I stumbled across Perl early on, and for anything more complex than canned command invocation(s) or a simple loop, I usually go for Perl.
There is something to be said in favor of the shell being always available, but Perl is almost always available. FreeBSD does not have it as part of the base system, but OpenBSD does, and most Linux distros do, too.
But it is fun to connect a couple of simple commands via pipes and create something surprisingly complex. I don't do it all the time, but it happens.
As someone who has used a lot of shell over my career, I do love it as a utility and a programming paradigm.
However the biggest issues I've had is that the code is really hard to test, error handling in shell isn't robust, and reusability with library type methods is not easy to organize or debug.
Those are deal breakers for me when it comes to building any kind of non trivial system.
Shell scripting also inspired some choices (especially syntax) of the Toit language (toitlang.org).
Clearly, it's for a different purpose, and there are some things that wouldn't work in a general-purpose language that isn't as focused on line-based string processing, but we are really happy with the things we took from bash.
I've written a lot of shell scripts. I have my own best practices that work for me. I don't like it one bit. I mean, it's enjoyable to write shell scripts, it's just not enjoyable to deal with them long-term.
/2cents