
Hands down, shell scripting is one of my all time favorite languages. It gets tons of hate, e.g. "If you have to write more than 10 lines, then use a real language," but I feel like those assertions are more socially-founded opinions than technically-backed arguments.

My basic thesis is that Shell as a programming language---with its dynamic scope, focus on line-oriented text, and pipelines---is simply a different programming paradigm than languages like Perl, Python, whatever.

Obviously, if your mental model is BASIC and you try to write Python, then you encounter lots of friction and it's easy for the latter to feel hacky, bad and ugly. To enjoy and program Python well, it's probably best to shift your mental model. The same goes for Shell.

What is the Shell paradigm? I would argue that it's line-oriented pipelines. There is a ton to unpack in that, but a huge example where I see friction is overuse of variables in scripts. Trying to stuff data inside variables, with shell's paucity of data types, is a recipe for irritation. However, if you instead organize all your data in a format that's sympathetic to line-oriented processing on stdin-stdout, then shell will work with you instead of against you.
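For example, rather than accumulating results in shell variables or arrays, keep the data as lines flowing through a pipe. A rough sketch (file name and column made up): the ten most frequent values in column 2 of a tab-separated file.

    # keep data as lines on stdin/stdout instead of stuffing it into variables
    cut -f2 data.tsv | sort | uniq -c | sort -rn | head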

/2cents




Shell and SQL make you 10x more productive than any alternative. Nothing even comes close. I've seen people scramble for an hour to write some data munging, then spend another hour running it through a thread pool to utilize those cores, while somebody comfortable in shell writes a parallelized one-liner, rips through GBs of data, and delivers the answer in 15 minutes.
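For instance, a rough sketch of that kind of one-liner (assumes GNU parallel and a directory of gzipped, line-oriented access logs; the field number is made up):

    # decompress the logs in parallel, then count requests per status code
    ls logs/*.gz | parallel 'zcat {}' | awk '{print $9}' | sort | uniq -c | sort -rn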

What Python is to Java, Shell is to Python. It speeds you up several times. I started using inline 'python -c' more often than the python repl now as it stores the command in shell history and it is then one fzf search away.

While neither Shell nor SQL is perfect, there have been many ideas to improve them, and people can't wait for something new like Oil shell to get production ready, getting the shell quoting hell right, or for somebody to fix up SQL, bringing old ideas from Datalog and QUEL into it, fixing the goddamn NULL joins, etc.

But honestly, nothing else even comes close to this 10x productivity increase over the next best alternative. No, thank you, I will not rewrite my 10 lines of sh into Python and explode it into 50 lines of shuffling clunky objects around. I'll instead go and reread the man page on how to write an if expression in bash again.


> getting the shell quoting hell right

Shameless plug coming, but this has been a pain point for me too. I found the issue with quotes (in most languages, but particularly in Bash et al) is that the same character is used to close the quote as is used to open it. So in my own shell I added support for using parentheses as quotes in addition to the single and double quotation ASCII symbols. This then allows you to nest quotation marks.

https://murex.rocks/docs/parser/brace-quote.html

You also don’t need to worry about quoting variables as variables are expanded to an argv[] item rather than expanded out to a command line and then any spaces converted into new argv[]s (or in layman’s terms, variables behave like you’d expect variables to behave).

https://github.com/lmorg/murex


One of my favorite Perl features that has been disappointingly under-appropriated by other languages is quoting with q(...).


This is one of my favorite features of Ruby!

Though Ruby makes it confusing AF because there are two quoting types for both strings and symbols, and they're different. (%Q %q %W %w %i) I can never remember which does which.... the letter choice feels really arbitrary.


Elixir has something like this too, but even more powerful (you can define your own):

https://elixir-lang.org/getting-started/sigils.html#strings-...


Ruby and Elixir both have features like this. Very sweet.

Elixir has sigils, which are useful for defining all kinds of literals easier, not just strings:

https://elixir-lang.org/getting-started/sigils.html#strings-...

You can also define your own. It's pretty great.


This means that you can even quote the delimiter in the string as long as it's balanced.

    $X=q( foo() )
Should work if it's balanced. If you choose a different pair like []{} then you can avoid hitting collisions. It also means that you can trivially nest quotations.

I agree that this qualified quotation is really underutilized.


Off topic. What's your opinion on Python?

I also write shell scripts, but I'm just curious what you would think about a comparison.


I'm not a fan of Python, however that's down to personal preference rather than objective fact. If Python solves a problem for other people then who am I to judge :)


I noticed that I became so much quicker after taking 1 hour to properly learn awk. Yes, it literally takes about 1 hour.


Awk is awesome, but saying it literally takes 1 hour to properly learn it is overselling it a bit.


I really don't think so! If you have experience with any scripting, you can fully grok the fundamentals of awk in 1 hour. You might not memorize all the nuances, but you can establish the fundamentals to a degree that most things you would try to achieve would take just a few minutes of brushing up.

For those that haven't taken the time yet, I think this is a good place to start:

https://learnxinyminutes.com/docs/awk/

Of course, some people do very advanced things in awk and I absolutely agree that 1 hour of study isn't going to make you a ninja, but it's absolutely enough to learn the awk programming paradigm so that when the need arises you can quickly mobilize the solution you need.

For example: if you're quick to the draw, it can take less time to write an awk one-liner to calculate the average of a column in a CSV than it does to copy the CSV into Excel and highlight the column. It's a massive productivity booster.
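Something like this (a sketch, assuming a plain CSV with no quoted fields and the numeric value in column 3):

    # average of column 3 in a simple CSV
    awk -F, '{ sum += $3; n++ } END { if (n) print sum / n }' data.csv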


Brian Kernighan covers the entire [new] awk language in 40 pages - chapter 2.

There are people who have asked me scripting questions for over a decade, who will not read this for some reason.

It could be read in an hour, but not fully retained.

https://archive.org/download/pdfy-MgN0H1joIoDVoIC7/


I feel like I do this every three years then proceed to never use it. Then I read a post on hn and think about how great it could be; rinse and repeat


yeah that's exactly right. it may only take an hour to learn, but every time I need to use awk it seems like I have to spend an hour to re-learn its goofy syntax.


alas this is true; I never correctly recall the order of particular function args as they are fairly random. Still, it beats the alternative of having to continually internalize entire fragile ecosystems to achieve the same goal.


yeah you're definitely right. im sure if it was something i had to use more consistently i'd be able to commit it to memory. maybe...


What? The awk manual is only 827 highly technical pages[1]. If you can't read and internalize that in an hour, I suspect you're a much worse programmer than the OP.

[1] https://www.gnu.org/software/gawk/manual/gawk.html

For the sarcasm impaired among us, everything above this, but possibly including this sentence, is sarcasm.


Think the more relevant script equivalent of 'Everything in this statement is false.' is 'All output must return false to have true side effects.'

The quick one ~ true ~ fix was ! or #! without the 1024k copyright.

s-expression notation avoids the issue with (."contents")

MS Windows interpretation is much more terse & colorful.


Awk is an amazingly effective tool for getting things done quickly.

Submitted yesterday:

Learn to use Awk with hundreds of examples

https://github.com/learnbyexample/Command-line-text-processi...

https://news.ycombinator.com/item?id=33349930


All you need to do is learn that cmd | awk '{ print $5 }' will print out the 5th word as delimited by one or more whitespace characters. Regexes support this easily but are cumbersome to write on the command line.


Doing that, maybe with some inline concatenation to make a new structure, and this is about all I use:

Printing based on another field, example gets UIDs >= 1000:

    awk -F: '$3 >= 1000 {print $0}' /etc/passwd
It can do plenty of magic, but knowing how to pull fields, concat them together, and select based on them covers like 99% of the things I hope to do with it.


And don't forget the invisible $0 field in awk...


And $NF


It takes a lot less time to learn to be fairly productive with awk than with, say, vi / vim.

Over time I've realized that gluing these text manipulation tools together is only an intermediate step toward learning how to write and use them in a manner that is maintainable across several generations of engineers as well as portable across many different environments, and that's still a mostly unsolved problem IMO for not just shell scripts but programming languages in general. For example, the same shell script that does something as seemingly simple as performing a sha256 checksum on macOS won't work on most Linux distributions.

So in the end one winds up writing a lot of utilities all over again in yet another language for the sake of portability, which ironically hurts maintainability and readability for sure because it's simply more code that can rot.


The only thing I use AWK for is getting at columns from output (possibly processing or conditionally doing something on each); what would be the next big use-case?


I use it frequently to calculate some basic statistics on log file data.

Here's a nice example of something similar: https://drewdevault.com/dynlib


awk automata theory and OOP the results. Add Unicode for extra tics!

Scripted Chomsky grammar ( https://en.wikipedia.org/wiki/Universal_grammar ) to unleash the power of regular expressions.


I have used it to extract a table to restore from a MySQL database dump.


For simple scripting tasks, yes. I have had the opposite experience for more critical software engineering tasks (as in, coding integrated over time and people).

Language aside, the ecosystem and culture do not afford enough in the way of testing, dependency management, feature flags, static analysis, legibility, and so on. The reason people say to keep shell programs short is because of these problems: it needs to be possible to rewrite shell programs on a whim. At least then, you can A/B test and deploy at that scope.


awk is great for things that will be used over several decades (where the hardware / OS you started with no longer exists at the end of a multi-decade project, but data from start to end still has to be used).


I feel like the reasons for this are:

* Shell scripts force you to think in a more scalable way (data streams)

* Shell scripts compose rich programs rather than simplistic functions

* Shells encourage you to program with a rich, extensible feature set (ad-hoc I/O redirection, files)

The only times I don’t like shell scripts are when dealing with regex and dealing with parallelism


The POSIX shell does not implement regex.

What is used both in case/esac and globbing are "shell patterns." They are also found in variable pattern removal with ${X% and ${X#.

In "The Unix Programming Environment," Kernighan and Pike apologized for these close concepts that are easily mistaken for one another.

"Regular expressions are specified by giving special meaning to certain characters, just like the asterix, etc., used by the shell. There are a few more metacharacters, and, regrettably, differences in meanings." (page 102)

Bash does implement both patterns and regex, which means discerning their difference becomes even more critical. The POSIX shell is easier in memory for this reason, and others.

http://files.catwell.info/misc/mirror/
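A quick side-by-side in bash, since it supports both (== takes a glob-style shell pattern, =~ takes a POSIX ERE; file name made up):

    f=report-2023.csv

    # shell pattern -- same family as case/esac, globbing, ${var#...}
    [[ $f == report-*.csv ]] && echo "pattern match"

    # regular expression -- note the different metacharacters
    [[ $f =~ ^report-[0-9]{4}\.csv$ ]] && echo "regex match"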


> The only times I don’t like shell scripts are when dealing with regex and dealing with parallelism

Wow, for me parallelism is one of the best features of a unix shell and I find it vastly superior to most other programming languages.


Can you expand on the parallelism features you use and what shell? In bash I've basically given up managing background jobs because identifying and waiting for them properly is super clunky; throttling them is impossible (pool of workers) and so for that kind of thing I've had to use GNU parallel (which is its own abstruse mini-language thing and obviously nothing to do with shell). Ad-hoc but correct parallelism and first class job management was one of the things that got me to switch away from bash.


GNU parallel


It's great for embarrassingly parallel data processing, but not good for concurrent/async tasks.


I'd add a working knowledge of regex to that. With a decent text editor + some fairly basic regex skills you can go a long way.


> I started using inline 'python -c' more often than the python repl now as it stores the command in shell history and it is then one fzf search away.

Do you not have a ~/.python_history? The exact same search functions are available on the REPL. Ctrl-R, type your bit, bam.


Exact same - can I use fzf history search using Ctrl+R like I can in shell?


I've just started installing ipython on pretty much every python environment I set up on personal laptops, but there is repl history even without ipython: https://stackoverflow.com/a/7008316/1170550


I expect nushell to massively change how I work:

https://www.nushell.sh/

It's a shell that is actually built for structured data, taking lessons learned from PowerShell and others.


> getting the shell quoting hell right

Running `parallel --shellquote --shellquote --shellquote` and pasting in the line you want to quote thrice may alleviate some of the pain.

By no means ideal, though.


Python is a terrible comparison language here. Of course shell is better than Python for shell stuff; no one should suggest otherwise. Python is extremely verbose, it requires you to be precise with whitespace, and using regex has friction because it's not actually built into the language syntax (unless something has changed very recently).

The comparison should be to perl or Ruby, both of which will fare better than Python for typical shell-type tasks.


If I'm interactively composing something I do very much like pipes and shell commands, but if it's a thing I'm going to be running repeatedly then the improved maintainability of a python script, even if it does a lot of subprocess.run, is preferable to me. "Shuffling clunky objects around" seems more documented and organized than "everything is a bytestring".

But different strokes and all that.


> while somebody comfortable in shell writes a parallelized one-liner, rips through GBs of data, and delivers the answer in 15 minutes.

This also works up to a point where those GBs turn into hundreds of GBs, or even PBs, and a proper distributed setup can return results in seconds.


I often find that downloading lots of data from s3 using `xargs aws sync`, and then xargs on some crunching pipeline, is much faster than a 100-core Spark cluster
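Roughly this shape (a sketch; bucket, prefix list and column are made up, and -P controls how many syncs run at once):

    # pull many prefixes from S3 in parallel, then crunch the result locally
    xargs -P 8 -I{} aws s3 sync "s3://my-bucket/{}" "data/{}" < prefixes.txt
    find data -name '*.csv' -exec cat {} + | awk -F, '{ sum += $5 } END { print sum }'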


That's a hardware management question. The optimized binary used in my shell script still runs orders of magnitude faster and cheaper if you orchestrate 100 machines for it than any Hadoop, Spark, Beam, Snowflake, Redshift, Bigquery or what have you.

That's not to say I'd do everything in shell. Most stuff fits well into SQL, but when it comes to optimizing processing over TB or PB scale, you won't beat shell+massive hw orchestration.


usually you use specific frameworks for that, not pure Python.


I suppose the Python side is a strawman then - who would do that for a small dataset that fits on a machine? Or have I been using shell for too long :-)


I thought the above comment was about datasets that do not fit on one's machine?


As far as control-R command history searching, really enjoying McFly https://github.com/cantino/mcfly


> while somebody comfortable in shell writes a parallelized one-liner

Do you have an example of this? I didn't even know you could make SQL calls in scripts.


I don’t have an example, but this article comes to mind and you may be able to find an example in it

https://adamdrake.com/command-line-tools-can-be-235x-faster-...


  PSQL="psql postgresql://$POSTGRES_USER:$POSTGRES_PASSWORD@$DATABASE_HOST:$DATABASE_PORT/$POSTGRES_DB -t -P pager=off -c "
  
  OLD="CURRENT_DATE - INTERVAL '5 years'"

  $PSQL "SELECT id from apt WHERE apt.created_on > $OLD order by apt.created_on asc;" | while 
  read -r id; do
    if [[ $id != "" ]]; then
      printf "\n\*\* Do something in the loop with id where newer than \"$OLD\" \*\*\*\n"
      # ...
    fi
  done


mysql, psql etc. let you issue sql from the command line

I don't do much sql in bash scripts but I do keep some wrapper scripts that let me run queries from stdin to databases in my environment


A WASM Gawk wrapper script in a web browser, with relevant information about schema grammar / a template file, would allow for alternate display formats beyond CLI text (aka HTML, LaTeX, "database report output", CSV, etc.)


> Hands down, shell scripting is one of my all time favorite languages. It gets tons of hate, e.g. "If you have to write more than 10 lines, then use a real language," but I feel like those assertions are more socially-founded opinions than technically-backed arguments.

It is "opinion" based on debugging scripts made by people (which might be "you but few years ago") that don't know the full extent of death-traps that are put in the language. Or really writing anything more complex.

About the only strong side of shell as a language is the pipe character. Everything else is less convenient at best, actively dangerous at worst.

Sure, "how to write something in a limited language" might be fun mental excercise but as someone sitting in ops space for good part of 15 years, it's just a burden.

Hell, I'd rather debug Perl script than Bash one...

Yeah, if it is a few pipes and some minor post-processing I'd use it too (the pipe is the easiest way to do it out of all languages I've seen) but that's about it.

It is nice to write one-liners on the command line, but the characteristics that make it nice there make it a worse programming language. A bit like Perl in that regard.


You say this as if it wasn't extremely common to find giant python monstrosities that can be replaced by a handful of lines of shell. TBF the shell code often is not just cleaner and easier to follow, but also faster.

It's possible to use the wrong tool for the job in any language - including language choice itself.

Dismissing a programming language because it's not shell and dismissing shell because it's not a programming language are the same thing - a bad idea if that's your only decision criteria.


Bash is a good tool if the script is short enough, but if you have to write more than 10 lines, then use a real language.


Nonsense. That's a terrible metric.

If I need to run 11 commands in a row, suddenly I need to make sure new tooling is installed in my instance and/or ship a binary?

What if those 11 lines are setting up some networking? Now I need to go write a 400-line Go program to use netlink to accomplish the same task? Or should I condense that to 80 lines of Go that shell out to the commands replicating the 11 lines of simple bash?

There are plenty of reasons to do this, I have done it more than once. None of those reasons are "crossed an arbitrary magic number of 'lines of shell'".


If my bash script is more than 10 lines, I switch to python and if that's more than 10 lines I switch to C! And if that's more than 10 lines I use assembly!

/s


>However, if you instead organize all your data in a format that's sympathetic to line-oriented processing on stdin-stdout, then shell will work with you instead of against you.

Not even that is necessary. Just use structured data formats like json. If you are consuming some API that is not json but still structured, use `rq` to convert it to json. Then use `jq` to slice and dice through the data.

dmenu + fzf + jq + curl is my bread and butter in shell scripts.
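A typical shape (sketch, endpoint made up):

    # pick an item from a JSON API interactively and print its id
    curl -s 'https://api.example.com/items' \
      | jq -r '.items[] | "\(.id)\t\(.name)"' \
      | fzf | cut -f1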

However, I still haven't managed to find a way to do a bunch of tasks concurrently. No, xargs and parallel don't cut it. Just give me an opinionated way to do this that is easily inspectable, loggable and debuggable. Currently I hack together functions in a `((job_i++ < max_jobs)) || wait -n` spaghetti.
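That spaghetti looks roughly like this (a sketch; wait -n needs bash 4.3+, and process_one is a stand-in for the real work):

    max_jobs=4
    for f in *.log; do
      process_one "$f" &                          # hypothetical per-file task
      while (( $(jobs -rp | wc -l) >= max_jobs )); do
        wait -n                                   # block until any one job finishes
      done
    done
    wait                                          # wait for the stragglers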


I think this comment points to an even deeper insight: shell is a crappy programming language but with amazing extensibility.

I would argue that once you pull in jq, you're no longer writing in "shell", you're writing in jq, which is a separate and different language. But that's precisely the point! Look at how effortless it is to (literally) shell out to a slew of other languages from shell.

The power of shell isn't in the scripting language itself, it's in how fluidly it lets you embed snippets of tr, sed, awk, jq, and whatever else you need.

And, critically, these languages callable from shell were not all there when shell was designed. The extension interface of spawning processes and communicating with arguments and pipes is just that powerful. That's where shell shines.


The shell is an ambiguous language that cannot be directly implemented with an LR parser.

Perhaps some of the power emerges from that ambiguity, but it is quite difficult to implement.

This presentation sums up the woes of an implementor:

https://archive.fosdem.org/2018/schedule/event/code_parsing_...


Do you have examples of concurrent use-cases that xargs and parallel don't satisfy? I discovered parallel recently and was blown away by how much it improves things. I've only really used it in basic scenarios so far, just wondering where its limitations are.


Running a bash function with its own private variables in parallel, without having to export it.


How do you use dmenu for your shell script? to launch it? to prompt the user for input while it's running?

Do you have an example of a script you wrote?


Yes, for creating ad-hoc mini-UIs so the user can select an option. Same with fzf, but it's terminal-bound (rather than X-bound).

The scripts are similar to this one:

https://github.com/debxp/dmenu-scripts/blob/master/dmenu-kil...


Thanks, I will definitely use that kill one.


WASM gawk with HTML as user input/output would be more flexible.


Can you give an example of how you'd use rq in this pipeline? I'm not finding any good examples


curl -s "give.me.some/yaml" | rq --input-yaml --output-json | jq '.my.selected[1].field'


New to 'rq'. It's not in active development; any other alternatives? It seems to do a lot of things other than converting structured data to JSON.


Not sure what more it's doing... I'm referring to this rq: https://github.com/dflemstr/rq#format-support-status

It converts to/from the listed formats.

There is also `jc` (written in Python) with the added benefit that it converts output of many common unix utilities to json. So you would not need to parse `ip` for example.

https://github.com/kellyjonbrazil/jc#parsers


Also look at `yq` - https://github.com/mikefarah/yq

This is a wrapper to jq that also supports yaml and other file formats.


> "If you have to write more than 10 lines, then use a real language"

I swear, there should be an HN rule against those. It pollutes every single shell discussion, bringing nothing to it and making it hard for others to discuss the real topic.


There are three numbers in this industry: 0, 1 and infinity. Any other number - especially when stated as a rule, limitation, or law - is highly suspect.


Are you one of those people who take everything literally, so any and all jokes fly far over their heads?

This rule of ten lines or less is clearly meant as an illustrative guideline. Obviously if you have a shell script that has 11 lines, but does what it has to do reliably, nobody will be bothered.

The idea that the rule is trying to convey is "don't write long, complex programs in shell". Arguing about exact numbers or wording here is detracting from the topic at hand.


0, 1, 3 and infinity


Which works not just to preserve the previous statement from internal inconsistency, but also in regards to the incredibly useful Rule of Three (https://en.m.wikipedia.org/wiki/Rule_of_three_(computer_prog...).


> Which works not just to preserve the previous statement from internal inconsistency

It doesn't. You now have 4 numbers.


0, 1, 3, 4 and infinity - there's four numbers in this industry.

Five There's five numbers in this industry 0, 1, 3, 4, 5 and infinity

Wait, I'll come in again


0, 1, 7, and indeterminate, IME.

The 7 being for design. If there are more than 7 boxes on the whiteboard, try again.


ah, log base 2 of 7 is 127 bits (aka 8, y 1).

Unicode character can have more than 7 font boxes associated with one character box and still be a valid determinate character form.


thought the industry was broken down in 8 bit increments (0, 8, 16, 32, 64, 128, etc)

log base 2 of 4 is only 16bits


Good point. I'm not sure why I thought what I'd written above worked... shrug


Think 'use a real line discipline like n 8 1' would make more semantic sense than 'use a real language'.

Unless, the language is APL, in which case, 10 lines is an operating system.


The majority of those comments have significantly more thought put into them (and adhere more closely to the HN guidelines) than this comment does.


Is there a link to HN line discipline criteria? (beyond ASCII ranges 0 through 31)


> What is the Shell paradigm? I would argue that it's line-oriented pipelines.

Which Python can do relatively well, by using the `subprocess` module.

Here is an example including a https://porkmail.org/era/unix/award (useless use of cat): finding all title lines in README.md and uppercasing them with `tr`:

    import subprocess as sp
    # cat README.md (the promised useless use of cat)
    cat = sp.Popen(
        ["cat", "README.md"],
        stdout=sp.PIPE,
    )
    # grep '#' -- keep only the heading lines
    grep = sp.Popen(
        ["grep", "#"],
        stdin=cat.stdout,
        stdout=sp.PIPE,
    )
    # tr '[:lower:]' '[:upper:]' -- uppercase them
    tr = sp.Popen(
        ["tr", "[:lower:]", "[:upper:]"],
        stdin=grep.stdout,
        stderr=sp.PIPE,
        stdout=sp.PIPE,
    )
    out, err = tr.communicate()
    print(out.decode("utf-8"), err.decode("utf-8"))
Is this more complicated than doing it in bash? Certainly. But on the other side of that coin, it's a lot easier in Python to do a complex regular expression (maybe depending on a command line argument) on one of those, using the result in an HTTP request via the `requests` module, packing the results into a diagram rendered as a PNG and sending it via email.

Yes, that is a convoluted example, but it illustrates the point I am trying to make. Everything outlined could probably be done in a bash script, but I am pretty certain it would be much harder, and much more difficult to maintain, than doing this in Python.

Bash is absolutely fine up to a point. And with enough effort, bash can do extremely complex things. But as soon as things get more complex than standard unix tools, I'd rather give up on the comfort of having specialized syntax for pipes and filehandles, and write a few more lines handling those, if that means that I can do the more complex stuff easily using the rich module ecosystem of Python.


> But on the other side of that coin, it's a lot easier in Python to do a complex regular expression

I am not sure I would agree. Sed fills this role quite nicely.

cat README.md | grep '#' | tr '[:lower:]' '[:upper:]' | sed 's/something/something_else/'


Now do that again, but this time the regular expression is controlled by 2 command line params, one which gives it the substitution, the other one is a boolean switch that tells it whether to ignore case. And the script has to give a good error if the substitution isn't a valid regular expression. It should also give me a helptext for its command line options if I ask it with `-h, --h`.

In python I can use `opt/argparse`, and use the error output from `re.compile` to do this.

Of course this is also possible in bash, but how easy is it to code in comparison, and how maintainable is the result?


Man, you chose the wrong username, didn't you? ;-)


Not really, I love bash. I also love perl and vimscript btw. :D


In the example I gave I wouldn't write that in a script file, so I would just alter the command itself.

If I wanted to parse cli args I would use case on the input to mux out the args. I personally prefer writing cli interfaces this way (when using a scripting language).

    while test $# -gt 0; do  
      case "$1" in  
        -f|--flag) shift; FLAG="$1";;  
      esac  
      shift  
    done


grep+tr can be done within sed too (or go with perl for more features and easier portability)


one tool/command per 'concept' was a resource saving thing at one time.

sed is the thing that handles shell regular expressions for shellscripts.


> But on the other side of that coin, it's a lot easier in Python to do a complex regular expression (maybe depending on a command line argument) on one of those, using the result in an HTTP request via the `requests` module, packing the results into a diagram rendered as a PNG and sending it via email.

Doesn't sound so bad. A quick argument parser, a call out to grep or sed, pipe to curl, then to graphviz I guess (I don't really know much about image generation tools though), then compose the mail with a heredoc and run sendmail. Sounds like 10 to 15 lines for a quick and dirty solution.


It's certainly possible, but here comes the fun: how readable/maintainable/extendable is the solution? How well does it handle errors, assist the user? Add checking whether all the programs are installed, and useful error messages, into the mix. Then the API changes slightly and now we need a `jq` between curl and graphviz, and maybe we'd need an option for that case as well, and so on, and so on, ...

Bash scripts have a nasty tendency to grow, sometimes in ways that are disproportional to the bit of extra functionality that is suddenly required. Very quickly, a small quick'n dirty solution can blow up to a compost-heap ... no less dirty, but now instead of a clean-wipe, I'd need a shovel to get through it.

I think my handle speaks for itself as to how much I like bash. But I have had the pleasure of getting handed bash scripts, hundreds of lines long, with the error description being "it no longer works, could you have a look at it?", and the original author both unreachable and apparently having strong feelings against comments.

And in many of these cases, it took me less time to code a clean solution in Python or Go, than it took me to grok what the hell that script was actually doing.


shell was originally tied to job/program processing.


I would agree, with the caveat that Bourne Shell isn't really a programming language, and has to be seen as such to be loved.

Bourne Shell Scripting is literally a bunch of weird backwards compatible hacks around the first command line prompt from 1970. The intent was to preserve the experience of a human at a command prompt, and add extra functionality for automation.

It's basically a high-powered user interface. It emphasizes what the operator wants for productivity, instead of the designer in her CS ivory tower of perfection. You can be insanely productive on a single line, or paste that line into a file for repeatability. So many programmers fail to grasp that programming adds considerations that the power user doesn't care about. The Shell abstracts away all that unnecessary stuff and just lets you get simple things done quickly.


Hard Disagree. Bash programming:

- no standard unit testing

- how do you debug except with printlns? Fail.

- each line usually takes a minimum of 10 minutes to debug unless you've done bash scripting for... ten years

- basic constructs like the arg array are broken once you have special chars and spaces and want to pass those args to other commands. and UNICODE? Ha.

- standard library is nil, you're dependent on a hodgepodge of possibly installed programs

- there is no dependency resolution or auto-install of those programs or libraries or shell scripts. since it is so dependent on binary programs, that's a good thing, but also sucks for bash programmers

- horrid rules on type conversions, horrid syntax, space-significant rules

- as TFA shows, basic error checking and other conventions are horrid, yeah I want a crap 20-line header for everything

- effective bash is a bag of tricks. Bag of tricks programming is shit. You need to do ANYTHING in it for parsing, etc? Copy paste in functions is basically the solution.

- I'm not going to say interpreter errors are worse than C++ errors, but it's certainly not anything good.

Honestly since even effing JAVA added a hashbang ability, I no longer need bash.

Go ahead, write some bash autocompletion scripts in bash. Lord is that awful. Try writing something with a complex options / argument interface and detect/parse errors in the command line. Awful.

Bash is basically software engineering from the 1970s, oh yeah, except take away the word "engineering". Because the language is actively opposed to anything that "engineering" would entail.


> - basic constructs like the arg array are broken once you have special chars and spaces and want to pass those args to other commands. and UNICODE? Ha.

Any example of this? The following works reasonably well for me.

  args=(-a --b 'arg with space' "一 二 三")
  someprog "${args[@]}"


> - how do you debug except with printlns? Fail.

With trace (set -x), which is talked about in TFA.
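i.e. run the script under tracing, or wrap just the suspect region (names made up):

    bash -x ./myscript.sh       # trace the whole run
    # or, inside the script:
    set -x                      # print each command, after expansion, before it runs
    suspect_function "$arg"
    set +x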

By the way, nobody uses exclusively bash. When I worked for a cloud provider, it was basically 30% Python (Ansible), 30% Perl, 5 to 10% bash, and a bit of other languages depending on the client's needs (mostly Java, but also Julia and R).


There are workloads where shell scripts are the so-called right tool for a job. All too often I see people writing scripts in "proper" languages and calling os.system() on every other line. Shell scripts are good for gluing programs together. It's fine to use them for that.


For me it's that once you make the switch to a "proper" language, you realize how much lifting pipelines do when it comes to chaining external binaries together.


Heaping things together is better than letting things stack up/down.


1000% THIS. The trick, of course, is knowing when it's time to abandon shell for something more powerful, but that usually comes with experience.


I wrote such a program, that runs other programs for heavy lifting but also parses text which you can't possibly do in bash.


bootloader, systemd, or init ?

Parsing text isn't anything fancy.

It's just knowing what the marker is for a word/item boundary.

For bash, that marker is defined in IFS


A build system for single file programs.


Eh, this is true, but I don't think it's because of the programming model of bash. I feel like this is conflating the *nix ecosystem with bash. If every programming language was configured by default and had access to standard unix tools with idiomatic bindings, Shell's advantages would be greatly reduced. You still get a scripting language with some neat tricks, but I don't think I would reach for it nearly as often if other things were an option.

And sure sure you can call any process from a language but the assumptions are different. No one wants to call a Java jar that has a dependency on the jq CLI app being available.


This has been tried repeatedly - language idiomatic bindings tend to be clunky compared to (e.g.) a simple | pipeline or a couple of <() io redirections.

Shell is a tool that turns out to be pretty good for some things, particularly composing functionality out of other programs and also doing system configuration/tuning stuff to tailor an environment for other programs. It's also really handy for automating tasks you find yourself repeating.

Programming languages are a tool that are pretty good for other things - making new programs, tricky logic, making the most (or at least more than a shell script launching 1000s of new processes) efficient use of a computer.

Trying to replace one with the other is not really useful - they have different jobs. Learning to use them in conjunction on the other hand... there's a lot of power in that.

By comparison - javascript and html. They don't replace each other - yet they are both computer languages used in the same domain, and both have strengths and weaknesses. They have different jobs. And when you use them in conjunction you get something pretty darn powerful.


I also like Bash - it's a powerful language, especially when combined with a rich ecosystem of external commands that can make your life easier, e.g. GNU Parallel.

Handling binary data can also work in Bash, provided that you just use it as a glue for pipelines between other programs (e.g. feeding video data into ffmpeg).

One time, while working on some computer vision project, I had a need to hack up a video-capture-and-upload program for gathering training data during a certain time of day. It took me about 20 minutes and 50 lines of Bash to setup the whole thing, test it, and be sure it works.


To add to this, it's designed to work in conjunction with small programs. You don't write everything using bash (or whatever shell) built-ins. It will feel like a crappier Perl. If there is some part of your script where you're struggling to use an existing tool (e.g. built-ins, system utils), write your own small program to handle that part of the stream and add it into your pipe. Since shell is a REPL, you get instant feedback and you'll know if it's working properly.

It's also important to learn your system's environment too. This is your "standard library", and it's why POSIX compatibility is important. You will feel shell is limited if you don't learn how to use the system utilities with shell (or if your target system has common utilities missing).

As an example of flexibility, you can use shell and system utilities in combination with CGI and a basic web server to send and receive text messages on an Android phone with termux. Similar to a KDE Connect or Apple's iMessage.


> I feel like those assertions are more socially-founded opinions than technically-backed arguments

You think the complaints about rickety, unintuitive syntax are "socially founded"? I can't think of another language that has so many pointless syntax issues every time I revisit it. I haven't seen a line of Scheme in over a decade, and I'm still fairly sure I could write a simple if condition with less likelihood of getting it wrong than Bash.

I came at it from the other end, writing complex shell scripts for years because of the intuition that python would be overkill. But there was a moment when I realized how irrational this was: shell languages are enough of a garbage fire that Python was trivially the better choice for my scripts the minute flow control enters the picture.


> with its dynamic scope

Bash has dynamic scope with its local variables.

The standard POSIX language has only global variables: one pervasive scope.
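A quick illustration of the dynamic scope of local in bash:

    caller() {
      local greeting="hello from caller"
      callee                      # callee sees caller's local: dynamic scope
    }
    callee() { echo "$greeting"; }

    caller                        # prints: hello from caller
    echo "${greeting-unset}"      # prints: unset -- the local never leaked globally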


Line-oriented pipelines are great and have their place, but I'm still sticking to a high-level general purpose programming language (let's abbreviate this as HGPPL) for scripts longer than 10 lines, for the following reasons:

* I like the HGPPL data structures and the convenient library for manipulating them (in my case this is Clojure, which has a great core library). Bash has indexed and associative arrays.

* Libraries for common data formats are also used in a consistent way in the HGPPL. I don't have to remember a DSL for every data format - e.g. how to use jq when dealing with JSON. Similarly for YAML, XML, and CSVs; I can also do templating for configuration files for nginx and so on. I've seen way too many naive attempts to piece together valid YAML from strings in bash to know it's just not worth doing.

* I don't want to switch programming language from the main application, and I find it helps "break down silos" when everyone can read and contribute to some code. If a team is just sysadmins - sure, make bash the official language and stick to it.

* I can write scripts without repeating myself using namespaces and higher-order functions, which are my choice of paradigm for abstractions; others write cleanly with classes. You can follow best practices and avoid the use of ENV vars, but that requires extra discipline and is hard to enforce on others in the kinds of places where bash is used.


Also, the fact that $() invokes a subparser, which lets you use double quotes in an already double-quoted expression, is something I miss when using Python f-strings.
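For example (sketch, path made up):

    # the command inside $() is parsed fresh, so the inner double quotes
    # don't terminate the outer double-quoted string
    echo "matches: $(grep -c "some phrase" "/tmp/file with spaces.txt")"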


> My basic thesis is that Shell as a programming language---with its dynamic scope, focus on line-oriented text, and pipelines---is simply a different programming paradigm than languages like Perl, Python, whatever.

This argument is essentially the same as "dynamic typing is just a different programming paradigm than static typing, and not intrinsically better or worse" - but to an even greater extent, because bash isn't really typed at all.

To those who think that static (and optional/gradual) typing brings strong benefits with little downsides over dynamic typing and becomes increasingly important as the size of a program increases, bash is simply unacceptable for any non-trivial program.

Other people (like yourself) that think that static typing isn't that important and "it's just a matter of preference" will be fine with an untyped language like bash.

Unfortunately, it's really hard to find concrete, clear evidence that one typing paradigm is better than the other, so we can't really make a good argument for one or the other using science.

However, I can say that you're conflating different traits of shell languages here. You say "dynamic scope, focus on line-oriented text, and pipelines" - but each of those are very different, and you're missing the most contested one (typing). Shell's untypedness is probably the biggest complaint about it, and the line-oriented text paradigm is really contentious, but most people don't care very much about the scoping, and lots of people like the pipelines feature.

A shell language that was statically-typed, with clear scoping rules, non-cryptic syntax, structured data, and pipelines would likely be popular and relatively non-controversial.


Eh, as soon as you have to deal with arrays and hash tables/dicts or something like JSON, bash becomes very painful and hard to read.


I mean they're not that bad.

    declare -A mydict=( [lookma]=initialization )
    mydict[foo]=bar
    echo "${mydict[foo]}"

    list=()
    list+=(foo bar baz)
    echo "${list[0]}"


Now do an associative array containing another associative array.


Easy.

  declare -A outer=(
    [inner]="_inner"
  )
  declare -A _inner=(
    [key]="value"
  )
Access inner elements via a nameref.

  declare -n inner="${outer[inner]}"
  echo "${inner[key]}"
  # value
Currently writing a compiler in Bash built largely on this premise.


That seems really inconvenient to be honest.


Flatten the damn thing and process it relationally. Linear data scans and copying are so fast on modern hardware that it doesn't matter. It's counterintuitive for people to learn that a flattened nested structure with massive duplication still processes faster than that deeply nested beast, because with the latter you have to chase pointers all over the place. Unfortunately that's what people learn at Java schools, and they get stuck with that pointer-chasing paradigm for the rest of their careers.


Then what I need is a tuple in bash


Sometimes you just have to accept a language's limitations.

Try in Python to make a nested defaultdict you can access like the following.

    d = <something>
    d["a"]["b"]["c"]  # --> 42
Can't be done because it's impossible for user code to detect what the last __getitem__ call is and return the default.

Edit: Dang it, I mean arbitrary depth.


    c = defaultdict(lambda: 42)
    b = defaultdict(lambda: c)
    a = defaultdict(lambda: b)
    a["a"]["b"]["c"]  # --> 42


Okay fair, I deserve that. I assumed it was obvious I meant arbitrary depth.

Also d["a"] and d["a"]["b"] aren't 42.


If d["a"]["b"] is 42, then how could d["a"]["b"]["c"] also be 42? What you want doesn't make sense semantically. Normally, we'd expect these two statements to be equivalent

d["a"]["b"]["c"] == (d["a"]["b"])["c"]


I mean, you got it, but it's something a lot of people want. The semantic reason for it is so you can look up an arbitrary path on a dict and, if it's not present, get a default, usually None. It can be done by catching KeyError, but it has to happen on the caller side, which is annoying. I can't make a real nested mapping that returns None if the keys aren't there.

    d = magicdict()
    is42 = d["foo"]["bar"]["baz"]
      # -> You can read any path and get a default if it doesn't exist.

    d["hello"]["world"] = 420 
      # -> You can set any path and d will then contain { "hello": { "world": 420 }
People use things like jmespath to do this but the fundamental issue is that __getitem__ isn't None safe when you want nested dicts. It's a godsend when dealing with JSON.

I feel like we're maybe too in the weeds, I should have just said "now have two expressions in your lambda."


What languages allow such a construct? It seems like it would be super confusing if these two code samples produced different values:

    # One
    a = d["a"]["b"]["c"]
    
    # Two
    a = d["a"]["b"]
    b = a["c"]


The MagicMock class from the unittest.mock package does what you want.

I have a hard time understanding any use case outside of such mocking.


In this case you're chaining discrete lookup operations where it sounds like you really want a composite key. You could easily implement this if you accepted the syntax of it as d["a.b.c"] or d["a", "b", "c"] or d.query("a", "b", "c")

Otherwise I'm not sure of a mainstream language that would let you do a.get(x).get(y) == 42 but a.get(x).get(y).get(z) == 42, unless you resorted to monkey patching the number type, as it implies 42.get(z) == 42, which seems.. silly


Kindred spirit. I particularly love variable variables and exploit them often. Some would call it abuse I guess.


The biggest issue is that error handling is completely broken in POSIX shell scripting (including Bash). Even errexit doesn't work the way any normal language would implement it (one could say it is broken by design).

So if you don't care about error cases everything is fine, but if you do, it gets ugly really fast. And that is the reason why other languages are probably better suited if you want to write something bigger than 10 lines.

However, I have to admit, I don't follow that advice myself...


> The biggest issue is that error handling is completely broken in POSIX shell scripting (including Bash). Even errexit doesn't work the way any normal language would implement it (one could say it is broken by design).

I guess you're referring to http://mywiki.wooledge.org/BashFAQ/105. Got recently hit by these as well.


Yes, and my personal favorite: functions can behave differently depending on whether they are being called from a conditional expression vs. from a normal context. Errexit has no effect if the function is called from a conditional expression.
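A minimal illustration of that gotcha:

    set -e

    f() {
      false                  # under errexit this would normally abort the script
      echo "still here"
    }

    if f; then               # errexit is suppressed inside the condition,
      echo "f succeeded"     # so "still here" prints and f returns 0
    fi

    f                        # called normally: the script exits at 'false'
    echo "never reached"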


I sometimes regret I never learned to "really" write shell scripts. I stumbled across Perl early on, and for anything more complex than canned command invocation(s) or a simple loop, I usually go for Perl.

There is something to be said in favor of the shell being always available, but Perl is almost always available too. FreeBSD does not have it in the base system, but OpenBSD does, and most Linux distros do, too.

But it is fun to connect a couple of simple commands via pipes and create something surprisingly complex. I don't do it all the time, but it happens.


As someone who has used a lot of shell over my career, I do love it as a utility and a programming paradigm.

However, the biggest issues I've had are that the code is really hard to test, error handling in shell isn't robust, and reusability with library-type methods is not easy to organize or debug.

Those are deal breakers for me when it comes to building any kind of non trivial system.


Shell scripting also inspired some choices (especially syntax) of the Toit language (toitlang.org).

Clearly, it's for a different purpose, and there are some things that wouldn't work in a general-purpose language that isn't as focused on line-based string processing, but we are really happy with the things we took from bash.


Aye.. I've been saying for years that shell scripting is how I meditate, and I'm only mostly joking

Shell quoting though, Aieeee...

I find I have to shift gears quite substantially moving from shell or powershell to anything else...

"I'll just pipe the output of this function into.. oh, right"


I've written a lot of shell scripts. I have my own best practices that work for me. I don't like it one bit. I mean, it's enjoyable to write shell scripts, it's just not enjoyable to deal with them long-term.



