It is lovely to see someone sharing software work they did just because they find the subject interesting and fun, not to make a buck or pad a portfolio. This has some good hacker vibes, harder and harder to find these days. Bravo.
“A process cannot be understood by stopping it. Understanding must move with the flow of the process, must join it and flow with it.” - Frank Herbert in his intro to distributed systems programming class
All the best to this project, but since new cool shells keep showing up on the frontpage here, I am wondering if anybody is using an alternative shell as a daily driver and getting considerably more out of it?
Is fish still considered alternative? The completions and suggestions still make it my go-to. Thankfully it’s also started to bring its syntax in line with bash (e.g., by accepting `&&` in addition to `; and`).
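For anyone who hasn’t looked at fish lately, a quick sketch of that convergence (the commands are just placeholders):

    # fish's traditional combiner
    make; and make install
    # fish 3.0+ also accepts the bash-style form
    make && make install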
I have also been using fish for years. But seeing how nice Dune's built-in scripting is makes me want Dune features in fish. Or for Dune to get the autocomplete that fish has.
> if anybody is using an alternative shell as a daily driver and getting considerably more out of it?
I wrote my own. Originally motivated by the fact that bash doesn't handle `$VARWITHSPACES` correctly (if I want expansion, I can write `$@VARWITHSPACES`), but I get a surprising amount of mileage out of random features shoved into corners of the substitution syntax like:
(`$q''` produces a string, `$@m/regex/pat/` finds matches in it and expands `pat` for each with `$0`, `$1`, ... set to the captures, and `$[expr]` evaluates match expressions.)
I've been using murex as my primary shell for years.
Why do I use it? Because Bash (et al) syntax sucked for solving common modern-day problems:

- iterating through anything that isn't just a list of words
- the dumb way variables are expanded (resulting in problems like spaces in file names breaking scripts)
- working with tables, JSON, or other structured data requiring inlining another language (awk, sed, jq, Perl, etc.)

...and so on and so forth.
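To make the word-splitting complaint concrete, a minimal POSIX-shell example (file name made up):

    f="my file.txt"
    ls $f      # word-splits: ls receives two arguments, "my" and "file.txt"
    ls "$f"    # correct, but only because you remembered the quotes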
So I created my own shell to fix these problems.
And while I was at it, I took influences from IDEs to build out a better interactive terminal too.
(I should probably write a blog post about some of the design decisions behind my shell)
I've been using Xonsh for the past 3 years. I really can't go back to Bash.
I write lots of small scripts and utilities for myself, perhaps on average one per week. Random stuff, like: slicing JSON data, re-encoding home videos with ffmpeg, querying APIs with the requests module, loops, dictionary comprehensions, etc.
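A rough sketch of what one of those scripts looks like in xonsh (the directory and ffmpeg flags here are my own invention; `@()` interpolates Python values into a command, and `![...]` runs it as a subprocess):

    # xonsh: Python control flow wrapped around subprocess calls
    from pathlib import Path
    for f in Path('videos').glob('*.avi'):
        ![ffmpeg -y -i @(f) -c:v libx264 @(f.with_suffix('.mp4'))]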
If some script needs to be deployed to production, it's fairly easy to rewrite it in plain Python.
I've resisted installing it on our production systems, but I'm thinking more and more of using xxh for this instead.
Because if the bulk of your problem is getting solved with external executables, like the ffmpeg example given, then it makes more sense to call them from a scripting language that is designed around forking other processes. Python may be far more powerful than your average shell, but it sucks for writing shell scripts.
The thing is, bash kinda sucks at managing subprocesses the moment you start doing anything even slightly more interesting than "wait for the single launched subprocess/the last process in the pipe" (in fact, all UNIXes suck at it because of their fragile PID management, but let's talk about the shell in particular). For example, imagine you want to run two processes in parallel and wait until both of them end; and you also want to be able to press Ctrl+C and interrupt them both. The cleanest way I could find after much googling is:
    (
        trap 'kill 0' SIGINT    # on Ctrl+C, signal every process in the subshell's process group
        worker A &
        worker B &
        wait                    # block until both background workers exit
    )
It's a very delicate pattern because, for example, the use of a subshell here is critical: without it, SIGINT doesn't get delivered to the "worker" processes.
One would think such a useful workflow ought to have better built-in support, but apparently not: people reinvent it all the time with `$!` and manual PID files, and those things are very unreliable.
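For reference, a sketch of that reinvented `$!` pattern; it mostly works, but races if a PID gets recycled before the kill fires:

    worker A & pid_a=$!
    worker B & pid_b=$!
    trap 'kill "$pid_a" "$pid_b" 2>/dev/null' INT
    wait "$pid_a" "$pid_b"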
That’s a very specific use case you have there, though. I can’t think of any occasion when I’ve actually wanted to do that in the 20 years I’ve been a sysadmin for *nix. On the few occasions I have had multiple daemons I wanted to start and be able to terminate, it made more sense to create an init file (of varying formats over the years) or a Docker container. Managing multiple processes with a single signal is a bit of a UNIX anti-pattern, and thus creating a sub-shell that is parent to both processes does feel more idiomatic to how UNIX (never mind shells) should operate. But even that feels wrong in terms of how processes should be managed. Hence the init/Docker suggestions.
As for PID files, I have no love for them either. But in fairness, their role isn’t to manage a persistent shell but rather to manage a persistent service being queried from a non-persistent shell session. In a way, they’re like a RESTful API before REST was a thing. So they’re not designed for the role you’re describing: a persistent shell session managing two long-running but not persistent processes.
> That’s a very specific use case you have there though
Upload two large files to two different machines in parallel, starting at roughly the same time, to compare the throughput. Or any "run in parallel and measure the difference" scenario. Or parallelizing any work, especially network/distributed work, a la make/xargs. Heck, init used to do exactly this: run a getty for each tty in parallel, indefinitely restarting them.
Sure, you can do all that from two/three/four/... different xterms, stopping them manually too, and that's what I usually do, but... it's tedious and trivial stuff, perfect for automating. If you can automate it, that is.
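For the one-shot cases, something like `xargs -P` gets most of the way there, since Ctrl+C hits the whole foreground process group (host names made up):

    printf '%s\n' hostA hostB | xargs -P 2 -I{} scp bigfile {}:/tmp/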
As for services/daemons, I agree completely; in fact, my other gripe about UNIX process management is how easy it is to break out of a process group. Thankfully, Docker gives you confidence that when you stop your service, no runaway (great-great-...-grand-)child process will survive. In olden days, however, lots of things insisted on daemonizing themselves and fighting against any way to control or even observe them: breaking out of process groups, detaching from terminal sessions, double- and triple-forking, closing all file descriptors to defeat the self-pipe trick, etc. Ugh. If you really need some child process to outlive you (do you really? please consider again), then the only way to do that ought to be "service start" or "at now" or some other kind of "ask someone above you in the process hierarchy to launch that process".
I guess it depends on what you mean by 'considerably more'. I use Fish because you can quickly change options and view info in the config GUI, and I like its completions. I still script in Bash though.
It's probably worth noting that Dune is already the name of the OCaml build system.
Other than that, this is a really interesting and creative project. I'm curious what it would be like to use in a "real" session; I might have to download it and try it.
Does this support pipes? It's not clear to me what runs in the same process and what forks a new one. Is there a way to run some shell code in a sub-process (parentheses in conventional shell)? Asynchronously (& in conventional shell)?
Currently, I use abduco for 'isolated persistent sessions', and then if I need a layout on top of that, I resume the abduco instances inside panes in a tmux layout.
I could quite possibly replace tmux with dvtm, but I'd been doing screen, then screen+dtach, then tmux+dtach before that, so tmux+abduco was an incremental improvement that kept most of my workflow intact.
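For anyone unfamiliar with it, the abduco half of that workflow looks roughly like this (session name is illustrative):

    abduco -c build make -j8    # create a session running make; detach with ^\
    abduco                      # list existing sessions
    abduco -a build             # re-attach, e.g. from inside a tmux pane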