Alternative shell with native support for JSON, YAML, TOML, CSV, etc. (github.com/lmorg)
181 points by hnlmorg on Feb 27, 2021 | 88 comments



Every month a cool shell is posted on HN and I get all excited.

Then I remember I need to work on hundreds of corporate servers, half of which I cannot install anything on. So the only common denominator shell is bash.

So bash it is.


Author here:

I would be flattered if people found this invaluable enough to install on fleets of servers. But pragmatically that's not the problem I'm trying to solve. I think managing remote systems is largely a solved problem with tools like Ansible (of which there are many competitors), and the few occasions you want to drop into a shell on a remote machine are occasions when you want the least resistance. So Bash is a sensible choice. But the problem sysadmins/DevOps/developers face these days is that a lot of tooling starts out being run on local machines before being pushed out to CI/CD, Kubernetes, ELK stacks, GitHub/GitLab, and all the other wonderful enterprise tooling I've neglected to mention. And having a shell that glues all these local machine executables together can still give big productivity gains even if it does diverge from Bash.

So that's the inspiration behind this shell. It's not there to replace Bash on the server, but it is designed to make engineers' lives easier when working locally with CLI tooling like Docker, git, Terraform, JSON log files, and so on and so forth.

Of course, you're free to use / not use this in any way you want :)


> I think managing remote systems is largely a solved problem with tools like Ansible...

Leaving a server in the dark and remotely prodding it via Salt/Ansible/Chef/Puppet makes sense in a very narrow and very limited scenario set.

Even if you manage all your systems like that (I generally install a fleet via XCAT and manage some of them via Salt), you probably need to login to that server and poke around to see what went wrong with a user's job or why that server is acting wonky.

OTOH, I applaud you for all the work you did and hope this shell makes lots of people's lives much easier.

Keep up the good work, good luck. :)


That's where tools like Graylog, Fluentd, and Prometheus, or "full service" tools like New Relic et al., come into play. If you're managing one or two servers, sure, it makes sense to jump in and poke around, but beyond a handful of servers that becomes a nightmare, and centralized visualization is going to do wonders for your sanity without having to dive into individual machines on a semi-regular basis.


thoroughly impressed by what i saw in the terminal session video! my inner voice kept saying: that's how it is supposed to be.

totally understand where OP is coming from though. most of the automation tools you cited would often have you drop into the shell because, well, they can't do it all.

this is a great project and i will be following it. even if it doesn't become widespread, i hope some of the ideas will catch and live on.

i don't want to look for binaries or configuration paths. i want inline docs as i type. and i want un-paged docs on my terminal. tired of googling documentation. hashicorp i am looking at you guys. "go doc" is the best thing ever. and yet you guys make us google for documentation all the time! i want the online multi-line completion and variable expansion i saw in the video, etc.

everything i saw looks good. great job!


Hey Author ;-)

If I can find the wherewithal to switch my native shell to one of these shell improvements such as yours, I will. I think the productivity gains could be really nice. Of late, I've been forcing myself to learn idiomatic bash for my shell scripting duties, though.

Perhaps a feature to think about would be the ability to transpile to idiomatic bash, so that people could script in your shell but ship portable bash. Then, once more and more devs adopt your shell, it could replace bash.


Thanks for the feedback and that's an interesting suggestion. The biggest hindrance is that murex has builtin support for structured data types, which would mean any transpiled code might then depend on non-standard tools like `jq`. But it's an interesting enough problem that I might have a play, if only out of academic curiosity.

I've created a Github issue so the suggestion doesn't get forgotten: https://github.com/lmorg/murex/issues/281


Wow I’m honored. I’ll follow the issue.


We could use pseudo-ptys to drive a remote bash through transpilation of a local shell if we wanted to fix that problem. It's not unpossible.


Hm that is an interesting idea and something like it has crossed my mind before. Oil will be well-suited for that strategy since it's bash compatible.

I don't know enough about terminals to judge how feasible or easy it is, but if anyone does, please chime in here:

https://github.com/oilshell/oil/issues/908


How much of the complex processing (e.g. selecting fields in JSON documents) could be done without having to pipe data back to the client (without relying on tools like jq on the server)?


I should read before posting. I came to say this but you said it even better.


Noob question here: why can't you install anything on corporate servers?


Adding to the other replies: because it's not just about you. A large environment will include tens to hundreds of colleagues in the immediate blast radius of any change you make (and possibly thousands to tens of thousands beyond), including ringers from external contractors, and ranging in skill and disposition from ninja sorcerer to middle-of-the-road unimaginative plodders, none of whom particularly wish to deal with someone's idiosyncratic preferences.

If there’s a crisis in which your unexpected novelty is an impediment to resolution, or (worse) a direct contributing factor, it’ll be your head on a pike.

Conversely, if you introduce a tool that takes "only" fifteen minutes to learn, but a thousand people have to learn it, that's roughly six weeks of aggregate human productivity you just appropriated (1,000 × 15 minutes = 250 hours). So it'd better be worth it.

You absolutely can introduce new ideas and utilities and capabilities, but you have to bring everyone along with you, and it has to be a material benefit. Good news, the leadership skills required to do so are not innate, they can be learned.

Some organisations are better at fostering change as a matter of their overall strategy, and anyone whose professional disposition is towards constant reinvention would be well advised to seek them out.


Thank you, that was very clear. Sometimes it is hard to understand simple ideas since I never had real experience in the area.


Corporate servers can have strict rules about what software is allowed to be installed. It all depends on the corporation and what the servers are doing. Financial and health care companies are extremely risk averse. Even if the downside of installing something like murex is vanishingly tiny, the fact that there's any possible downside is enough to give them pause. Even if the new software is genuinely more productive, you may have to make the argument before a committee whose primary incentive is CYA above all else.


8 years ago I was working at a huge manufacturing company in a technical role, although related to a physical product rather than software. I had to ask corporate HQ for permission to install Python, and was denied...


Indeed. The problem isn't that you can't install anything, it's that the goalposts for getting software installation approved are so high that it's faster to reinvent the wheel using the tools you have.


The technical reason is that all accounts I can get access to (which doesn’t include root) are not able to call any package manager and any installed tools are “reverted” to a whitelisted set of tools every day.

The practical reason is that I'd need to convince some board of managers who have "more important shit to do" to change the default set of tools for all 10,000 servers (VMs) for no other apparent gain than "better scripting", which to them will sound like making it easier for hackers to extract sensitive data :-)


Just need to wait for a shell with native support for ssh


What would that look like? I’ve never felt constrained by just calling ssh.


1. When you ssh you get to continue with the benefits and syntax of your current shell. The closest I know to this is eshell which gets some access to remote hosts through tramp rather than ssh.

2. There's a small family of commands that take commands as arguments, eg env, xargs, parallel, find, ssh. Maybe they should get special support, and the shell should allow you to write subshells to be passed to these commands in some first-class way, similar to the way bash magically converts <(...) into a special file name argument with a running process redirected to that file.
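
To illustrate the pattern, a couple of plain-bash sketches (the file names and host here are hypothetical):

    # <(...) is rewritten by bash into a /dev/fd/N path backed by a running process
    diff <(sort left.txt) <(sort right.txt)

    # commands-as-arguments: the inner command travels as an ordinary string argument
    ssh remote 'grep -c ERROR /var/log/syslog'
    find . -name '*.log' -exec gzip {} +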


Ah, I get it.

Right now I just run commands remotely by prefixing ssh (actually I have aliases for the commonly connected machines). So it's ls for a local listing and ssh remote ls for the remote's ls (actually I type remote-name ls).

So things like $! or ^ work as expected since they operate normally, but some quoting is indeed required (e.g. I usually want a pipeline to run entirely, or mostly, remotely).

But as each invocation invokes a fresh shell, the remote connection is stateless. It would be nice if the shell could track things like a remote cd and prefix subsequent commands with `cd`. This is hard to get right; there are so many ways to cd or pushd/popd (especially in `()` subshells or with !:xx kinds of shortcuts) that they are hard to track. Emacs shell tries to keep a shell buffer's cwd in sync with the shell's but easily gets out of sync.
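
For example, the statelessness described above looks roughly like this with plain ssh (host and paths are hypothetical):

    ssh remote 'cd /tmp'          # runs in a throwaway shell, so the cd is lost
    ssh remote 'pwd'              # still prints the login directory
    ssh remote 'cd /tmp && ls'    # state has to be re-established on every invocation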

I have used the tramp interface and it's OK.

I wonder how useful this would really be; I typically run a bunch of things remotely and then have the result pipe back into a local executable. Thus anything that ran the pipeline remotely by default would need some quotology to differentiate the two, which exists now.


`->` used instead of `|`?

How does that not severely destroy any kind of attempt at compatibility with bash?

`-` is often used as parameter to mean "read from stdin" or "output to stdout", and `>` is a redirection.

`cat ... | myprog -> log.txt` would now call myprog with no parameter and try to pipe that to a file, instead of just having myprog read from stdin and write myprog's stdout to log.txt.

A workflow I often use is:

`aws s3 cp s3://somewhere/file.txt -`

That copies a file to stdout (i.e. it displays the file content from S3). I would then routinely just "arrow up" and add a `> file.txt` to save it.

Something I wonder about too is why write a new shell when most of the features could be implemented as just regular programs that you would pipe to?

`file.txt -> cast csv -> [ column_name ]`

Why the need to create a special syntax with [] instead of just writing a `filter` program that reads input optionally decorated by the type information added by `cast`?


Author here:

> `->` used instead of `|`? How does that not severely destroy any kind of attempt at compatibility with bash?

`|` tokens are still supported. I just prefer `->` because I think it's more readable. But that's a personal preference and those who disagree are welcome to use pipe instead.

> `cat ... | myprog -> log.txt` would now call myprog with no parameter and try to pipe that to a file, instead of just having myprog read from stdin and write myprog's stdout to log.txt

Fair point. That issue is somewhat mitigated by the REPL automatically inserting a space at the end of commands and parameters (ie `-$SPACE>` isn't a token) but I agree there is still a risk.
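
Roughly, the distinction being discussed (borrowing the `myprog` example from elsewhere in the thread):

    myprog - > log.txt    # "-" stays a lone parameter; ">" is an ordinary file redirect
    myprog -> log.txt     # "->" is consumed as murex's pipe token instead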

The shell is highly configurable (via the `config` command) and I could easily add a config option to disable `->` tokens in the parser. But I'd wager those who complain about it the most are unlikely to be individuals who would ever convert to using the shell, which would make pandering to those complaints a fool's errand. However, if you are sincere about giving this a try but are put off by `->` tokens, then create a GitHub issue requesting a `config` option to disable said token and I'll gladly implement it.

> Something I wonder about too is why write a new shell when most of the features could be implemented as just regular programs that you would pipe to?

Because I wanted to :)

In fairness though, this evolved into a shell. Originally the syntax was very different and the goal was better log parsing. It was an AWK / Javascript kind of hybrid. But I'm not a massive fan of either language and a shell just organically came from the ashes of the original project.

However if you wanted to make use of murex as a command rather than as a shell then you can still do the following:

    cat example.json | murex -c 'cast json | [ Index ]'

> Why the need to create a special syntax with [] instead of just writing a `filter` program that reads input optionally decorated by the type information added by `cast`?

Because murex pipelines aren't just dumb byte streams like in Bash. They fall back to dumb byte streams when the STDIN end is a typical POSIX command, but at all other times they pass type information too. This means that `[]` doesn't need casting the majority of the time. It means all the other builtin commands can natively handle JSON arrays like plain text lists. And it means you can use TOML, YAML, S-Expressions, CSV files and so on without having to learn different tooling for each, nor remember to manually cast your data correctly.

I've been using this as my primary shell for over 3 years and, bar the odd annoyance with how I've implemented globbing, I actually find it quite intuitive vs bash (bearing in mind I had ~20 years of experience with bash previously, so I definitely wasn't a noob). But ultimately it boils down to personal preference and thus I'd never begrudge someone preferring `|` / bash / AWK / etc over any monstrosity I might build. I just make this project public in case anyone else might like it.


Probably like this: `myprog - > file.txt`


This looks great. As a somewhat Unix novice, can someone explain to me the recurring fear I see where people say it won't be compatible for scripting?

Can you not separate these worlds? I.e. use an interactive shell like this or fish for your day-to-day, and still write bash scripts where needed on your server?

I'm probably missing something though.


As long as your text executables have a valid shebang (pointing to a "classic" shell), you should be fine.


>Can you not separate these worlds? I.e. use an interactive shell like this or fish for your day to day, and still write bash scripts where needed on your server?

You can. Though you miss all kinds of ready-to-paste code, tutorials, tips, plugins, functions, autocompletions, etc. that you can find for established shells...


Author here:

Agreed. This is one of my biggest regrets about not following POSIX syntax more closely. So much so that I did at one point start writing in support for inlining Bash (and other) code into murex as a way of allowing users to make use of the rich content already out there. But I couldn't decide on a clean way of implementing it that was any better than `bash -c 'some code'`, so I ended up removing the feature.

As it is, I'm still half tempted to incorporate a bit more POSIX support to make code sharing easier. But there's a real risk that I could compromise the design goals of my own language in doing so.

I think this is where projects like Oil Shell (https://www.oilshell.org/) have the upper hand: it's gone for Bash compatibility plus new features. I have a lot of respect for what they do as it's a much harder undertaking than what I've done.


Reminds me a bit of PowerShell, which also tried the "hey, let's do shell but with object streams instead of string streams" idea. Unfortunately PowerShell is less than the sum of its parts - it's very powerful and expressive, but ungodly warty and inconsistent, mashing up a grab-bag of good ideas from POSIX, DOS, and C# to produce a complete mess.


And then the first time you actually systematically parse and edit a giant JSON blob with it you start going, "Okay, this is a serious improvement."


I use jq to work with JSON in the terminal on macOS. Anyone used both that could compare the two?


With jq I constantly trip up on quoting issues and the query syntax. It takes me forever to write a jq query since I don't use it that often and end up forgetting its DSL.

With PowerShell, JSON can be imported as an object, which is much easier to work with since it uses the same syntax as any other object.

That said, Bash is much faster to start up than PowerShell is, so for low-latency applications like window manager keybindings, I have no choice but to use jq on Bash.
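
For instance, the kind of keybinding-helper one-liner where bash + jq still wins on startup latency (the window manager command and field names here are only an illustration):

    # print the name of the focused workspace; jq starts in milliseconds, but its filter DSL is easy to forget
    swaymsg -t get_workspaces | jq -r '.[] | select(.focused) | .name'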


I honestly get annoyed how many platforms say for JSON "yeah, just use the native object querying syntax" while for XML they're all about XPath queries. I have to google how to XPath every time I use it.


PowerShell supports both XPath and native object syntax for reading XML.

Unfortunately, by default, writing XML isn't as straightforward. One has to use XmlDocument methods to construct an XML file. The 'Format-XML' cmdlet in the Pscx extension repository [1] does make it possible to use native object syntax to construct XML files, however.

[1] https://github.com/Pscx/Pscx


> Unfortunately, by default, writing XML isn't as straightforward.

Mhm... It can be very simple - you can construct it as a string with a here-string (@"..."@). Add StringBuilder for some speed if there are truly dynamic XMLs with lots of concats. It's way faster than doing the regular C# mumbo jumbo, and you can test validity easily with [xml].

I generated tons of XML like that and it's also very fast. Don't go by the book; rules are there for the mediocre :)


PowerShell - bad syntax and quirky semantics. Looks like on the syntax front they took the bad parts from bash and Perl.


In all fairness, bash is...bash. PowerShell does a really good job of striking a balance between the ease of creating horrible hacked-together monstrosities in bash and the ease of terrifyingly converting between data structures in Python. It obviously doesn't live up to what it could be, but there's definitely something there.


PowerShell also seems to borrow from Tcl, especially the parameter syntax (Tcl calls them options, *sh calls them arguments) and the script block construct (which is simply a curly-brace-quoted string in Tcl).


This looks great. I am not sure about the different handling of globbing, because globbing seems more common in everyday command-line use than complex indexing; double quotes are manageable and they could be the default outside the top-level context. Zsh's "don't expand words but expand wildcards" seems like a nice middle ground.


Author here:

Yeah I'm not entirely sold on my approach to globbing either. From an academic perspective I stand by my decision to make globbing functional. But from a usability perspective it kind of sucks.

The introduction of the `@g` prefix to allow inlined globbing does make things a little less painful but I'm honestly not sold on that solution either.

I'm open to suggestions / changing things up wrt globbing and will take another look at Zsh to see how they've addressed the problem.

Thanks for the feedback.


I agree on the concept being brilliant, especially as it gives easy access to use regex instead of glob, which is rather clumsy to do in normal bash.

It also prevents a lot of accidental globbing.

I would need to use it for a while to decide if the extra syntax outweighs those benefits, but I like the way of thinking!


The worst "accidental glob" is Windows PowerShell, where square brackets in filenames mean you can't pipe from "ls" into other cmdlets, because it interprets the square brackets as filter patterns even when the names were read from the filesystem itself.


This being written in Go is a huge positive. I am curious - if I can alias SSH to force this down the pipe to wherever I'm SSH'ing to, that's a huge adoption benefit.


There are a few tools that work like this including sshuttle - transferring a local copy of the tool to the remote end.

Mosh also kind of works like this, but I can't recall if you technically need the software already installed at the other end. Sshuttle definitely doesn't.



I don't really understand copying one's .vimrc to a remote machine via ssh to run a remote vim, while vim itself has open/save-over-scp support.

It's an extra step, plus typing lag, plus the support tools needed by my vim plugins won't be there, etc.


Do you mind giving an example of what your aliases do?


Hi. Author of Next Generation Shell here. I had roughly the same motivation. The solution differs widely though. I noted that the main difference of NGS from other modern shells is the attention, prioritization and investment in the programming language. The language is now in a good shape and I'm starting to work on the UI. The first UI will be CLI.

https://github.com/ngs-lang/ngs

( For the curious, readme includes references to other alternative shells )


This looks great!

I'm highly interested in the error handling and testing baked into it.

Moving away from the most common shell syntax is a bold move, but not necessarily a bad one; shell scripting needs some fresh air :)


What tasks do you use a shell for that could be performed better by a different type of interactive shell?

If it's more than a few commands, I'm going to be writing a script in Perl or Python. If it's a one-off, I'm going to be using basic commands like cat, sed, cut, grep, sort, jq, curl etc. to investigate.

But what if the problem is simple and unique enough not to warrant writing a proper utility to tackle it, but complex enough that normal bash isn't good enough?


Calling other programs from any language other than a shell is a pain.

Piping the output of one program to another without a "pipe" operator is a pain.

Redirecting stdin/stdout/stderr is also a pain without ">>" or "<<" or ">&2" etc...
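
For instance, the sort of one-liner that is trivial in a shell but tedious to reproduce in a general-purpose language (paths are illustrative):

    # pipes, stderr redirection and append-to-file in a single line
    grep -h ERROR /var/log/app/*.log 2>/dev/null | sort | uniq -c | sort -rn >> error-summary.txt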

When I need to run many commands in a pipeline, and work with their outputs, I'll use a shell script anytime.

Bash is very good at it, but its syntax and simplicity can also be a pain that this project could solve.


Not the OP but maybe I can answer this by describing the history of murex and how the shell evolved:

This shell actually started out as a non-REPL tool (like grep, sort, jq, etc) for better log parsing, where logs were structured data rather than flat text files. You can see some of the earlier influences in my Apache log parsers, also on GitHub (eg https://github.com/lmorg/firesword). But I increasingly wanted to group, count, etc, records, and not just Apache logs but other structured data too. I wrote a few tools that used sqlite3 but I wanted something a little more generic.

So I wrote a tool that could parse structured data and supported smarter iteration syntax (`foreach` to iterate through items in an array rather than just lines in a file, `formap` to iterate through items in a hash/map).

A lot of this stuff could have been written in Perl, but Perl is slow and I'd often want to parse through several million records of data.

Before I knew it, I'd written enough functionality for it to be a shell in its own right. So the next problem I thought to solve was just how crappy bash is from a UI/UX perspective. I mean, I'd had ~20 years of experience using bash so I wasn't scared nor confused by it. But I wanted something that could:

- parse man pages to populate auto-completions

- something that made it easier to template out basic completions

- provided meaningful descriptions alongside the auto-completion suggestions

- had meaningful context sensitive hints on every keystroke

- syntax highlighting

- something that could perform fzf-like searches through the completions without having to install and configure numerous additional tools

- something with sane defaults

Basically the convenience of an IDE in a shell.

I've since added other features like:

- spell checking

- colourised errors (STDERR is written in red so they stand out)

- events (so you can define hotkeys, trigger scripts on file system changes, etc)

- plugins for smarter tmux integration. eg if I hit "F1" inside tmux, murex will open another pane to the side with the man page of the particular command or murex builtin, plus a cheatsheet doc for common usages. Or, if it is a function, it will print the source code of the function. So you can easily see references without leaving the interactive prompt.

The reason this has expanded into a fully fledged scripting language is because naturally you'd want to write dynamic auto-completions, events, the prompt message and other dynamic components in the same language as the interactive prompt. Thus I then added features like

- smarter error handling (eg `try` / `catch`)

- testing framework for debugging, catching regressions, etc

- namespacing so code can be written as smaller functional components without filling up the global namespace with functions that aren't intended to be called directly

- modules so that I can redistribute sane defaults as a base but have an easy way to import and redistribute extended features that might be seen as bloat to other users.

I'd still recommend Perl or Python for solving problems that need more powerful / flexible semantics - even if that problem is still a "write many read once" kind of problem. But this shell does bridge the gap between bash and Python while offering a convenient experience inspired by popular IDEs. It's evolved from every pet peeve I've had while working in the command line (if something bothered me then I'd write some code to fix it).

So it might not be to everyone's tastes but it does solve real world problems for me as it's been my primary shell for several years now. YMMV


> I'm highly interested about the error handling and testing baked into it.

The two big things with error handling are try [0] and trypipe [1], which work much like `-e` and `-o pipefail` (respectively) in bash. The difference is the syntax is structured more like Java's `try` and `catch` blocks, so you can gracefully handle errors without having to check the exit code of each command.
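
For reference, a rough bash sketch of the two behaviours being compared against (the filename is hypothetical):

    set -e            # abort on the first failing command (roughly what a `try` block guards)
    set -o pipefail   # a pipeline fails if any stage fails (roughly what `trypipe` adds)
    grep pattern missing-file.txt | sort   # with both options set, the script aborts here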

The big divergence from your traditional shell is that STDERR is monitored and if the number of bytes written to STDERR is greater than STDOUT then the process will be treated as if it had a non-zero exit code (while not altering the actual exit code just in case `0` is meaningful for that process).

Conditional blocks like `if`, `switch` etc also treat the exit code and STDOUT/STDERR output as part of the boolean conditional, eg

    !if { which foobar } then {
        out "foobar could not be found"
    }

As for testing, I badly need to add more detail to the `test` docs [2] because they explain the syntax but not how nor why you might want to use it.

Essentially there are a few different tests:

- inlined test: this is where you write tests inside your function and attach them to a process via a special "test" pipe. When that function is executed the tests are either ignored (normally), or logged (when test mode is enabled). If the test mode is enabled then the tests attached to the process check that the process is in the expected state (eg the right type of content written to STDOUT et al, exit code, etc). This is a good way of monitoring processes in scripts and helps massively with debugging.

- state: this really should have been called "watch" since that's the crux of what it is, albeit it's not a simple variable watch: you can write shell scripts as part of the state checks, so you have greater flexibility in runtime inspection. This is also attached to processes via a special pipe.

- unit tests: this is much closer to the kind of testing one typically thinks about as a developer. Unit tests, when run, will also output the inlined and state results, and the reporting format can be changed too.

There is also a debug mode which is more verbose. It prints, unsurprisingly, debug messages I've written into the shell. But it also prints error messages that might normally be suppressed (eg in the `if` example above, you wouldn't normally want `which` to print its error message since you only care about whether it passed or failed. But with debug turned on, those error messages would also be printed).

A few other features that help with day to day operation with regards to errors:

- STDERR is printed in red text (this can be optionally turned off if it's not to your tastes)

- Builtins print not just the application raising an error but also the line numbers. This is particularly handy given the functional nature of shells (ie every structure, such as `if` etc, is a builtin command).

- Every script that's loaded, and every function/test/autocomplete/etc stored, also logs where it was loaded from and when (all visible in the `runtime` command), so you can clearly inspect every bit of running code in murex and where it came from.

[0] https://murex.rocks/docs/commands/try.html

[1] https://murex.rocks/docs/commands/trypipe.html

[2] https://murex.rocks/docs/commands/test.html

> Moving away from the most common shell syntax is a bold move, but not necessarily a bad one, shell scripting needs some fresh air :)

Yeah. That happened more by chance than design. Happy to discuss more about this if you wish but man this post is already lengthy hehe


> The difference is the syntax is structured more like Java's `try` and `catch` blocks

Which is a big plus, but the most interesting part to me is the "if" behavior you implemented.

    !if { foobar } then { ... }

Which basically translates to Bash (without the STDOUT/STDERR handling):

    foobar
    if [ $? -ne 0 ]; then ...; fi

This is like 99% of my use cases when I'm writing scripts...

> As for testing, [...] they explain the syntax but not how nor why you might want to use it.

When I saw the examples, I immediately thought of "validation scripts" that would assert that an environment respects some specification (like the configure script from autotools ?)

> but man this post is already lengthy hehe

I enjoyed the details :)


This works the same as your example in bash:

    foobar || { echo fail; }


Can you say more about how stderr is monitored? Are you capturing it with a pipe, or do you have another technique? Have you run into problems with commands that tailor their output to the tty, like clang does? Also how does it work with job control?


> Can you say more about how stderr is monitored? Are you capturing it with a pipe, or do you have another technique?

STDERR is read by murex and then printed to the TTY. It's a massive cheat, but it does mean I can also easily colourise errors in red. However, you can disable this interception and have processes write directly to the TTY with the following command:

    config set proc force-tty true

> Have you run into problems with commands that tailor their output to the tty, like clang does?

Generally no because STDOUT is linked to the TTY (as long as the process isn't piped). But Bash does get upset if launched from within murex. Hence the following function is hardcoded in murex to wrap around `bash`:

    # Wrapper script around GNU bash
    config: set proc force-tty true

    if { $ARGS -> len -> = -1 } then {
        exec bash @{ $ARGS -> @[1..] }

    } else {
        exec bash
    }

> Also how does it work with job control?

It does, but only with external commands (builtins aren't forked processes so adding job control to them is harder - a manual job rather than something given for free from POSIX).

Additionally when you stop a running process (^Z) murex outputs some status information on said process. eg

    ~ » exec sleep 10; echo hello
    ^Z
    STDIN:  0 bytes read / 0 bytes written
    STDOUT: 0 bytes read / 0 bytes written
    STDERR: 0 bytes read / 0 bytes written
    FID 429 has been stopped. Use `fg 429` / `bg 429` to manage the FID or `jobs` or `fid-list` to see a list of processes running on this shell.

    ~ » jobs
    PID     State   Background      Process Parameters
    429     Stopped false   exec    sleep 10

    ~ » fg 429
    hello

You can also write your own shell scripts that get invoked whenever you do ^Z too (eg if you want to output additional information on said PID).

NB A bit of background on the FID vs PID. PID is obviously the UNIX process ID. But since murex builtins aren't forked they're not given a PID yet you might still want to manage that function. So murex has a "function ID" (FID) which is an additional layer of process management. It's a bit of a kludge to get around the shortcomings of not forking murex but it kind of works (the reason murex doesn't fork builtins is itself a shortcut to allow richer pipelines).


Wait it's interposing stdout as well? How do programs like vim work if they aren't connected to the tty?


It's only proxying STDOUT if STDOUT is a pipe. If it isn't (which it wouldn't be if you're running `vim`) then the STDOUT is the TTY as usual.

eg

    vim        # `vim`s STDOUT is a TTY
    vim | cat  # `vim`s STDOUT is a pipe but `cat`s STDOUT is a TTY

In that regard, murex isn't really any different to any other shell. As in, `vim | cat` would produce the same "Warning: Output is not to a terminal" error in bash, zsh or murex. But the difference with murex is that when two processes are piped together, murex acts as a proxy. So it's a bit like this: `vim | murex | cat` (to borrow the 2nd example above).


Looks very promising. Does anybody use it for real work? Writing scripts for something that could be buggy sounds frightening.


I am super interested in these other shells. (I think they could add a lot to my productivity, but I wonder at what cost?) The thing is, I am reluctant to learn/invest the time in them when I know that for portability I will have to stick to bash or the more common shells found on many *nix systems.


I think the crux of it is how much time you spend in the command line on your local machine. If it is substantial then picking a shell you're productive in is little different to picking a graphical desktop environment (knowing that your colleagues could be running a different DE, or even OS). If the majority of your time in the command line is on other peoples machines and/or servers, then you're unlikely to see much, if any, gains picking an alternative shell on your local machine.

With regards to murex specifically, there's nothing stopping it from running on servers but pragmatically that's an unrealistic domain to target. What I'm focusing more on is a better experience for running DevOps / developer tools on an engineers local machine. The cost of that is portability - which is, understandably, a deal breaker for lots of people.


This is pretty cool! I’ve been searching for a nice shell to unify my workflows on Mac, Windows, and Linux (WSL). Choices are few and far between, Elvish is the only thing that has worked out (which is fortunately more than decent for me), and I’ll probably give this one a spin.


Elvish author here, glad that you like it, and thanks for the plug :)

To save people a search, here's Elvish's homepage: https://elv.sh


I have a huge amount of respect for Elvish. There's a lot of parallels in design, goals, etc between what I've built and Elvish. I think we even started our projects around the same time too. And I love the fact that you've already got a community using it. If truth be told, seeing the success of Elvish and the number of people who've found it useful has helped inspire me to keep at the boring stuff of my own shell (like writing documentation) just in case anyone else happens to find pleasure in using my humble project too.

So I want to thank you for all your work on Elvish and for its wider contribution to the community :)


PowerShell also runs on Mac, Windows, and Linux. You pass objects instead of plaintext through the pipeline which provides a rich experience. The full .NET type and ecosystem are also available when you really need a full fledged class for a thing.

There are good choices in 2021.

https://docs.microsoft.com/en-us/powershell/scripting/instal...


murex also passes objects. The difference is the objects are passed as UNIX byte streams but with an additional data-type meta field. So processes written in murex can have PowerShell-like objects but at the same time standard POSIX utilities can still be added to the pipeline with no additional effort.

That latter point is, in my opinion, the biggest hurdle to using PowerShell.

Of course, it ultimately boils down to personal preference and for some, .NET support is a killer feature.


The cross-platform pwsh is actually what inspired me to go on the hunt, and was the first thing I tried. But while it’s good on Windows, using it on POSIX just felt… wrong, somehow. I later discovered uutils/coreutils (a Rust reimplementation of GNU coreutils that works on Windows) and decided it’s easier to bring my POSIX workflow to Windows than the other way around.


WSL on Windows replaces the need for uutils/coreutils. Running an actual distribution of Linux on Windows with WSL provides an actual Linux/POSIX environment.

In my experience, there is a graveyard of tools that I tried which attempted to create native POSIX ports for Windows. Each port came up short in my workflow every time. PowerShell and WSL provide me and the teams I work with an experience that simply works.


The magic thing that PowerShell has is that its objects are full .NET objects, so you can do method calls etc. and not just pass structured data. So you can do stuff like

    Get-Foo | % { $_.bar() }

Which is roughly equivalent to e.g. this Python:

    map(lambda x: x.bar(), getFoo())

Does murex have anything like that? Generally, serializing objects (with methods) to byte streams is not trivial nor without tradeoffs.


Not directly but you can execute code dynamically. eg

    Get-Foo | [ bar ] | source

would be the equivalent murex code.

PowerShell is definitely more sophisticated here because .bar is recognised as a method. Whereas in murex it's just a text field that you're asking the interpreter to execute as code.


Can you say more about that? So for example if I pipe to `grep`, murex will write some additional metadata info to the pipe? Won't that confuse grep?


murex breaks POSIX compliance massively because it acts as a proxy between each command in the pipeline. This means it can forward type information to processes within murex (eg builtins) but read and write byte streams to external commands.

As mentioned in reply to one of your other comments, this causes a few issues (eg forking). But processes are still run in parallel like with a traditional shell, and 99% of the time this massive cheat is transparent to both the running processes and the users.

This cheat does allow for some additional features though, like

- colourisation of STDERR (so it stands out)

- STDERR byte count used to judge if a process has failed

(possibly a few others I've forgotten but have to dash now for a lockdown Zoom party....sorry)


PowerShell maps plaintext to objects without issue. No need to drop down into bytestreams.

In the following example, 'choco' (Chocolatey) outputs a list of outdated packages in a consistent format (--limit-output). The text output is piped to the ConvertFrom-CSV PowerShell CmdLet, which maps the text output from choco into a PowerShell object.

    choco outdated --limit-output | ConvertFrom-Csv -Delimiter '|' -Header 'name','version','v-new','pin'

For what it's worth, STDERR is already colorized in PowerShell too.


PowerShell is also unnecessarily verbose for me and wasn't even available on non-Windows systems until midway through murex's life.

Plus it's easy to cherry pick specific features between different solutions and argue they're equivalent if you're going to ignore all the other aspects where they differ. For example a big part of murex is the REPL environment:

- murex parses man pages for smarter auto-completions

- murex integrates well with `tmux` for those who want a richer tiled TUI

- murex supports vim keys

- murex gives context-sensitive hints upon every keystroke

- murex has an events system baked in. So you can assign your own shortcut keys or run scripts upon filesystem changes

- murex supports regex searches through auto-completions, which makes navigating directories quicker, makes finding application names in `kill` quicker, etc.

I'm not saying any of this to be critical against PowerShell - it's a sophisticated piece of engineering and a great solution for a great many people. But the differences between what I've built and PS are far greater than the properties they share.


I wonder if bash could appropriate some of these good ideas.


To the best of my understanding, probably no.

* It's very hard to make changes to bash without breaking anything given the existing amount of code written in bash.

* If I remember correctly, bash does not have garbage collection, and something tells me it will not be easy to agree on or to add. Hello, structured data...


It could be possible to write these extensions as bash plugin libraries. I've long thought about adding such features this way, but haven't had enough time or pressure to really need it yet.


What happens if two communicating programs can 'speak' multiple formats: how do they choose one to use?

Could we leverage http content negotiation?


Much like any programming language that supports generics: a function will operate upon a variable based on the data type of that variable.

With murex, data types are passed down the pipeline (albeit out of band). The data can be recast, reformatted, etc, but by default processes that support multiple formats honour the data type they receive and thus output in the same format.

The Content-Type HTTP header is used to determine the data type when the pipeline originates from the web, so there is some awareness there. Likewise, file extensions are used if content originates from a file system.


Does murex use threads? If so, how does it handle the fork/thread interaction?


How is it that we see some applications that languages like Rust (and C/C++) seem to be tailored for, but more and more cases pop up where the author has chosen Go?

Is it a quality vs speed of dev thing? Or am I just wrong in assuming anything system-related should be written in C/C++/Rust?


Author here:

murex pre-dates Rust 1.0 and by the time Rust was stable there was already considerable code committed to murex. I have toyed with the idea of rewriting in Rust but every time I weigh up the pros and cons it becomes obvious the rewrite is just an academic exercise rather than a worthwhile venture (why rewrite something that already works?)

The advantage of Go over C++ is that you get memory safety and cross compilation out of the box. Plus Go is a much faster language (for me -- YMMV) to prototype in; which matters when I have a day job and two young kids :D


The author chooses whatever they're familiar with and like, and Rust has a larger barrier to learning, so fewer people use it than Go.

Not some huge mystery...

>Or am I just wrong in assuming anything system-related should be written in C/C++/Rust?

For one, Go has a GC, so memory safety wise it should be as safe as Rust.

Second "systems" is too broad a term to be very meaningful. There are excellent systems programs (databases, search engines, etc) written in Java for example...


Go makes everything mutable with no synchronization, so concurrent slice appends can be lost and map reads and writes can panic. Rust makes you decide which reference is mutable, which takes work but it’s important for the same reasons as static typing.


While that's true, it's also true that just because the reference is mutable it doesn't mean the developer is stupid enough to pass mutable references between threads without mutexes to prevent the aforementioned race conditions from happening in the first place. Plus Go also has channels which are a thread safe way of message passing.

I'm not taking anything away from Rust, but a great many of us were writing multithreaded code for years before languages like Rust made it safe. These aren't new problems I've had to solve. But I'm also not complacent about the risks of multithreaded code, and in fact murex's test suite does check for race conditions.



