Why Create a New Unix Shell? (oilshell.org)
408 points by shadeless on Jan 31, 2018 | 292 comments



Lots of overlap in design goals with fish, except fish also places a premium on interactive use (which means a friendlier in-REPL experience, but a balancing act when it comes to features). Fish's autocompletions are incredible, too.

Best of luck to them. Another interesting shell to check out is elvish, lots of new ideas there (even if awkward to use).

(Disclosure: I’m one of the core fish devs/maintainers. Edit: The entire team is awesome and the others deserve virtually all the credit!)


(author here) Hm I've tried fish, and it seems very nice for interactive use.

However, I don't see it being used AT ALL for the cloud/Linux use case. Those are the cases where you tend to get 1000+ lines of shell scripts.

For example, I mention Kubernetes/Docker/Chef, and I've never seen fish in that space.

I also don't know of any Linux distro that uses fish as its foundation -- they all appear to use a POSIX shell like bash/dash/busybox ash. fish is "on top".

See "Success with Aboriginal, Alpine, and Debian" (http://www.oilshell.org/blog/2018/01/15.html) -- these distros are built with thousands of lines of shell scripts.

Either 1) I don't know about such usage, 2) people don't know that fish can be used this way, or 3) there is some problem with using fish this way.

I link to this post in the FAQ, which I think is a lot closer to Oil:

https://ilya-sher.org/2017/07/07/why-next-generation-shell/

It's basically the "devops" use case. (And as I mention there, the main difference between Oil and NGS is that Oil is compatible with bash / has an upgrade path from it.)


It's an entrenched network effect. There's a lot of existing scripts written targeting POSIX shells, and there's an incentive to have all your distro maintenance scripts written in one language if possible.

If you want to switch your existing distro scripts to fish, you need to rewrite a lot of stuff. If you want to start a new distro, "it's written in fish!" isn't a terribly compelling selling point, since I don't think anyone is actively picking distros based on their tooling language.

For the general devops case, there's a lot of existing example code out there for bash scripts, and far less for fish. "How do I do X in bash?" is probably going to get you a decent example on Stack Overflow. Devops (I feel) cares a bit more about the installed-everywhere thing, and fish isn't a standard part of most distros.

Oil having a goal of being completely sh/bash compatible gives it much greater odds of something like Debian switching to it, since the switch wouldn't carry huge technical debt along with it.


The fact that there is the POSIX standard and multiple shells that meet it is also a big part of it. If you write in a POSIX-compatible shell language, you know it can run anywhere. Fish, for example, has no standard and only one implementation.

Any new shell that hopes to compete in the systems space will have to be POSIX compliant. At least until there is a new widely adopted shell standard.


> I don't think anyone is actively picking distros based on their tooling language.

I would use the crap out of a Linux distribution built in python.


Gentoo is basically this. I learned Python by writing .ebuilds (their package format is a short Python script that downloads and builds whatever software you're installing).


Although emerge is written in Python, AFAIK ebuilds are bash scripts.


Ah right you are, thanks - I remember learning Python and switching to Gentoo about the same time, but as David Hume warned us, it's hard to infer causality from proximity.



Not Python, but GuixSD is built around (Guile) Scheme, from the package manager (Guix) to the init system (shepherd).


Yeah, I can actually imagine that: like using an IDE based on the language it was written in to extend and tweak it more easily, or a window manager, etc.

Actually, now that you mention it, barring possible performance issues I'd also like a Python distro.


Early Smalltalk and Lisp systems were those languages all the way down. Using Emacs with Common Lisp or using Pharo Smalltalk can give you a feel for this, though it's not exactly the same thing as a Symbolics Lisp machine or the Xerox Alto. Alan Kay has a presentation where he shows that the entire OS, word processor, networking stack, IDE, paint programs, etc. was an insanely small amount of code, and everything was user-configurable. I really wish that had caught on more.

I enjoy Linux (although it ain't perfect), but using Windows and trying to interactively do anything with the OS is an exercise in pain and frustration. If someone could build a modern machine with an OS like that and a few basic apps (web browser, etc.), I would be quite happy.


I don't know if it's just me, but I don't feel like I'm using a computer when using Windows. After using OpenBSD exclusively for weeks (my laptop broke, so I had to repurpose an old netbook with OpenBSD), I felt bored and limited in Windows. I tried PowerShell, but (mostly because I don't know much about it) it didn't give me the same feeling.

The thing that is working for me is to explore VB.net. It is really an interesting language with many historical features. I've noticed that all the languages I use (Python, C#, Java, Go, etc.) distracted me from the essence of programming: when I start programming in them, I immediately think about patterns, architecture, unit tests, integration tests, etc.

In VB.net I feel like I'm slowly reexperiencing the joy I've felt in my earlier days of programming.


"In VB.net I feel like I'm slowly reexperiencing the joy I've felt in my earlier days of programming."

Ruby was the first language I experienced that got out of the way, so I felt like I could think about what I was trying to achieve, rather than being distracted by housekeeping. Of the other languages I've tried, I've only gotten close to the same feeling with JavaScript, Go and Python.

I suspect that a lot of it is to do with your own mind-set at the time: I first learned Python as an applications language, with all of the add-on tools and baggage, and didn't like it much, but recently I wrote a one-file, no-dependency thing to do exactly one job in Python, using just the standard library, and it was a joy.

The difference was, I think, that I did not feel the internal pressure to meet best practices, and could just write the code. For some languages, the stuff that you are supposed to do and the tools that you are supposed to use are a heavy burden.


you can turn windows into an adult computer by using cygwin (or maybe mingw), autohotkey, and some kind of window manager.

the illusion is almost real, because sometimes i also run linux in vmware/docker/... and when i am not paying attention i easily forget which os i am on.

the windows subsystem for linux is quite limiting, as you can't mix windows applications with linux ones (for example, running a visual studio build from bash).

powershell would probably be worth learning if i had to, but i don't. it may be a godsend to windows sysadmins though, as classic cmd is a 40-year-old joke.


You can run Windows programs from bash (I mean the WSL bash), and you can also create wrapper .bat files to use Linux tools/programs from Windows.


True, but it's a hack...not something fluid/elegant/seamless or efficient. I'm not a software engineer. I'm just a traditional engineer that wants an OS written for a power user that allows me to do my job more efficiently. Dealing with a multitude of wrappers is worth it if you're writing production software of course, but not if you write a lot of one off scripts for day to day work.


i see ( https://docs.microsoft.com/en-us/windows/wsl/interop )

that was not possible when i tried it, thanks for pointing it out


I've stated on here numerous times that PowerShell falls short because pipes aren't text-based as in Unix, but are really more like method chaining in .NET. Most of the time I have the output of one command but can't easily reason about passing it to the next command.


I feel you. This guy is working on an interesting project called the Reform laptop [1]. You might have seen this already. He will eventually have it run his own lisp (called "Interim") all the way down.

[1] http://mntmn.com/reform/


Sounds neat, I'll check this out.


> an IDE based on the language it was written in

I recommend Emacs. Maybe not as a daily driver (it's a matter of preference), but for the experience. I recommend at least a month with it; make sure to write some original Lisp code for your customisations. (You WILL end up customizing it; the defaults are crap.)

> or a window manager

I recommend Awesome. The core is in C, but that's basically the low-level stuff; the actual WM is written in Lua. If you start with an empty init file, you'll have a moderately sized side project on your hands to put something usable together. But if you start from the stock rc, there's a world of endless tweaking and customisation waiting for you...

Watch out, both Emacs and Awesome are rabbit holes.


> the defaults are crap

As someone who has recently joined the 1k LOC Emacs configuration file club, I have to disagree. The default settings are a well thought out starting point that needs minimal tweaking to get to exactly where you want. I try out a lot of packages on ELPA, and many that purport to provide alternatives for defaults end up being inferior to using the defaults with some small tweaks. The convenient thing is that you can customize the third-party packages in the same way and just use the parts you like from them.

The other good thing about Emacs is that you don't need a separate window manager[1], and you don't need to use a separate command shell (Emacs comes with eshell), or to use a terminal emulator for SSHing into remote machines (eshell over TRAMP is absolutely amazing), or a file manager (Emacs comes with dired, which works over TRAMP which is amazing), or a...

[1] https://github.com/ch11ng/exwm


My favourite Emacs goodie to highlight is magit because it really changed the way I use git. All of a sudden, it's incredibly simple to (un-)stage individual hunks, etc. And of course it works over TRAMP, too :)

org-mode is a good one, too, but I feel like I've barely scratched the surface there.


The hunk feature of magit is great; you can easily stage/unstage not only at the hunk granularity but also arbitrary lines using the selection. Being able to easily do that was something I really missed when I moved from darcs to git.

When it comes to magit and TRAMP, definitely the best part is with-editor[1]. This is one of those ingenious "why didn't I think of this?" hacks to let you use your local Emacs as EDITOR on remote machines without having to do SSH port forwarding for emacsclient.

[1] https://github.com/magit/with-editor


eshell over TRAMP? how does that work?


Any pathname in eshell can be a TRAMP pathname on a remote machine. So you just cd to a remote directory and eshell will run commands on the remote machine. You can redirect remote command output to a local file, local command output to a remote file, cp from remote pathname on machine A to remote pathname on machine B, etc.
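For illustration, a hypothetical session (the user and host names are made up); after the cd, the grep runs on the remote machine:

    ~ $ cd /ssh:alice@build-host:/var/log
    /ssh:alice@build-host:/var/log $ grep -c error syslog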


At some point I realized that I had wasted months in those rabbit holes customizing and tweaking shell configs, window managers and Emacs. Any gain in productivity (which may not even be there, as it is rather subjective) was not able to offset that. So I stopped. These days I just use GNOME Shell with a single extension to fix something I cannot get used to at all, a simple bash config to set up the environment for development with no customization of shell behavior, and a programming editor that mostly matches my taste. On a new machine I can finish all customization within 5 minutes and forget about it.

I even started to suspect that easy customization of a tool can be a bad sign. It is almost like the developers delegate all usability issues to the end user, making the default case really bad.



Wait, what? You recommend Emacs but not StumpWM?

I've used -- for longer periods of time, of course -- Awesome, Xmonad, i3, and dwm; while some of them are better at some things than StumpWM, StumpWM is the only one that provides an Emacs-like, live-hackable experience. So these days, I strongly prefer it.


I've been hearing about Emacs X Window Manager but haven't given it a shot yet https://github.com/ch11ng/exwm


IIRC, Python performance is better than bash's.


Not in my tests involving repeated invocation of binaries, iirc.


Pardus existed. Many distros have bits of python used here and there.


Back when I started programming, I wrote 1000+ lines of shell scripts.

Now, I quite seriously believe that a 1000-line shell script only exists out of error. I still occasionally end up writing dense 200-300-line shell scripts, but not without feeling very dirty along the way. Either split into small, simple shell scripts (which is fine), or use a different language.

In the cross-platform build pipeline at work, I keep strong discipline when it comes to scripts: they must be short (<=100 lines), and if their complexity exceeds a certain threshold (parsing more than a "grep | cut" here and there, total program states exceeding some low number), then a shell script is no longer acceptable regardless of length. And, well, it's not safe to assume the presence of anything more than a shell.

If you are writing and dealing with 1000+ lines of shell scripts, then experience tells me that you are shooting yourself in the foot. With a gatling gun.

(I used fish, btw. The interactive experience was nice, but the syntax just felt different without much gain, which was frustrating to someone who often writes inline one-liners. Unlearning bash-isms is not a liberty I can afford, as I need to be proficient when I SSH into a machine I do not own or control. I can't force the entire company to install fish on all our lab servers, nor is it okay to install a shell on another person's dev machine just because we need to cooperate.)


Bash is flat-out not a scripting language. It is a command language. It does not support typed variables, or named parameters, or any number of basic scripting language features. Even its conditional test operator, '[', is technically an external program rather than part of the language. It's sufficient as a 'glue layer', and the Unix toolchain is nice, but it provides next to nothing in the way of abstraction, and that's liable to become a problem closer to the 1000-character mark than 1000 lines.

My rough heuristic is, "no more than ten lines, nor more than two variables." Yes, that's short almost to the point of absurdity. The only good thing that one can say about Bash as a scripting language is that it's better than csh. Bash is taken seriously because of its longevity and ubiquity, but it's fundamentally limited in what it can express, and it is quite trivial to exceed those limitations.
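(To make the "external program" point concrete: on a typical Linux system the test operator exists both as a builtin and as a standalone binary, which bash's type builtin will show; exact output varies by system.)

    $ type -a [
    [ is a shell builtin
    [ is /usr/bin/[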


Well, Perl back in the bad old days (4 and earlier) was very similar in terms of limited "minimum standard" facilities, but tons of people used it as a general purpose scripting language and hacked together some really huge systems.

It's definitely easier/safer to write longer programs in other languages, but with proper discipline it's also possible to write really big, robust programs in Bash/shells--people have done this, and some of that code is probably still running today.


> and some of that code is probably still running today.

A lot of that code is still being written today, which is one of the things that OP is trying to change.


This one gets it. What I want from an interactive shell is not what I want from a programming language.


I wondered this while reading the article. Do we really need the same language for commands & for programming? Portability is nice, but it seems like the requirements are a bit different for one-liners vs scripts you need to maintain.


I agree that the age of 1000+ lines of shell script should be over; just replace the shebang and use whatever other language you like instead. Unless you need something dependency-free, compilation-free, and portable, in which case God help you, because sh it is.

It's funny though, after decades of shying away from them I've now gone full circle and embraced Makefiles once more. Perhaps it's the cleanness of the bmake/pmake extensions as compared to those of GNU Make, but the determinism, zero dependencies (no CMake or meson and its python baggage), automatic parallelization, and strict error handling call to me each time I have to write mission critical "glue code" that must. just. work.


> dependency-free, compilation-free, and portable, then God help you because sh it is.

That's a fair point. Nothing beats the shell for ubiquity and universal support... in theory.

In practice, even people who write "portable shell scripts" (almost) never really write portable shell scripts.

If you learn the POSIX sh standard like the back of your hand, and stick only to the features in it, never using bashisms or equivalents, most shell scripts still rely on external programs (even if only grep/sed/etc.) to do their heavy lifting.

And that's where you get into trouble. Because compared to the variability in behavior of even "standard" ultra-common programs, the variability in behaviors between shell syntaxes/POSIX-vs-non-POSIX shells is tiny. The instant your code invokes an external program, no matter how ubiquitous that program is, you have to worry about:

- People screwing with PATH and changing the program you get.

- People screwing with variables that affect the external program's behavior: LD_LIBRARY_PATH for dynamic executables, or language-specific globals for programs' runtime behavior (CLASSPATH, PERL5LIB, PYTHONPATH, etc.).

- External program "editions" (e.g. non-GNU sed vs GNU sed). Good luck using PCRE with a non-GNU "grep"! Oh, and if you're sticking to POSIX-only shell semantics (no bashisms), you don't even get full BREs in most places you need them in the shell; you're stuck with limited BRE or globbing, which makes editing 100-line PCRE regexes feel like a dream.

- External program per-version differences.

- External program configuration.

- Etc.

Dealing with those issues is where "self-test"/"commandline program feature probe" things like autotools really shine. Raw shellscripts, though, very seldom live up to the "ubiquity" promise.
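A minimal hand-rolled probe in POSIX sh, in the spirit of what autotools automates (the variable name and the ERE fallback are illustrative):

    # Probe whether the local grep supports -P (PCRE, a GNU extension)
    # before relying on it; otherwise fall back to plain ERE, which
    # means the patterns themselves must avoid PCRE-only syntax.
    if printf 'x\n' | grep -P 'x' >/dev/null 2>&1; then
        GREP='grep -P'
    else
        GREP='grep -E'
    fi
    $GREP 'foo|bar' /etc/passwd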


I would always pick a Makefile over a script when it's appropriate - why did you shy away from them?


Probably because of how much misinformation there is about them and how prevalent bad Makefile code is, both in tutorials and in the real world. They're so fundamentally simple - it's just a directed graph - that they're easy to get wrong (mainly because they might still work... until you need to change something). The majority of Makefiles I encounter will quickly break down under high levels of parallelization due to mistakes in the DAG (try -j64 on random "huge", "high-quality" open source projects). Add to that the fact that most projects use autoconf to generate Makefiles when they don't need to, and it was just a mess.

After learning and using many other "modern" Makefile replacements of various complexities (ninja, cmake, scons, tup, meson, bazel, and others) I realized if you're not using exactly what the tool was designed for, you end up recreating a Makefile (e.g.: meson is awesome for cross-platform C++ builds, but if you try to use it to build an environment composed of output from various processes that aren't C++ compilers, writing a Makefile is easier). CMake, apart from also being too C/C++-specific, would be nice if it didn't require a million different files and didn't have such a god awful language.

The only ones I liked as general-purpose build systems were ninja, which is too restrictive to code in (it requires duplication of code by design and isn't meant to be written by hand, though I still do from time to time), and tup, but tup is built around FUSE (instead of kqueue/inotify/FindFirstChangeNotification) and so is a no-go for anything serious.

Once I embraced Makefiles, it turned out that most things traditionally built with shell scripts should actually be Makefiles, for determinism. For example, I just used bmake to take a clean FreeBSD AMI and turn it into the environment I need to perform some task every n intervals (the task itself was turned into a Makefile rule), in place of where Puppet and other tools would normally have been used but would have been overkill for my needs.

The only drawback to Makefiles that I haven't found a clean solution to is when you need to maintain state within a rule (without poisoning the global environment). The only solutions I can see are a) using a file to store state instead of a variable, which is just stupid, b) calling a separate shell script (with `set -e; set -x` to try and mimic Make behavior), which sort of defeats the point of Make, or c) multiline rules with ;\ everywhere, which is hideous and error-prone but works (though it makes debugging a nightmare, as the rule executes as one command, again defeating some of the benefits of Make).
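For the record, a sketch of option (c): the ;\ continuations keep the whole recipe in a single shell invocation, so a variable set in one step survives to the next (the rule and names are made up, and in a real Makefile the recipe lines must start with a tab):

    release:
            stamp=$$(date +%Y%m%d) ;\
            echo "building release-$$stamp" ;\
            tar czf "release-$$stamp.tar.gz" src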


> I quite seriously believe that a 1000 line shell script only exists out of error.

Perhaps the best use case for Oil is to provide a debugging environment where you can figure out what your legacy shell scripts are doing, and rewrite them in another language.

> Unlearning bash-isms is not a liberty I can afford, as I need to be proficient when I SSH into a machine I do not own or control.

This is a slippery slope. I have heard things like, "don't make your own custom aliases / shell functions, because they won't be available when you SSH to another machine." Forcing yourself to always use the lowest common denominator of software is not a fun path.


Yes, absolutely. Oil has a principled representation of the interpreter state, so it should be easy to write a debugger.

(Although note that bash actually has a debugger called bashdb, which I didn't know about until recently, and I've never heard of anyone using it.)

One intermediate step I would like to take is to provide a hook to dump the interpreter state on an error (set -e). Sort of like a "stack trace on steroids".
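(In today's bash you can approximate a crude version of this with an ERR trap -- a rough sketch, not Oil's feature, and the dump path is just an example:)

    # Dump every shell variable to a file whenever a command fails under set -e.
    set -e
    trap 'declare -p > "/tmp/shell-state.$$"' ERR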

If anyone is running shell cron jobs in the cloud and would like this, please contact me at andy@oilshell.org.

I want to give people a reason to use OSH -- right now there is really no reason to use it, since it is doing things bash already does. But I think a stack trace + interpreter dump in the cloud (like Sentry and all those services) would be a compelling feature. I'm looking for feedback on that.

Shell scripts have a very small amount of in-process state, so it should be easy to dump every variable in the program.

Also some kind of logging hook might be interesting for cloud use cases.


> Forcing yourself to always use the lowest common denominator of software is not a fun path.

Second this. Been there, and back, and there again, and recently back again.

Customize the hell out of your shell, make it your place, make it nice. You're spending your day there, every day. Treat it like you'd treat your work desk.

If it's a stranger's machine, well, OK, suffer through the 10 minutes of troubleshooting, an occasional session with busybox also helps keep "pure POSIX" skillset fresh. If you're becoming a regular there, it's time to think how to make your dotfiles portable. google.com/search?q=dotfiles


While I appreciate your perspective, I think it's more from the "occasional SSH to a foreign server" mindset. Scaling "the way I work in a shell" to huge fleets is a nonstarter, and as a working SRE, that has been the biggest thing holding me back from zsh, fish, and other alternatives.

OTOH, I'm also against, for example, installing zsh fleetwide in production to accommodate choosy folks. So I'm on both ends of the problem, and I know it.


- As soon as you have more than 5 boxes, you need different tools for your everyday tasks; interactive SSH sessions just don't work. But you know that already.

- So you've found yourself SSH'ing into a particular box, because that's just the best way to work on a problem. Either it's a pet box (in which case it makes total sense to drop your dotfiles there), or it's a cattle box and you don't care about the trash you left behind, because autoscaling will clean it up for you.


It's actually not that difficult to maintain context if you use completely different shells (with different-looking prompts) on your own machine vs the fleet. It's like code-switching when you talk to your friends at the bar vs co-workers.


My zsh/nvim/... dotfiles are portable and compatible with FreeBSD, macOS and a bunch of Linux distributions, but it's absolutely unacceptable to run those on a machine I don't control. You don't mess with configuration on another guy(ette)'s server.

My setup is customized, but the core experience is kept relatively untouched. If someone experiences a problem, I need to be able to debug it in place efficiently. I write drivers for the company's hardware for a living—troubleshooting takes more than 10 minutes.


> Perhaps the best use case for Oil is to provide a debugging environment where you can figure out what your legacy shell scripts are doing, and rewrite them in another language.

My approach to that problem is to just instrument scripts to write out the command line every time a tool is called, and infer the logic behind it. When I started here, build/package/test on Windows were controlled by 6000 lines of Perl... A printing version of system() and a few days later, it's 40 lines of batch and 150 lines of Python.

> Forcing yourself to always use the lowest common denominator of software is not a fun path.

Never said it was fun. Also, I need to be able to use the lowest common denominator; I don't necessarily have to be the lowest common denominator. I use a configured zsh and nvim instance myself, but I take care to ensure that if I'm stuck with vi and (ba|da|k|c)sh, I'm still productive. The core behavior of my Vim instance stays close to stock, but it has a bunch of extras like fuzzy search, linting, a nice theme, etc.

And, if I can make a choice without caring, my needs for a shell and a programming language are in direct opposition. I want my shell to put all of its energy into a useful interactive experience.


> "don't make your own custom aliases / shell functions, because they won't be available when you SSH to another machine."

If you're logging into a machine to the point where you're editing files or need your customizations, in this day and age you're doing it wrong, IMHO.


And what would you recommend instead, then?


Doing what wrong?


My personal threshold for upgrading from shell to Python is about "I need an if statement".


Yeah I've seen that Python code. Every other line is a call to os.system('...').


You say that like it's a bad thing. Bash does not support niceties like typed variables, named parameters, or classes. Its conditional operator is an external program with a required last argument to make it look like sane syntax. Little things like interpolating variables, array access, or taking command line options are awkward and prone to failure. It doesn't have the insane syntactical issues that csh had, and POSIX is ubiquitous, but that's a pretty low bar and an inadequate justification for using a fundamentally limited language.

Python could have a nicer syntax for invoking the system shell -- personally, I think that Ruby's backticks feel very Unixy -- but if there is a better interpreter on the system there is no reason not to use it. Even if half your code is calling out to the shell, you still have the benefits of typed variables, a nicer syntax, the ability to abstract code into classes, a variety of nice enumeration options, and all the other benefits of a "real" programming language.


    declare -a myArray=("foo" "bar" "baz")
    echo "${myArray[2]}"    # prints "baz" -- bash arrays are zero-indexed

Bash is weird.


I've had good luck with this module. Much nicer than using os.system() imo.

https://amoffat.github.io/sh/


never use os.system. os.system is a hack.

Subprocess is a bit verbose to use, though... A wrapper would be nice.


I wasn't recommending os.system() ever; OP used it in the example. subprocess is better, as you said. I think the sh module is better than subprocess as well.


Since Python 3.5 it's just subprocess.run(). The subprocess module was refactored. I hope that eliminates the need for all these annoying (IMHO) wrappers.


Ah, that's nice!

... unfortunately, I'm stuck with python2.4 or 2.6 on most machines my Python code has to run on. :/


> I'm stuck with python2.4 or 2.6

Not even 2.7?! 2.6 hasn't had a security update since 2013[0]; I dread to think how old 2.4 is.

[0]: https://www.python.org/download/releases/2.6.9/


Python2.4 is what RHEL/CentOS 5 has to work with... And the extended support for that bloody release lasts till 2021.

You can install a newer version, but our CentOS 5 machines are meant for compatibility tests, so modifying their setup is unacceptable.


Have you tried using the SCL repos from Red Hat/CentOS? I use them at work to enable different versions of Python at a system level if I am unable to muck with it because of yum.

I believe they are available for CentOS 5.x?


Every other line of my code is check_call. Could you expand on the issue you're observing?


check_call for the win! I contributed check_call to the subprocess module for exactly that purpose - easier shell-like scripting in Python :-)


> They must be short (<=100 lines), and if their complexity exceeds a certain threshold then a shell script is no longer acceptable regardless of length.

Well, the question is: what do you use as a replacement? For my scripts I tend to pull in some version of PHP as soon as it's needed. Perl is universally available in anything based on Debian or Ubuntu, but it's a maintenance nightmare. And PHP doesn't care whether I use spaces, tabs or a mixture, or whether a file has, for whatever reason, mixed line endings, whereas Python may or may not barf on encountering any of said things...


Anything that supports pipes and signals, and is easy to install. I've started using Steel Bank Common Lisp (SBCL) because it's the default implementation used with the Roswell installer.

https://github.com/roswell/roswell

(As a side-note: Perl 5 was really the ideal language for this, by design; we might be in a different world today if it hadn't been derailed by Perl Forever^W 6).


I mean, Perl 5 wasn't actually ideal for this, but it was better suited to the task than anything else that actually exists.


Perl 6 actually exists and has a REPL which Perl 5 lacked.


If I was ever to be religious, I would believe that PHP was the work of the devil. It's just so... uuuugh. It doesn't make sense. Did you know that you can increment a string in PHP? Yup. Of course you can't decrement it, that's crazy talk.

Anyway, I usually avoid perl. While definitely a true programming language, I really don't like it. I find that it lacks clarity, and quickly collapses into bash-like hackery.

I personally use Python when it's a notch above shell, but not enough to pull out the big guns.

(I used to write a lot of perl when I was a kid, even made an accounting system in it, so my dislike of perl is from experience. We also had a 6000-line perl build script at my current job, which was the last straw. Rewrote it to 40 lines of batch and 150 lines of Python. Likewise, I also wrote a lot of PHP, and can no longer consider it a proper programming language.)


It sounds like the original build script was poorly written. Bad (or hurried) programmers write bad code no matter the language.

Are you claiming that its size was because Perl forced it to be long winded?


My dislike of Perl comes from personal experience in using it. I don't think one can have valid strong opinions of a language without having used it extensively.

I feel like some programming languages, no matter the structure of the application, feel unstructured and write-only in nature. To me, Perl is one such language. Python has a similar effect, but I feel like it is more controllable as long as the project is kept within a sensible length.

So while that abomination was primarily the fault of the designer, I do think that Perl invites this behavior one way or another.

(I gave up entirely on reading that disaster of a build script, and instead instrumented system() so that I could trace the calls to various tools and infer the logic myself.)


If your script is over 80-120 lines, it's time for modularization. No sane person writes 1000-line scripts, and the fact that the Oil author uses such a scenario as a reason to use their shell says something very disconcerting about the quality of the code; I wouldn't touch Oil with a ten-foot pole.


Please read the FAQ:

http://www.oilshell.org/blog/2018/01/28.html#toc_7

I didn't write those scripts, and you rely on them, whether you know it or not.

Do you use Unix? Do you use the cloud? You rely on them. See:

https://www.reddit.com/r/linux/comments/7lsajn/oil_shell_03_...

Also, shell has functions.


I see. My apologies for being so critical. I agree that POSIX is incomplete and out of date.

However I do think it is a disaster that people can't break up their shell code into more modular components, and there is no excuse for it.

And I especially do not wish to discourage you from trying to reinvent the shell, because someone has to do it; but at minimum I don't see myself moving away from my shell for the better part of a decade. Hopefully by that point Oil's community will have matured to the point where I feel comfortable trusting it to be stable and free of any critical privilege escalation bugs.

Good luck!


If that's the case, then there are a lot of insane people out there. I sympathize with your opinion on how things should be (though I may or may not agree), but at the end of the day we need to look at what the reality of the world is, not on the ideal way we think people should be using the tools we already have.


Well to be fair, there are a lot of insane people out there. :-)


Just because writing 1000+ line bash scripts is something that happens doesn't mean it shouldn't be avoided.

People also implement new SQL injection bugs every day—just because a lot of people do it doesn't mean we should just let it slide.


> consider these two groups of shell users:

> 1. People who use shell to type a few commands here and there.

> 2. People who write scripts, which may get into the hundreds or even thousands of lines.

> Oil is aimed at group 2. If you're in group 1, there's admittedly no reason to use it right now.

From my perspective, fish is basically the opposite of oil in that it mainly targets group 1 (admittedly I'm biased in that I rarely write shell scripts at all). IME fish scripts as a direct replacement for bash scripts are pretty rare, but that doesn't mean it's not a success if it's aimed at interactive use.


"to type a few commands here and there"

Where's "my shell is my desktop environment"? :/


/pedantic

Everyone's desktop environment is a shell.

But any shell you like is suitable for that use case since it doesn't have to be compatible with anything other than your fingers.


First, I think it's a case of momentum. All the scripts are written in shells that are distributed by default in most distros. sh, bash, etc. I write lots of scripts for fish, and I can't convince my co-workers to install fish to run my scripts. They already have bash, they already know bash, etc.

Second, I was recently chided by another fish user for writing fish scripts: "you're using it wrong" was the sentiment. The argument was it's a shell intended to be interactive, but extendable. I don't know how pervasive this line of thinking is, but who's going to learn all about their shell if they're not supposed to write scripts in its language?


I've been using fish for years but I never write scripts in it. What's the point?

There's sh, which is the most portable. Why write scripts in anything but an actual programming language when things get more complex?


The point is the compose-save-edit cycle: Compose the script in the shell, save it to a file, and then, maybe, edit it to add better argument handling or support for corner cases you didn't consider in the interactive environment.


One question though:

Do you really need 1000+ lines of shell scripts? I have almost 20 years of shell scripting behind me, and I still follow the rule of not writing shell scripts longer than 1-2 pages, using something else for longer projects or breaking them down into much smaller chunks invoked from a main.sh.

Fish is for making software engineering and system administration easier, with functionality like command completion (without tab), and it works like a charm. But I would never bother to rewrite my bash scripts in fish, simply because bash is on all the servers and fish is on none of them by default.

I am not sure what you are trying to solve here but good luck.



This is not an answer, just a problem statement.


Totally off topic: great blog. I was reading the archives these days, they’re awesome ;)


Thanks! Feel free to comment (with the reddit links at the bottom of each post)


My two main comments would be:

1. Keep up the great work (both oil and the blog!) Both seem fun :)

2. Reading your blog really shows that despite the great ideas, shell languages come from an age when computing was still in its infancy. We're now approaching our teen years, but we're still barely past infancy. So many awkward design decisions and organic evolutions. A shame that replacing POSIX at this point is a gargantuan undertaking.


What is your take on Ion? https://doc.redox-os.org/ion-manual


Ion looks great, but its goals are different -- most importantly, it doesn't provide an upgrade path from bash. (This is also the biggest difference between Oil and fish, Oil and Elvish, etc.)

I talked with the authors of Ion on github almost a year ago. Ion was influenced by Oil, in particular this post:

http://www.oilshell.org/blog/2016/11/06.html


FWIW, POSIX emulation is on my TODO list: https://github.com/elves/elvish/issues/205

Not sure whether I will actually get to it though. Parsing sounds like a lot of headache. Your work is very impressive, but I’m not sure I will want to go through that... :)


Yeah, running ~/.bashrc is tough, and I'm not sure even Oil will get to it. Not only do you need parsing and execution of the bash language, but you also need builtins like bind and set -o vi (e.g. see "help bind").

Maybe bind is easier than I think; maybe it's just a few readline calls. I haven't looked into it.

But also, I noticed almost no shell scripts use extended globs like @(foo|bar), whereas the bash-completions package uses them EVERYWHERE. Completion scripts are almost like a different dialect than other shell scripts.
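(For anyone who hasn't run into extended globs, a tiny bash sketch with made-up filenames:)

    shopt -s extglob        # extended globs are off by default in scripts
    ls @(foo|bar).txt       # matches exactly foo.txt or bar.txt
    ls !(*.bak)             # matches everything except names ending in .bak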

If you want to source /etc/bashrc on most distros, you're going to end up with thousands and thousands of lines of plugins from the bash-completions project.

The parsing is a big headache, but I think it forced me to learn how people use shell, which will benefit Oil.

It's been a full year since our alternative shells thread... might be time to revive it and see what's changed! Congrats on the Elvish releases!


Good point about bindings and completion scripts. Binding commands themselves are not a big problem; the problem is replicating readline commands, and there are loads of those.

I'm not sure how much work is needed to implement things like extended globbing, but my gut feeling is that once you can parse them, you're more than half done. The functionality is quite simple, as far as I can tell.

I was mainly thinking about environment variables, aliases and wrapper functions when I said sourcing bashrc; that's probably what most people's bashrc files are mostly made of.

We totally should revive our alternative shells thread! I'll try to summarize my progress in the last year. Congrats on your progress with Oil too! ^_^


The spaces where you mention not hearing about fish are ones where you generally don't look for interactive applications/features. You also generally don't take on a dependency like another shell when you are likely installing Ruby or Python already.

The two shells target different use cases. Hopefully they are both successful in attracting a good audience.


I've never met a devops person who wanted to write a 1,000+ line shell script. There are real languages that you can use instead.


I totally love fish, it's the first shell that made me replace bash as my default shell (zsh was "just not enough" to justify losing compatibility, for me).

But is this project really overlapping with it? I see fish as mostly "UX"-centered, while oilshell - as far as it's stated here - reminds me more of PowerShell (I found the idea of having a modern language behind it so cool when it was released; too bad it wasn't my ecosystem): it looks like oilshell is targeting scripting more than UX.

Anyway, I love the idea of having several shells who try to have a new look on what a shell is.

EDIT: oh, btw, let's not assume that no significant scripting can be done with bash; just look at the incredible work from the dokku team ;) https://github.com/dokku/dokku


Why are you worried about losing compatibility? You can run bash within zsh.


Indeed, Fish is bloody amazing! I've been using bash my whole life, and now that I've seen Fish I don't want to see any other shell (except some new one that might happen to be made even better) ever. It feels like a quantum leap of the same kind as the switch from ancient printer-oriented Unix shells (which didn't even let you edit the part of the command you'd already entered) to bash.

As for the scripting language part, however, I have always wondered why people use headache languages like bash/sh when we have Python. Does anybody have a clue? I'd appreciate it if you could share.


At its best, invoking commands and pipelines in shell is much more pleasant than doing the same in Python with subprocess and so on. The flipside is that at its worst, shell is a horrible language.


Well I only started reading about Oil just now, but actually it seems to be the exact complement of Fish.

That is, Fish intends to be a useful interactive shell, and if it is also scriptable, that's because you need scripting for it to be useful. Fish doesn't make a serious attempt at being the language in which system scripts are programmed.

Oil, on the other hand, is a concerted effort to formalise and rigorously implement the language in which existing system scripts are written. The author believes that is a starting point for a good interactive shell -- but programming comes first.


Shameless self-promotion, but I'm writing my own shell, murex, as well.[1]

The goals of mine are akin to Fish in terms of REPL use but with a greater emphasis on scripting.

Like Fish, murex also does man page parsing (in fact I wrote mine before realising Fish did the same), but unlike fish, autocompletions can be defined in a flat JSON file (much like Terraform) as well as dynamically with code.

Currently I'm working on murex's event system, so you can have the shell trigger code upon events like file system changes.

My ultimate aim is to make murex a go-to systems administration tool - if just for myself - but the project is still young.

[1] https://github.com/lmorg/murex


I messed around with oil earlier this week when I saw this posted elsewhere and I've been a fish user for about 4 years now.

I'll have to give yours a spin too. I'm glad there's a lot of shell innovation right now. I'm all for breaking posix shell standards and creating things that are way more usable. Fish's prompt customization, functions, highlighting, completion and searching are pretty amazing.

I realize a lot of these little projects will come and go. No matter what, they're great learning tools for the creators/developers, exploring what it takes to make an interactive shell.

Still, I hope we see more stuff like fish come out (and make no mistake, fish took a lot of years and a lot of devs. In the early days my instance would crash every once in a while in ways I couldn't easily reproduce). It's great that we're finally getting away from the traditional bash/zsh/ksh stuff and into newer shells that make coding and navigation easier.


Thank you, and you're absolutely right about the process of creating a shell being a great learning tool! I've learned so much writing murex; even for some stuff I assumed I knew, I quickly discovered my understanding wasn't quite right.


I just have to say, that's a beautiful name for a shell.


Thank you. I found choosing a name for the shell harder than writing any of the code


Haha, really? I half assumed you pulled a Knuth: the name came first, and then you had to come up with ideas worthy of it.


> Another interesting shell to check out is elvish, lots of new ideas there (even if awkward to use).

Elvish is pretty nifty, but the biggest failing point to me is that the fancy rich pipelines really work just for in-process stuff, which for me kinda loses the point of being a shell. Of course I do realize that rich (polyglot) interprocess pipelines is a difficult problem; some might say a pipedream.


Interprocess pipelines are a difficult problem only because there is no standard encoding. Elvish has pairs of builtin commands like from-json and to-json, so if your command takes JSON and writes JSON, you can use it in a pipeline like:

    ... | to-json | your-command | from-json | ...
It is also trivial to wrap it into a function like:

    fn f { to-json | your-command | from-json }


PowerShell on Windows is the only production system that I have used which passes "objects" through pipes.

It's a challenge for me to use well; not sure all that richness is composable. Better programmers than I am would know.


Thank you for your work on fish. It's simply awesome. I love that I have only needed minimal amounts of tweaking. So far my config.fish is ~7 lines in total.


Do you have anything to say about rc https://9fans.github.io/plan9port/man/man1/rc.html ?


If you like rc, do you like es? http://wryun.github.io/es-shell/


I don't know. I don't like installing new things on servers in general.


But you do test new things somewhere else before installing them on your servers... at least, I hope so.


So, bash it is. :)

I love using fish and enjoy scripting with it too, but I'm hampered by this fact as well, and mostly use bash at work when others might see it or have to use it.


Spin up a virtual machine! (Or an EC2 t2.micro instance, free of charge)


Tell me if this is just me being a novice, but the main dealbreaker for me with Fish was the fact that I had to re-set up everything like PATH and stuff (tbh I'm not really sure what; there is a long list of weird customizations I've made to support various projects/libraries/etc., copied off Stack Overflow, that I don't remember). I would try to run commands that work in bash that wouldn't work in Fish. I would get so confused by it all that I just took fish off, as much as I liked it.


fish-shell is the first thing I install on any *nix box I work on, including macOS, VPS servers, desktop (Ubuntu), and even Windows Subsystem for Linux.

The feature I use the most is automatic history search by typing part of a command and hitting UP to search the history. I also find the scripting language more straight-forward.


In bash, you can type ^R, then text, to search for that text in history interactively. Hit ^R again if you want to see another match for the same text. It saves a lot of typing.


It sucks though. The shortcuts are kind of annoying (Ctrl-r, Ctrl-s) and there are no visual cues about the list you're going through.


> The feature I use the most is automatic history search by typing part of a command and hitting UP to search the history.

Doesn't bash have that same feature? Or is there some subtle difference between what you're describing and what bash does?


UP in bash swaps the command line to your previous command; it does not search your history. Either way, please don't move your fingers all the way to your arrow keys for this.


I guess I'm still missing something here. In bash you can type the first couple of characters of a command and then PGUP (or whatever key you map for this), and it searches backwards through your history for matching commands. What does fish do differently?


> The feature I use the most is automatic history search by typing part of a command and hitting UP to search the history.

This is available in bash/sh by setting this in your ~/.inputrc file:

  "\e[A": history-search-backward
  "\e[B": history-search-forward
  "\e[C": forward-char
  "\e[D": backward-char


I used bash first, then zsh. You have to try fish: it shows a "ghost" of the first matching autocompletion on the same line, which changes dynamically as you type. It also has a cool memory effect for figuring out history completions based on the directory you are in.

Like csh on FreeBSD, alt+(up/down) can be used to search history for arguments instead of lines, so if you do something like

   touch some/long/directory/path/to/file
   ....
   rm so<ALT+up>
fish will search all individual arguments (correctly handling quoted spaces, etc) for arguments starting with "so" and it'll suggest the path in question, even though the head here (`rm`) differs from the original (`touch` in this case).


That sounds a bit dangerous, especially with rm.

As always, with great power comes great responsibility I guess.


Or just use ctrl-R et al.


Hi, Elvish author here :) I am really glad that you find it interesting, but I also wonder which aspects of Elvish you find awkward? Like Fish, a friendly out-of-the-box experience is also one of the goals of Elvish.


Thanks for your work on fish. I've been using it for some time and am very happy with it.

What's the process for requesting functions be added to core? I had to write my own to get bash's 'dirs -v' functionality. My solution depends on sed and is no doubt a hack.


I love fish! It's my jam. Thanks for doing what you do!


After using fish for 3 years, I'm finding there is very little reason to have my login shell maintain backwards compatibility with bash.

The only time I run into issues is when a command expects to manipulate environment variables via bash syntax.

I think the fish documentation WRT scripting could be much better, but the language is more elegant than bash or PowerShell IMHO.


Interesting seeing so many fish fans. I absolutely love fish. It makes my everyday shell usage so much nicer. But it seems like a totally unknown shell to most people. I never see anybody else use it at any job I've had.

I did use fish a bit as a scripting language, but I decided that for anything of any size I much prefer Julia. For typical file system navigation, fish is better, but Julia is actually pretty decent as a shell, despite being a real language. So writing shell scripts in it is pretty nice.

In the beginning I wrote separate programs executed from fish shell. But now I just fire up Julia as a shell and run functions directly there interactively.


I remember a few years ago poking at julia and thinking it would make a really good shell language. The thing that killed it for this use at the time was slow startup; is that better now?


Much better; it's certainly worth giving it another go. It's still much slower than Python, but it's quick enough that I don't notice it at all.

  $ time julia -e 'println("Hi")'
  Hi

  real    0m0.241s
  user    0m0.216s
  sys     0m0.196s
  $ time python3 -c 'print("Hi")'
  Hi
  
  real    0m0.046s
  user    0m0.020s
  sys     0m0.000s


Inspired me to install and try; about 350ms on my macbook pro. Much better than it used to be, but still more than you'd want for everyday commands (at least if you're picky about having your computer feel responsive, which I am). :-)


I sort of agree that there isn't any strong reason for an interactive shell to have 100% bash compatibility. However, when I tried to convert to using fish I found my muscle-memory used too many simple history substitutions like "!!" and "!$" (which actually pre-date bash; they arrived with csh 40 years ago!) which were missing. Ultimately I gave up and went back to bash.

It sort of is an "uncanny valley" for a text interface. It feels close enough to a traditional UNIX shell that I start to interact with it like one... but has enough differences that I found myself constantly tripping over them.


It was difficult to implement !! and !$ in fish (I rarely actually used them in bash, so it wasn't an issue for me) and I know others who found that annoying. However if you dig around, there are a couple of solutions on Stack Overflow and blog posts that add in that functionality using plugins or functions. It's not exactly the same, but some of them work pretty well.


You may be underestimating the importance of these particular operators to experienced users. I at least would be very dissuaded from trying an alternate shell without a simple way to enable these.


I use !! maybe ten times per day. !$ can be implemented in your PS1 easily.


I used to. Now I just Ctrl-P.


> The only time I run into issues is when a command expects to manipulate environment variables via bash syntax.

And in my experience 90% of those are in the form `FOO=bar command` which can be replaced with `env FOO=bar command` and works just fine in fish.


Both support for `&&` and `||` (instead of `and` and `or`) as well as supporting `FOO=bar command` are under consideration for fish 3.0 to ease the migration path. The former is pretty much going to happen, the latter if we get around to it, DV.


This would be much appreciated. I know there are a few people on my team that can't use fish due to our npm scripts needing to be compatible with cmd.


Why don’t your npm scripts specify /bin/sh as the interpreter?


Where things get problematic is with commands that send a set of environment variables to stdout, like

    eval $(ssh-agent)
Sure you can get addons (like bass[0]) that will translate the sh environment variable settings to fish, but it’s a pain to have to do that (and remember wth it was called).
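(For context, ssh-agent emits Bourne-style assignments, roughly like the sketch below with a made-up PID and socket path, which is why a plain eval of it chokes in fish:)

    SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXX/agent.12345; export SSH_AUTH_SOCK;
    SSH_AGENT_PID=12346; export SSH_AGENT_PID;
    echo Agent pid 12346;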

[0] https://github.com/edc/bass


Non-bash-compliant shells suck from the "Google it, copy it, and it works" perspective. I even shy away from using zsh because most setups assume bash.


Just launch a subshell and problem solved. I switch between zsh, bash, cmd, powershell quite frequently.


Same. The assumption that both types of users need to have their use case solved by the same language is foreign to me.

I use Fish as my shell, but I do scripting with Bash. There’s nothing that prevents you from doing so unless you’re sourcing a file.


The idea of using a real programming language as a shell is a common one, but I'm not sure it's really a problem that can be solved without reworking the kernel interface. Whatever you do, pipes will still be byte streams, error handling will always use integer return values, you'll always have stdin, stdout and stderr, and job control and signal handling will always work pretty much the same way. The kernel exposes an interface, and userland apps expect the shell to behave in a certain way; there's not a lot of wiggle room to do things differently in between the two.

Not that the POSIX-like shell syntax isn't all sorts of clunky and odd, but I almost consider that a feature: it's a deterrent that forces you to move to a "real" scripting language when the concepts become too complex to express in a shell script.


The whole point of these projects is to evolve the shell, to go beyond the current bash standard and make working with the shell more fun and more productive. Unix may seem to have limited default interfaces, but this was by design: the idea was that the programmer/user knows much better what he needs, so let him build it. The system's role was shaped to provide robust universal mechanisms. Also, the current standard shell is very limited compared to even the existing Unix interfaces. For example, the kernel provides the select/poll system calls, but the standard shell has no facility to use them, so simultaneous processing of two or more data streams without blocking is not currently possible. A new shell could finally provide them.


Finally? You don’t need a new shell. Write select(1). Job done.


I've been considering this problem myself and I believe there are ways to get around the limitations you've described. It's not pretty (from an ideological perspective) but it does work and is mostly invisible to the end user. The problem is when users hit those weird edge cases where the hack becomes visible. :-/


> However, Python and Ruby aren't good shell replacements in general. Shell is a domain-specific language for dealing with concurrent processes and the file system. But Python and Ruby have too much abstraction over these concepts, sometimes in the name of portability (e.g. to Windows). They hide what's really going on.

Excellently put. POSIX shell languages have fantastic capabilities you just can't get in most other languages. I would love to see a more safe, more sane shell language gain enough popularity to change the narrative that "shell scripts are dangerous and impossible to maintain."

The contrasts to Python and Ruby made me think of xonsh[1], a Python-based shell that can dynamically switch between Bash-like syntax and standard Python. It's not quite ready to become my daily driver, but I'm still excited about it.

[1]: https://xon.sh


Shell is my favorite domain-specific language. But many (including myself) would argue that domain-specific languages are generally better embedded. Many projects aiming to mix shell with general-purpose languages build a nice embedded DSL for subprocess/pipeline management. Some others find a convenient way to run shell commands or pipelines by mixing grammars and trying to disambiguate them. [Shameless self-promotion] I've been working on a project, called Rash[1], that aims not only to have a nice DSL for running process pipelines, but to make a syntax for using all the functionality of the host language as a command language. It is hosted in Racket, and can be embedded in Racket at the module or expression level, and normal Racket expressions can be embedded in Rash (they can alternate arbitrarily deep). It supports process pipelines, Racket function/object pipelines, and mixes of the two. It's still alpha and has a TODO list a mile long, but I've been using it as my daily driver interactive shell for months and have loved it so far.

[1]: https://github.com/willghatch/racket-rash


FWIW I have your project on my wiki page :)

https://github.com/oilshell/oil/wiki/ExternalResources

I guess what you mean by embedded is that it should be an embedded DSL in a full-fledged programming language? I don't quite agree, since there are at least 20 projects like that on the wiki page, none of which is popular.

Probably the most popular one is eshell, in Emacs Lisp?

But if there's something I don't know about I'd be interested in hearing it. This idea goes back at least 20 years, e.g. to scsh. And it hasn't taken off.

But certainly I don't begrudge anyone if their favorite environment is Racket and they want to stay in Racket. That's a perfectly reasonable thing. It's just not what I would expect anyone besides racket users to use.

One reason I'm interested in shell is that it's the lowest common denominator between different groups of programmers. C/C++ programmers use it heavily, as do Python, Ruby, JS, and Go programmers. Everybody uses it.


Yes, actually I've looked over all the shells on that wiki page. I think most of them haven't taken off because either their host language was poor or unpopular, their design or implementation wasn't great, or they didn't really solve the right problem. I think the idea of embedding a shell into a general-purpose language still has a lot of merit. Most of those projects trying to embed a shell DSL into a general-purpose language, scsh included, are basically for programming and not interactive use. Shells that are only interactive or only for programming end up fulfilling less than half of the purpose of a shell, in my view, because interaction and scripting feed back into each other in a virtuous cycle. The only ones on that list aside from Rash that try to be embedded in a general-purpose language while being good at both interaction and programming are Xonsh and eshell. Xonsh is pretty new (I wasn't aware of it until after I had made Rash), and eshell is in Emacs Lisp (which is not a very good programming language or platform to build on for anything except extending the emacs editor).

Rash also tries to be better than the competition by adding object pipelines (much like Powershell; it makes it much more reasonable to write system administration commands in the host language, and to have rich interaction and inspection of command results), user-definable pipeline operators, and generally tighter integration with the host language while still having a light syntax for basic commands.

I would like to be able to have my command language tightly integrated with my programming language, and be able to define my system administration commands (and interactive shell plugins and completion) as functions in the host language (while still being able to naturally use external programs). And I would like my shell scripts to be able to grow and potentially even turn into full-fledged programs, or modules of some larger program. I think there are a lot of benefits to the approach I'm using (which would be too long for a comment here).

That said, I'm not holding my breath for it to catch on widely any more than I'm holding my breath for Racket to take off as a popular programming language (although I frankly wouldn't mind either one). I think a better Posix shell is certainly a noble effort; whichever better shell ends up becoming popular, we certainly need one. And an automatic upgrade path for existing scripts sounds great. So I salute you and wish you good luck with it. Also, as someone looking at shells and their features to copy the best ones, your wiki is a great resource. So thanks.


OK great, glad you have made use of the page. We had an "alternative shells" thread about a year ago between the authors of Elvish, NGS, Oh, and mash. Those were the main active/nascent shells I could find.

It might be time to start that again to see what ideas people have and what has changed. If you're interested e-mail me at andy@oilshell.org.

I'm also interested in the Shill shell, which as I understand was written in Racket, and then somehow they moved away from Racket? I'm not sure. I think it was because of the runtime. I also saw some efforts to move Racket to Chez Scheme.

I very much like Racket as an idea -- a meta language -- but I haven't gotten a chance to play with it too much.

And I did experiment with femtolisp as a basis for Oil -- the lisp used to bootstrap Julia -- but I decided against it.


Thanks, and you have put it very well too. As I've learned from many of these threads [1], there most certainly is a narrative that shell scripts are dangerous and impossible to maintain. People are really angry about it!

And of course I agree with that! That's the whole reason for Oil.

[1] http://www.oilshell.org/blog/2018/01/31.html


I disagree with this strongly.

> concurrent processes

Job control is disabled in shell scripts, and your only other option is juggling process ids. Combined with nearly nonexistent exception handling, using anything more than a single process at a time is like pulling teeth.
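
For reference, the PID-juggling pattern looks something like this (a sketch; task_a and task_b are hypothetical commands):

    # No job control in scripts, so collect PIDs and check each exit status by hand.
    task_a & pid_a=$!
    task_b & pid_b=$!
    wait "$pid_a" || echo "task_a failed" >&2
    wait "$pid_b" || echo "task_b failed" >&2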

Things like ssh have super awkward workarounds using control pipes.

I'm not sure I've ever seen a shell script use concurrent processes in the wild.

Python's subprocess library is excellent and makes concurrent processes a breeze.

> file system

With some minor exceptions, I can't think of any FS ops I'd do in shell that aren't just a couple letters longer in Python and 10 times more flexible.

The only reason to use shell is that it has a simple syntax for pipelines.

I rewrite any script > 2 lines in Python and have no regrets.


I essentially solved these problems with fish shell and Julia programming language.

For all sorts of interactive stuff I use fish, because it works the way you want for the most common tasks. I can use it to really quickly match and get back previous statements or do a completion.

Also much easier to configure and grok than bash, because it has saner syntax and is a simpler shell language.

However when writing shell scripts I use a real language like Julia. It integrates very well with the shell world so I don't find it problematic to do this. It is very easy to read output from processes and pipe stuff. Much nicer than say Python or Ruby.

You get built-in syntax to deal with shell stuff, but it isn't taken so far that you end up with a mess like Perl. Julia is actually a very clean and nice language, which also happens to be blisteringly fast and to have LISP-style macros.


I see that this person has opted not to use python 3 because it is 'less-suited to shell-like problems' than python 2.

In an effort to understand the reasons for actively choosing against 3, does anyone know what problems those would be?


(author here) I was hoping I would have time to write part 2 of the FAQ before this hit HN, since that is definitely a FAQ.

tl;dr I used Python for prototyping; it will be removed.

Consider it an implementation detail -- building and running Oil does not require Python, as a portion of the interpreter is bundled in the tarball. Python 2 vs. 3 doesn't really matter. It was in Python 3 at one point.

More discussions here:

https://www.reddit.com/r/ProgrammingLanguages/comments/7tu30...


From the post:

> I encountered a nice blog post, Replacing Shell Scripts with Python, which, in my opinion, inadvertently proves the opposite point. The Python version is more difficult to write and maintain.

Here's the link: https://medium.com/capital-one-developers/bashing-the-bash-r...

I think, roughly speaking, the fact that Python 3 is much closer to a sane language for engineering means that it's less suited for scripting.


The few times I tried to do shell-like things in Python, the absolute pain of actually running commands and getting at their output might be part of the reason.

What is A=$(cmd) in bash is an absolute pain in python.


That was my main pain initially after I finally switched away from writing my scripts in Bash, but it actually turned out to be pretty easy. I wrote a quick helper function that does 99% of what I need:

import shlex, subprocess
def _call(cmd_str): return subprocess.check_output(shlex.split(cmd_str)).decode("utf-8")

Definitely a verbose monstrosity compared to doing it in bash, but more than worth it to avoid the garbage fire that is Bash. And it only works for simple cases.


Thank you for this.


Have you tried plumbum or sh.py?

They work well, if a bit magically, until you need to background a process.


Use xonsh!

Xonsh is an awesome fishy shell that's a Python superset. $() is built right in.


A while back I wrote a little test snippet[1] to see how good such an interface can get in python.

    _("ls -la") | _(lambda x: x.split()[-1]) | _(lambda x: os.path.splitext(x)[1])
I wonder if there's a complete version of something like this out there. You can probably get pretty far staying in Python-land, plus, everything else is free (data types, standard library, adoption, etc).

[1] https://gist.github.com/pnegahdar/726cf2c65fc561db7831


Try playing with __getattr__, I'm sure "_.method" instead of "_(lambda x: x.method())" could be a thing.


subprocess.check_output(cmd) runs a command and returns the output, raising an exception if the command fails.


you can do this in ipython

A = !cmd


One reason I can see is that while Python 3's unicode handling is saner for most use cases, it does not work the way one would expect in a unix shell.


Python isn't going to reconfigure an ASCII shell on your behalf. It's up to you to enable a Unicode locale and then PY3 just works.


The problem is bigger than a misconfigured locale, namely: what is the encoding used for the notionally-text zero-terminated strings that pass through the unix kernel (e.g. filenames and program arguments)? There is no way to reliably and portably deduce that from locales.


Finally a modern shell that understands the importance of COMPATIBILITY! This gives it a realistic chance of getting real adoption. Shells like zsh and fish will never get mainstream adoption because they are not compatible with bash.


Why do people care about "COMPATIBILITY" so much w.r.t. shells? It's so easy to use other shells to run your script.

> /bin/bash your_script.sh

And if your script is written with bash in mind, use a shebang:

> #! /bin/bash

And it will work perfectly fine on fish. As long as I have a bash binary, why do I need COMPATIBILITY?


Why do people care about "COMPATIBILITY" so much w.r.t. C compilers? It's so easy to use other compilers to build your program.

> CC=gcc make

And if your program is written with gcc in mind, put that in the Makefile:

> CC ?= gcc

And it will work perfectly fine on LLVM systems. As long as I have a gcc binary, why do I need COMPATIBILITY?

----

A big part of the long-term objectives of OSH is that it provides a way to move to a better language, without having to entirely rewrite your codebase. Think of it in a similar spot to C++ (originally); a big part of the design is that your old C (bash) code is already valid C++ (osh), and you can start using new C++ (osh) features wherever you see fit in the codebase, without having to rewrite anything first.


I'm very confused by your analogy. In the same way that I can hold onto a binary for /bin/bash, I can do the same for gcc-4.7 or whatever. If your software depends on a very specific version to remain compatible, you just keep that thing around which is precisely what the CC env variable is for. Eventually, you'll want to add / change / or update that software, and that's where it's nice that updated versions remain compatible so you don't have to rewrite everything in your codebase.

But a shell script is a script, not a codebase - they're written for one-off scenarios in very specific environments. I imagine that some folks have very large, complicated systems that depend on many bash scripts. But in such a case one can still use fish to just execute the bash scripts and pipe the output around as needed. If you have a single, large bash script that does some really complicated stuff, you can begin porting it by separating it into smaller scripts, calling those from a modern scripting language like Python and rewriting aspects of the process as you go.

A C or C++ codebase is in no way equivalent to a bash and scripting codebase since C and C++ are not nearly as easy to call from different versions - you would have to split up your build process across different versions of a compiler and build a library to accomplish the same thing.


> I'm very confused by your analogy. In the same way that I can hold onto a binary for /bin/bash, I can do the same for gcc-4.7 or whatever. If your software depends on a very specific version to remain compatible, you just keep that thing around which is precisely what the CC env variable is for. Eventually, you'll want to add / change / or update that software, and that's where it's nice that updated versions remain compatible so you don't have to rewrite everything in your codebase.

Exactly? What you just said is true for CC and SHELL; I'm not sure what is confusing.

> But a shell script is a script, not a codebase - they're written for one-off scenarios in very specific environments.

Ahh, there it is. IMO, a big part of the reason why many people think that people shouldn't write shell scripts is that they've only had to deal with shell scripts written by people who didn't treat it as a real codebase.

It's hard to say what private codebases are doing but:

- Docker, which is expected to run in many different environments, has thousands of lines of shell. That isn't one-off, and isn't for a specific environment.

- Most GNU/Linux distros can generally be thought of as many separate upstream packages glued together by many thousands (if not millions) of lines of shell. It might be expected to run in a fairly specific environment, but it isn't one-off, and is massive enough that I'd have a hard time not treating it as a codebase.

> ... you can begin porting it by separating it into smaller [programs], calling those ...

> A C or C++ codebase is in no way equivalent to a bash and scripting codebase since C and C++ are not nearly as easy to call from different versions

C is very easy to call from different versions; the ABI(s) have been very stable over the years (unfortunately, this isn't as true for C++). Just say `myobj.o: private CC=gcc-4.7` or whatever compiler is needed for myobj.c, to set the compiler for just that object.

You seem to be saying "I don't need C (bash) compatibility, I just need an FFI that makes it easy to call C (bash) code."

C/C++ and Bash/OSH are different, but not that different.


One of the weird cases with fish as the default shell is scripts and programs that expect bash when executing “system” commands. For instance a php or Java program running shell commands.

Having bash compatibility is a quality-of-life feature; no need to debug all the weird cases where it breaks for purely syntactic reasons.


Right, the system() function in C, which PHP probably uses, and is os.system() in Python, is DEFINED to run /bin/sh. You never know when a program might call it (although I agree it is almost always bad practice to use it -- use fork/exec instead.)

So basically you should never make /bin/sh fish, because then you will no longer have a Unix system (according to POSIX).
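
An easy way to see this from any login shell (a throwaway demo, not from the thread):

    # os.system() wraps C system(), which always execs /bin/sh -c;
    # $0 below is expanded by whichever shell actually ran the command.
    python -c 'import os; os.system("echo ran by: $0")'
    # typically prints: ran by: sh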


Not sure I understand what you mean. /bin/sh is the system shell, I don't believe it's possible to make it fish unless you change the symlink itself.

On ubuntu, /bin/sh is always dash and it's not possible to change without changing the symlink:

https://wiki.ubuntu.com/DashAsBinSh

So, if you use chsh to make your shell fish, it would have no effect on os.system or the system function in C.


PHP somewhat takes the user’s login shell, so I guess it’s yet a different system. If it was taking /bin/sh it would have been fine actually.


Don't blame the rest of the world for PHP's brain damage.


That's just lazy programming, and if anything, we should have more incompatible shells so that developers stop shelling out and run individual programs instead -- or, if they do shell out, actually declare the shell they're using at that point rather than assuming it will be sh-compatible.


Personally I leave bash as my login shell and just set my terminal emulator to launch fish on startup.


To copy and paste stuff from the internet.


That's a great argument for fish.


That you can't?


Why not to copy and paste:

1) what you paste might not be exactly what you copied, e.g.: http://thejh.net/misc/website-terminal-copy-paste

2) possible licensing issues

3) it enables cargo cult programming


For 1, my configuration of zsh solves the problem and avoids surprises by marking pasted shell code and letting me review it before running it, even if there are several lines. This is great for just pasting stuff from the internet and adapting it as I wish without having to retype the entire thing or use an editor. I think this is a default behavior in oh-my-zsh.

Actually, my zsh config mostly behaves like fish with great completion, syntax coloring and sensible history handling, with zsh syntax which is close to bash. So I can copy paste stuff from the internet and reuse my knowledge from the time I used bash because it was the default, and still leverage the improvements brought by zsh.

I don't understand 2.

For 3, of course you should understand what you run, but I don't want my tools to get in the way. My tools should allow me to do what I want, not prevent me from doing something for technical reasons. This is a question of education. If people want to run something without understanding it, I bet they will type it blindly if they can't copy paste it. They will just be slower at running things blindly, arriving at the same disaster a bit later. Please don't make me lose even more time when I do cause a disaster.

(Yes, true, retyping stuff forces you to think about it more, but I already review the things I paste into my shell)


Okay, bracketed paste does not help against attacks.


If you use the same shell interactively as you script in, then you only have to learn the one language.


Except Oil has both POSIX compatibility and Oil. It's not much different from Fish, where you can largely ignore the built-in scripting language and just write bash scripts if you'd like.


Yeah, with oil I agree. It appears that fish is sometimes incompatible, though; there's a comment down thread about 'Foo=bar baz' working differently, and I have actually used that interactively.


... and if you're going to disrupt existing shells - why would you not use JS? (I don't care for JS, but it is the lingua franca du jour)


>Shells like zsh and fish will never get mainstream adoption because they are not compatible with bash.

zsh and fish don't really belong in the same comparison IMO.

zsh is, like bash, a ksh-like shell with a Bourne-style grammar. Obviously it depends on the exact use case, but in practice, for basic scripting purposes, it is almost a super-set of bash, and it provides several emulation options (like sh_word_split) specifically designed to increase compatibility with POSIX and with bash in particular. It even provides shims to support bash completion functions. (It is fair to point out that, even with all the emulation stuff enabled, it's still not completely bash-compatible, nor POSIX-compliant. It's close enough that the changes required are usually extremely trivial, though.)
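
For the curious, the knobs in question look roughly like this in a .zshrc (hedged, from memory of the zsh docs):

    emulate sh             # POSIX-ish behaviour for the current shell
    setopt sh_word_split   # bash-style word splitting of unquoted $vars
    autoload -U bashcompinit && bashcompinit   # shim for bash completion functions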

fish on the other hand has its own completely different grammar and makes no attempt to provide POSIX/ksh/bash compatibility at all.


> Shells like zsh and fish will never get mainstream adoption because they are not compatible with bash.

Zsh isn't a new kid on the block. Both it and bash are actually about the same age: 28 years:

https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29

https://en.wikipedia.org/wiki/Z_shell


I and most other developers I know use zsh, because it's focused on features for interactive use, not scripting. Who cares about bash compatibility? Bash scripts are run by, well, /usr/bin/env bash. For DevOps/IT I could understand the concern.

e: sorry some connection interruption and now this is redundant to other comments.


As someone who now seems to send patches for gratuitous and unnecessary Bashisms in purported #!/bin/sh scripts in Free Software on a regular basis, you had me in agreement about compatibility right up to the Bash part.


If the questions asked on Unix and Linux Stack Exchange are anything to go by, the Z shell already has mainstream adoption.

But of course your argument has the fundamental flaw that the Bourne Again, Korn, and even Bourne shells were not compatible with their various predecessors, but that turned out to be not as problematic in practice as you paint it to be. And we've had all sorts of things gaining "real adoption" over the years, from Norton Commander clones to Perl. The world is nowhere near as narrow as you think.


You can always invoke a script and have it execute with the right shell using the shebang line.

https://en.wikipedia.org/wiki/Shebang_(Unix)

I used fish for years, never had a problem with bash or zsh scripts.


NodeJS is doing all right. Ducks!


Love it. I think there's plenty of room for innovation in this space. I'm currently in the process of porting a ~700 line bash script (I didn't write it) to Python. Although long-term I think this will be much better for the project, there are still tradeoffs. There are some things that are just so easy to express in shell language, like composing transformations via pipes. Sure, Python can do it, but it feels clunky in comparison to me. I would love to see a language like Python (including its package ecosystem) written with shell use as a first-class citizen.


There's definitely room for innovation - fish is for me but I'm always impressed with the work being done on both new and old shells. One thing fish doesn't do is get much into the semantics of how processes interoperate, and I'm interested to see if there's a new idea that can gain some traction in that regard.


I checked out Oil previously; it looks nice, but more like an incremental improvement over Fish/ZSH than a significant evolution (it may have changed since then, this was last year).

I'm most excited about Elvish shell and the language that's being developed around it. The shell is built with Go and feels super fast compared to my plugin-heavy ZSH. The language design is quite nice too, but still very alpha. Looking forward to seeing what it evolves into...

https://github.com/elves/elvish


FWIW, OSH is the incremental improvement, and Oil is the new language (explained in the intro to this post.)


I'm aware of the difference, but they are both very much integrated into a single UX. The language is what I'm most interested in, because writing ZSH is a giant headache; even though I've been doing it for years, it's still painful. That plus the performance of the shell itself.


@chubot: There is one use-case I come across every once in a while:

https://stackoverflow.com/questions/356100/how-to-wait-in-ba...

So whenever you want to do things in parallel there is probably a limit to the number of processes you would like to execute in parallel (e.g. the famous compiler limit formula: number of CPU cores +1). It would be great if Oil could support such a use-case out of the box, as easy parallelism without the ability to artificially limit the number of parallel executions is often useless.


Absolutely. In fact, the bash manual explicitly refers to GNU parallel for this use case!

I use xargs -P all over the Oil codebase, which does what you want. The trick is to end the file with "$@", and then invoke xargs -P 4 -- $0 my-func.

That way xargs can run arbitrary shell functions, not just something like sh -c "..." ! I'm going to write a blog post about this. I also do this with find -exec $0 myfunc ';'
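
For anyone who hasn't seen it, the whole trick is a few lines (a sketch; the function names are made up):

    #!/bin/bash
    do-one() { echo "processing $1"; }

    all() {
      # Re-invoke this same script ($0) with a function name as argv[1],
      # running up to 4 jobs at a time.
      find . -name '*.txt' -print0 | xargs -0 -n 1 -P 4 -- "$0" do-one
    }

    "$@"   # dispatch: ./script all   or   ./script do-one file.txt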

https://github.com/oilshell/oil/blob/master/test/spec-runner...

However I think Oil will have something built-in to make this parallel process more friendly. I will probably implement xargs so it can run my own shell scripts without GNU xargs, but then add a nicer syntax. (Probably "each", since that's what xargs really does.)

This gets into your other question about standard utils, which I'll answer now. Short answer: yes I would like something like that, it's just a matter of development time and priorities. I agree with the problem you point out.


Sounds pretty cool. My biggest problem with xargs is that I constantly hit weird edge cases, so I try to avoid it. As GNU parallel doesn't seem to be part of standard installations, it would be an external dependency for a script, which I also try to avoid.

So I ended up using the loop syntax:

  for i in {0..9}; do
    echo "$i" &
  done
  wait
It is not so Unix like, but I find it easier to debug. It would be great if Oil would have a solution for limiting that kind of parallel execution too. I am aware that this isn't simple as there are different options here how to implement it (global limit vs. local limit vs. named limit).

Just an idea off the top of my head for an optional named limit:

Lets call it 'flow':

  flow [options] [command]:
  -n number of max parallel processes
  -c (optional) identifier of the counter.

  Example:

  for i in {0..9}; do
    flow -c myCounter -n 4 echo "$i"
  done
Just an idea.
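
For what it's worth, something close to this is already possible in recent bash (4.3+ for wait -n), without GNU parallel or xargs; a sketch:

    max_jobs=4
    for i in {0..9}; do
      while [ "$(jobs -rp | wc -l)" -ge "$max_jobs" ]; do
        wait -n   # block until any one background job exits
      done
      echo "$i" &
    done
    wait          # collect the stragglers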


I spent the day making a virtual terminal ... I think the terminal is an unnecessary layer. All program UI is limited by this 40+ year old technology that is the terminal. Instead of making a new shell, make a shell + new user-interface.


Actually, far from all, most program UI is not so limited.

Learn from history. In the 1980s the world improved on the terminal paradigms, with TUIs that included things like directly addressable output buffers and unified and standardized keyboard/mouse input event streams. In parallel, GUIs took hold, and there are nowadays a lot of GUI programs in the world.

* https://news.ycombinator.com/item?id=16014573


I agree, but the first step is to make a shell. I imagine it will be like Vi or Emacs -- it can write to a terminal, or have its own UI.

I have been keeping a wiki page:

https://github.com/oilshell/oil/wiki/Interactive-Shell

Although honestly I won't get to any of this in the near future.


That's an interesting comment. One downside is that I believe it would necessarily be tied to a particular choice of OS, whereas with the standard separation, our shells can be cross-platform but our terminal emulators are OS-specific.


You might be interested in this: https://github.com/withoutboats/notty


https://github.com/withoutboats/notty/issues/67

Dead project according to the author, though.


A day? It would take me a month reading the kernel code just to know where to start and what to replace.


I half-heartedly implemented the "vt100" ANSI escape sequences in the most naive way possible into an existing GUI application. Did not touch any kernel code.

I think the modern "terminal" is the browser, with URLs instead of file paths. But I think it's maybe time for something new. In the 70's we got the terminal. 20 years later we got the browser. Now another 20 years have passed. What's the next step? Terminal -> Browser -> ?? -> AI?


A little tangential, but I keep wondering if Apple is developing its own shell or will adopt one with a more liberal licence such as Oil or Fish.

I mean, they can't keep using Bash 3 forever, right? (Hope)


I find it interesting that Apple actually downgraded to an older version of bash at some point to avoid the GPLv3. I reference it in a post about Open Source and business I wrote two years back:

http://penguindreams.org/blog/the-philosophy-of-open-source-...


I really don't understand why they haven't made ZSH the default. It has a much more powerful REPL and a more permissive license.


I'm not a frequent Mac user, but I wonder about the bash 3 thing too.

Maybe they just expect you to install your own shell? I think a lot of people do that with homebrew?

I think Apple has largely expunged shell scripts from the startup process with launchd too? That is like their systemd.


fish is released under GPLv2 and afaik Apple has a policy to not include any new GPL software in macOS.


I think GPLv2 is fine, GPLv3 is what stopped them from using new versions of Bash. But I'm sure they would prefer MIT, Apache, BSD, etc


Ignore the naysayers. I have encountered them when posting my shell activities. You're doing great! Can't wait to try the final release. Maybe even get it into my workflow.


So many shells, but how come the virtual terminal hasn't got a revamp or popular alternative? I'd like to see more control over keybinding (e.g. on my machine pressing ctrl+del sends the same character as pressing ctrl+backspace). I understand this would be more a kernel change than a userland one.

Screen tiling and visual 'tabs' would also be welcome additions. Not everyone needs a graphical environment, and I refuse to install X just for better keyboard shortcuts on my terminal.



dvtm does tiling and tabs

Since dvtm also works as a terminal emulator, it seems to me that you could use loadkeys to setup various keycodes to send the proper vt100 escape codes. I've not tried it, but see no reason why it shouldn't work.

If you just want "No X install" you can use a frame buffer terminal (fbterm was one I used to use, but it doesn't appear to have been updated in a while, perhaps there is a spiritual successor, or maybe it already does what you want)

[edit] YAFT https://github.com/uobikiemukot/yaft looks like it's more up to date than fbterm.


Sounds pretty cool, and everybody who had to learn bash scripting at some point understands why we need a sane language (my favorite is misplaced spaces in if statements...; Disclaimer: I do and love bash scripting, but while the language has cool concepts, some things are just broken by design).

Nevertheless, there is one piece in this puzzle I am missing. There does not seem to be a process which manages the 'core software set' across platforms. So after decades we finally have a shell which is available on most operating systems, but how long will it take before Microsoft, Apple, Oracle, etc. will adopt a new shell?

So why don't the large OS corporations form a consortium to define something like a 'cross-platform runtime environment' standard (maybe together with the Linux Foundation and some BSD guys)? I mean, it's not so much about which shell someone prefers, but more about a common set of interpreters and maybe toolkits. And even more than that, it is not about the state but the process/progress.

What do you think, do we need such a process or is there another way to solve the cross platform dilemma?


What you are describing has been around since the late 1980s, IEEE POSIX: https://en.wikipedia.org/wiki/POSIX


Well, POSIX is pretty similar to what I mean, but it has a lot of low level stuff and I doubt that Microsoft has any ambitions to transform Windows into a POSIX compatible OS.

I thought more about a higher level standard like adding Python, Lua or Qt to every installation by default. As some of those things are pretty heavy I doubt that it would be a wise choice to include them in POSIX.

Just imagine a world where you could simply write a small python script which would start a complete GUI application on different platforms without any additional installation procedures. To my knowledge that is not possible today. AFAIK the only way today is to bundle the dependencies, but that has a lot of negative effects.


> I doubt that Microsoft has any ambitions to transform Windows into a POSIX compatible OS.

Windows NT already is, and has been for many years. It has had two POSIX subsystems over the years, and then a Linux subsystem.


Isn't that what electron is popular for?


Yes, that's what everybody uses electron for, because the use-case is obviously there.

But everybody who has built both an electron app and a Qt/GTK app will agree that the toolkits available to electron apps are not as sophisticated as Qt/GTK (not talking about the ton of downsides of electron apps: huge app size, outdated versions, etc.).

I am completely pro PWA, but as far as I can see it it will take a few more years (at least) before we will get to a state which will allow us to use them in the same way we develop and use normal desktop apps nowadays.


I've been thinking about related issues.

Probably the first cut will be an "app bundle" format for Oil + busybox + arbitrary user utilities.

I'm more interested in the subset of busybox that is PORTABLE. busybox doesn't run on BSDs, because a lot of it is tied to the Linux kernel. It's more for embedded Linux.

I actually worked on the toybox project around when starting Oil (toybox is the "busybox" on Android, started by the former busybox maintainer Rob Landley.)

So I don't want to necessarily create another package manager, which is sort of implied by your question (?). For shell, the package manager is traditionally the system one -- "apt-get" on Debian, maybe homebrew on Mac, etc.

But I definitely want to solve the dependency problem, and I think the best way to do that is through some kind of support for app bundles. Of course you can also create a container image if you like.


bash is good enough for launching commands and short scripts. When you want to manipulate data that may contain special characters, or write non-trivial algorithms, it becomes insane. I think that is a feature: it indicates that bash is not the right tool for the job. bash, sed, awk are excellent tools. I know all the basic stuff about them and I know when it becomes tricky. When it becomes tricky, I switch to python or perl.


The pain of "not quite bash" ultimately is what put me off xonsh as my daily shell. Also losing it whenever I SSH'd somewhere.

The latter problem could probably be solved with a wrapper which would pipe and execute the shell (or a bytecode interpreter ala shuttle?) automatically - but I've seen no alternative shell project take this part seriously for the problem space.


> Oil is taking shell seriously as a programming language, rather than treating it as a text-based UI that can be abused to write programs.

erm, there's a big difference between a command scripting language and a programming language. These should be treated as different things.

I have years of experience using both, and I really don't want to be doing shell tasks in a programming language and I don't want to write programs in a shell language. Those sorts of hybrids are almost always mediocre. Horses for courses and all that.

There's a reason bash keeps being used - it's mature, it's simple, it's easy and people are productive with it.


This keeps showing up on the front page. No doubt it will have users.

Let's say there are two uses of a shell: 1. interactive and 2. non-interactive (scripting).

Let's imagine the commandline user is learning about her OS. She learns it is heavily reliant on shell scripts to build and (if desired) to automate starting services.

She realises that to understand the OS she will have to learn the shell that the OS developers used for scripting.

Then she realises that if she chooses another shell for interactive use, she will have to learn two shells.

Finally she realises that any script she writes in the "non-interactive/scripting" shell will also run under the interactive one. But not vice versa.

If she only has enough time in life to master one shell, which one should she choose?

Over time I found I really cared more about the scripting aspect of a shell than the interactive facet.

The scripting shell used by the OS authors might be an Almquist derived shell, for instance.

Occasionally the ash I'm using gets a new "feature" but not too often. I like that it stays relatively small. The latest "feature" is LINENO.

But I also use a smaller version of this shell with no command line history, no tabcomplete, etc. IMO, there is no better way to learn how to reduce keystrokes. It has led to some creativity in this regard for which I am thankful.

After "mastering" ash, I started using execlineb, pipeline and fdmove. I am starting to use more components of execline and am continually replacing ash scripts with execline scripts for more and more daily work.

I guess we will never see execline on the front page, which I think would be interesting because I would like to hear whatever harsh critique HN can muster.

Seeking a better non-interactive/scripting experience, I have experimented with many other shells over the years, and written simple execve "program launchers", but in this vein, I have not found anything that compares to execline.

The speed gains and resource conservation are obvious, but with the ability to do "Bernstein-chaining" and the option to use djb low-level functions instead of libc, it is a rare type of project.

The speed and cleanliness of the compilation process is, compared to all the other crud one routinely encounters in open source projects, "a thing of beauty". Humble opinion only, but I think others might agree.


It was submitted once.

* https://news.ycombinator.com/item?id=12600807

Laurent Bercot no longer has xyr page about the compilation process. I have since picked up some of the slack there. Although I don't go into things like the way that M. Bernstein avoided autotools.

* http://skarnet.org/software/compile.html

* http://jdebp.eu./FGA/slashpackage.html


Oil syntax looks pretty much like Tcl and Th[1] so the author could have probably just used Th. It has saner ways to copy arrays than

  b = [ @a ]
which pretty much looks like Perl with added line noise. Why are the [] even necessary when it's clear @a is an array?

1. http://www.sqliteconcepts.org/THManual.pdf


As a fish user, I didn't really get a clear sense of where the advantages lie. I am for developing new shells; we really need this, and innovation is important.

As for the new language, I feel like if you want to script things, you can use ruby or python; hell, perl will do and you could be fine. I don't want to be unfair to this effort, I just feel that it is not for me, and I am a tinkerer.


I applaud the idea, but you are fighting a ton of inertia. I have to wonder if drawing a clearer line for when to go to Perl, Python, Lua, Ansible, Golang, etc., might be more fruitful. Sometimes, a shell script solution is just drawing any kind of shell too far outside its core competency.



Can you show me some code? How do you write this in Perl?

    f() {
      echo --
      ls /
      echo --
    }

    f > out.txt
    f | wc -l


Here is probably the simplest answer, it's not totally correct but it's the shortest answer that fits the main criteria.

  #!/usr/bin/perl
  
  use strict;
  
  sub f
  {
    my $outputFH=shift;
  
    print $outputFH "--\n";
    open(my $lsFH,"ls /|") or die("pipe ls: $?");
    print $outputFH (<$lsFH>);
    close($lsFH);
    print $outputFH "--\n";
  }
  
  open(my $outTxtFH,">","out.txt") or die("open: out.txt:$?");
  f($outTxtFH);
  close($outTxtFH);
  
  open(my $wcFH,"|wc -l") or die("pipe wc: $?");
  f($wcFH);
  close($wcFH);


I'm not sure why you'd want a count that includes your delimiter lines nor why you'd want to run the binaries twice for that matter. Real programming languages, including Bash, have variables.

There are a number of ways to do these same things. Some of them mirror your code more closely than others. Here's my first shot using a core module, since someone already did one with no modules that works much like your code.

    use IPC::Run3;
    my @lines;

    sub f {
        my @command = qw( ls / );
        run3 \@command, \undef, \@lines;
    }

    f();
    open my $out, '>','out.txt' or warn "can't write to out.txt : $!\n";
    printf $out "--\n%s--\n", (join '', @lines);
    print scalar @lines . "\n";


Now I'd make that a bit cleaner and more reusable of course. I'd probably take the commands to run from the command line or a configuration file. I'd probably return an array or use a reference to one rather than making a file-level lexical array and just using that from a subroutine.


I tried not to make it too golfish, but dispensed with niceties such as error detection and somesuch (which aren't there in the bash version - still autodie will catch most snafus). I also joined a few lines to make it closer to what's happening in the shell version. No doubt experienced golfers could make it tighter/shorter, but methinks that's not the point of the exercise.

    #!/usr/bin/perl -w
    use strict;
    use English;    
    use autodie;
    
    $OFS=$ORS="\n";

    sub f { my $h ; opendir($h,$_[0]) ; print "--",(readdir($h)),"--"; closedir($h);}

    my $out;

    open($out,">/tmp/out.txt") ; select $out ; f("/tmp");close($out);
    open($out,"| wc -l ")      ; select $out ; f("/tmp"); close($out);

    select STDOUT;
Would I use perl/python to write this kind of stuff? 'course not. Why would I go through the opendir rigmarole, if all I really need is 'ls'. But there are zillions of (non application) tasks where bash's syntax gets very quickly unwieldy (think filenames with blanks, quoting quotes, composing pipes programmatically, having several filehandles open at once...) while perl shines. And you can still throw the occasional

@ary=split("\n",`ls`);

around if you feel so inclined.


And just to be cute, this uses 2 pipes and is shorter (but I would not write it this way).

    #!/usr/bin/perl -w
    use strict;
    use English;
    use autodie;
    
    sub f { open(my $h,"/bin/ls $_[0]|") ; print "--\n",(<$h>),"--\n";}
    
    open(my $o,">/tmp/out.txt") ; select $o ; f("/tmp") ;
    open($o,"| wc -l ")         ; select $o ; f("/tmp") ;


Assuming we have some sequence of commands whose output we want to capture and eliminating any implicit use of the shell from perl, I’d define a sub along the lines of

    sub output_of {
      my(@commands) = @_;

      my $pid = open(my $fh, "-|") // die "$0: fork: $!";
      return $fh if $pid;

      for (@commands) {
        my $grandchild = open(my $gfh, "-|") // die "$0: fork: $!";
        if ($grandchild) {
          print while <$gfh>;
          close $gfh or warn "$0: close: $!";
        }
        else {
          exec @$_ or die "$0: exec @$_: $!";
        }
      }

      exit 0; # child
    }
Call it with

    my $fh = output_of [qw( echo -- )],
                       [qw( ls   /  )],
                       [qw( echo -- )];

    while (<$fh>) {
      print "got: $_";
    }

    close $fh or warn "$0: close: $!";
If implicitly using the shell is acceptable, but we want to interpose some processing, that will resemble

    my $output = `echo -- ; ls / ; echo --` // die "$0: command failed";
    chomp $output;

    print "$0: lines = ", `echo '$output' | wc -l`;
This becomes problematic if the output from earlier commands collides with the shell’s quoting rules. This lack of “manipulexity” that we quickly bump into with shell scripts — that are otherwise great on the “whipuptitude” axis — was a common frustration before Perl. The gap between C and the shell is exactly the niche on POSIX systems that Perl occupies and was its initial motivation.

If all you want to do is redirect anyway, run

    system("{ echo -- ; ls / ; echo -- ; } > out.txt") == 0
      or die "$0: command failed";
Use the appropriate tool for the job. Perl was not designed to replace the shell but to build upon it. The shell is great for small programs with linear control flow. It’s hard to beat the shell for do-this-then-this processing. The real world likes to get more complex and nuanced and inconsistent, however.

Maybe I am missing your point entirely. Do you have a more concrete example in mind?


See this thread for the real problem: https://www.reddit.com/r/oilshell/comments/7tqs0a/why_create...

Sorry I got to this late -- I might do a blog post on it. I think your response, along with the 3 or 4 others I got, essentially proves my point: "Perl is not an acceptable shell".


You're not saying why any of the above don't solve your problem. Dismissing a solution because you refuse to understand it doesn't prove anything.


I recently discovered Fish, which is actually not that recent, but also an interesting shell.


> Shouldn't we discourage people from writing shell scripts?

...people frequently ask this? Tip #21 of "The Pragmatic Programmer" states: "Use the Power of Command Shells."


The last time an OSH article was posted, the top comment was that someone should submit a patch to Bash that makes it refuse to run scripts longer than 50 lines. It's a strong sentiment that keeps getting repeated (I don't agree with it).


Manipulating processes, exit codes and output is idiomatic in shells and that's where they shine. I totally agree with you.

In a general purpose programming language there's a lot of overhead for doing the same things.

For maintainability, there are now linters for shell languages that can help make the job easier.


> linters for shell languages

Obligatory in case anyone hasn't seen it:

https://www.shellcheck.net/

Works as a web app or local tool.


Please note that copying and pasting commands from a web browser into a terminal can result in malicious code being executed.

A good idea to double check using a text editor.

http://thejh.net/misc/website-terminal-copy-paste


Sounds like sandboxing of pasted code until confirmation would be a good feature for a new shell ...


There are terminals that do this, I believe.


thanks for this


What I would really like to see happening in Oil shell is a way to run a strict subset of the bash language. Ideally with a tool to convert existing scripts into that subset.


I would love an interactive shell that applies a type-checker to my input before running it ... is that really an infeasible desire in 2018 ...?


What kind of shell can I run on a server without filesystem access that I can open to external untrusted users?


Restricted bash? You will need the cooperation of your SSH server to fully lock it down but it gives you the tools.

https://www.gnu.org/software/bash/manual/html_node/The-Restr...
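
To illustrate, a restricted bash refuses the operations an untrusted user would need to escape (exact messages vary by version):

    $ bash --restricted
    $ cd /tmp
    bash: cd: restricted
    $ /bin/ls
    bash: /bin/ls: restricted: cannot specify `/' in command names
    $ ls > /tmp/out
    bash: /tmp/out: restricted: cannot redirect output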


What's your use case for that? If it doesn't have file system access, does that mean it can't run any programs?

I believe Oil will be able to do this, because the architecture is very modular. See the last point in the post using the LLVM / GCC analogy.

(This type of feature isn't a priority now, but I'm interested in hearing use cases.)


Why is it called Oil? That name is so loaded. I think it actually will give it less of a chance.


I explain the name here:

https://www.reddit.com/r/ProgrammingLanguages/comments/7qn14...

Bizarrely (to me), more than one person thought the name was a play on the company "Shell Oil". Is that the connotation you got from it?

That's unfortunate, but I think as people use it more, the name will take on a different connotation. Guido was fighting "Python == snake" for a long time too (it comes from Monty Python). There were a lot of people that said the name Python was stupid and you couldn't convince your boss to use a language with a name like that.


A slippery slope to a worse name? I thought it was pretty slick......

I jest.


I use fish but I hate it. My perfect shell is strictly POSIX sh but has a better interactive experience (better tab completion and typeahead like fish has).


Wasn't there an in-browser Javascript shell somewhere a while ago? It could fetch URLs and do cool stuff with APIs.


No mention of plan9 rc or Tcl so it is hard to believe it is really breaking the mould


Looks nice! Any reason why Apache was chosen as its license?


Why is Oil implemented in Python? IMO Python is a terrible language for writing programming languages.


I will address that in part 2 of the FAQ, but the short answer is:

1) I prototyped it in Python; the dependency on the Python interpreter will be removed [1]

2) Oil went through many implementation languages, and one incarnation was 2000-3000 lines of C++. But I realized I would NEVER finish that way. The goal is to be compatible with bash, which is a tall order.

3) Oil is heavily metaprogrammed. It's only 16K lines of Python, compared to 160K lines of bash, and it can run some of the most complex bash programs out there. [2]

It's more accurate to say Oil is written in Python + ASDL [3], i.e. somewhat in the style of ML.

[1] https://news.ycombinator.com/item?id=16277358

[2] http://www.oilshell.org/blog/2018/01/15.html

[3] http://www.oilshell.org/blog/tags.html?tag=ASDL#ASDL


If you expect the shell process itself to be doing a lot of CPU-bound work, then that might be a reason against using an interpreted language like python.

If you expect the shell process to need to make use of true thread-based concurrency, then that might be a reason not to use python.

Do we have either of the above expectations? What other reasons are there for python to be inappropriate?


Why?


Maybe because python is very slow?


For this to be relevant you'd have to explain that you expect the shell process itself to be doing a lot of CPU-bound work.


pypy gives decent speed, and RPython is quite impressive.

But I'm sure it is ease of prototyping and exploring the design space.


I was with the guy right up until the bile-laden PHP hate started coming in as a justification.


(author here) I didn't intend to criticize PHP, and I don't think I did.

I said that you can't convince people not to use bash or PHP by writing posts on the Internet, which is true.

I also said that Facebook is replacing PHP, which is true. That's not a criticism of PHP. The fact that huge companies like Yahoo and Facebook can be started with PHP is amazing.

I think PHP is a good analogy for bash. It gets the core things right, and it gets a ton of work done. I like languages you can get work done in! That's why I use bash.

But both languages also evolved a lot of warts. That's inevitable when you have so many users. They have diverse needs, and you need to preserve backward compatibility, which leads to an awkward evolution.


Seems an interesting idea, but it's implemented in Python, which means it will never replace bash and probably not achieve any significant adoption unless they rewrite it in Rust first (which they should have written it in to begin with since they started in 2016).

The reasons for that are that shells must start very quickly (due to subshells, local ssh, etc.), be fast, have no complex dependencies since they are used to recover broken systems, be portable but also with full support for OS semantics and be written in a language that allows rapid development of robust software, none of which Python does well.




