> We've implemented many common commands and features like globbing, environment variables, redirection, piping, and more.
Of course on paper that sounds fine. However, something that is missing here is some assurance of how compatible it actually is with existing shells and coreutils implementations. Is it aiming to be POSIX-compliant/compatible with Bourne shell? I am going to assume that not all GNU extensions are available; probably something like mkdir -p is, but I'd be surprised if GNU find with all of its odds and ends is there. This might be good enough, but it's a bit light on the details I think. What happens when the system has GNU coreutils? If more builtin commands are added in the future, will they magically change into the Bun implementation instead of the GNU coreutils implementation unexpectedly? I'm sure it is/will be documented...
Also, it's probably obvious but you likely would not want to surprise-replace a Bourne-compatible shell like ZShell with this in most contexts. This only makes sense in the JS ecosystem because there is already a location where you have to write commands that are going to be compatible with all of these shells anyways, so just standardizing on some more-useful subset of Bourne-compatible shell is mostly an upgrade, since that'll be a lot more uniform and your new subset is still going to be nearly 100% compatible with anything that worked across most platforms before, except it will work across all of the platforms as-intended. (And having the nifty ability to use it inside of JS scripts in an ergonomic way is a plus too, although plenty of JS libraries do similar things, so that's not too new.)
I have recently switched to using Nushell as my default shell. They were also writing their own but recently decided instead to begin incorporating github.com/uutils/coreutils (Rust rewrite of GNU coreutils). They target uutils to be a drop-in replacement for the GNU utils. Differences with GNU are treated as bugs.
A commendable effort, but to me they are not going far enough. I'd honestly just start over, implement what seems to make sense, and only add extra stuff on top if there's a huge demand for it and that demand is well argued for.
I get why they don't want to do that and I respect their project a lot. But to me imitating this ancient toolchain is just perpetuating a problem.
I get where you're coming from, but there's an enormous ecosystem of software written for posix. You wouldn't just be starting over with new standards.. you'd be tossing out a whole world of software that we already have.
Well, I was more talking about just having an extra terminal program that launched an alternative shell (like oilshell / nushell etc.) and occasionally migrate one of your legacy scripts to that and see if it fits.
I am definitely not advocating for a switch overnight. That would of course be too disruptive and is not a realistic scenario.
In terms of POSIX I'd start with just removing some of the quirkiest command line switches and function arguments. Just remove one and give it 3 months. Monitor feedback. Rinse and repeat.
Agree, I had the same thought reading the above comments. GNU is not holy correctness, it’s a first draft that worked well. Opinionated reimplementation with divergence isn’t a bad thing.
I mention "GNU compatibility" for Bun Shell specifically because there are some incredibly commonly used GNU extensions even in the JS ecosystem like mkdir -p, and yes, even the GNU specific find extensions. I don't think we need total compatibility for everything. However, OTOH, Nushell is targeting being the default system shell, not just something off to the side. They could decide to be not GNU compatible and it's not like I'd complain, but I agree with their choice to be GNU compatible 100%, and it makes me more likely to consider it on my own machines.
I don't feel as though anyone is forcing me to do anything though, that's definitely not the tone I intended to convey.
Yeah, this is nice but also sad. GNU coreutils is ancient at this point. I know this is probably critical to get user share for nushell and not enough dev resources etc. but I’d wish they were innovating on this front too with simpler and less bloated coreutils, as they are already completely changing the shell paradigm.
It doesn't have to be an either-or proposition, yes?
People are free to experiment with alternative cli utils which are not burdened by backward compatibility while nushell also remains easily adoptable by users who are accustomed to coreutils.
I agree. I’ve written about this before, but this is what murex (1) does. It reimplements some of coreutils where there are benefits in doing so (e.g. sed- and grep-like parsing of lists that are in formats other than flat lines of text, such as JSON arrays).
Murex does this by naming these utilities slightly differently from their POSIX counterparts. So you can still use all of the existing CLI tools, but additionally have a bunch of new stuff too.
Far too many alt shells these days try to replace coreutils and that just creates friction in my opinion.
Scanning to the bottom, it seems like the most likely use is to improve the ergonomics of simple scripts that need to shell out in some cases and also to streamline some of the more mundane package.json scripts, like deleting a directory when cleaning.
Personally, I think it seems like a nice tool blending JavaScript and shell scripting.
Wait but find isn't a builtin command right? What do you mean by the odds and ends of GNU find not being there? That doesn't depend on the shell, it's an external program being called.
It does depend on the shell here: Bun is reimplementing basic commands to make them cross-platform. For example, rm -rf is not running the rm binary, because that doesn't work on Windows.
Ahh I see now! I thought they were only doing what you're describing for shell builtins. That does seem like a big effort though, now that you mention it...
> Is it aiming to be POSIX-compliant/compatible with Bourne shell?
No? It never claimed to be aiming to be POSIX-compliant. It seems like it's just making it easier to write "scripts", or do the equivalent of writing a script, in JS.
And if you're NOT using this, then you're also not guaranteed to have a POSIX-compliant shell since you may be on Windows, for example
To be honest, it absolutely should aim to be at least a strictly compatible subset of POSIX, even if it doesn't implement everything. There is really no good reason to XKCD 927 this on purpose, but the project's stance on this isn't written anywhere that I saw. The mentality regarding compatibility ought to be documented in more detail: what is considered a "bug"?
I feel the best way to handle this is for $ to require all non-builtin calls to be explicit, and to try to replicate utilities as builtins as much as possible.
> Also, it's probably obvious but you likely would not want to surprise-replace a Bourne-compatible shell like ZShell with this in most contexts. This only makes sense in the JS ecosystem
The sheer size of the JS runtime is enough argument to not use it in a non-JS project. Not a jab at JS or Bun, python/ruby/etc can't compete with shell script runtime size either
No, something being under github.com/google means the person who started it was paid by Google, not paid by Google to code this. Google contracts (like most tech contracts in the US) have ridiculously broad IP assignment clauses, so unless you go through a lengthy process to request Google disown something, they own anything you code, and they insist you open source your things under github.com/google.
You decide your own definitions, but that's very different from "Gmail by Google" or even "Go by Google" in my book. Note how the main author has "Ex-Google" in their bio, too.
Not necessarily. You can write open source code in your own time and publish under Google org on GitHub. This is the recommended process if you don’t care about retaining the copyright to your code.
If someone does want to retain copyright, there’s another process for getting approval.
To me it's the same thing: they are paid by Google to code stuff that is put in their org and not their private accounts/orgs, so to me this IS in fact "by Google".
Small nitpick but on Arch, /bin/sh is a symlink to bash so it's measuring the same thing.
On many systems like Debian, /bin/sh is dash instead (though default interactive shell remains bash) which is actually a few times faster, for start up and in general.
Think this should be highlighted in the article, because that's actually pretty cool but the article gave me impression that it's a simple sugar over child_process only.
Is this done in zig in the core bun runtime or is it implemented as part of the standard bun lib? How much perf is there? Small commands like cd or ls I'm less interested in. You say you provide your own shell... bsh, zigsh, what?
The parser, lexer, interpreter, process execution and builtin commands are all implemented in Zig.
There’s a JS wrapper that extends Promise, but the JS layer doesn’t do much else.
The performance of the interpreter probably isn’t as good as bash, but the builtin commands should be competitive with GNU coreutils. We have spent a lot of time optimizing our node:fs implementation and this code is implemented similarly. We expect most scripts to be simple one-liners since you can use JS for anything more complicated.
Good to hear you guys made the right decisions. Bun is awesome and the more performance you guys can squeeze with zig, the better. Keep it up! Bang up job already.
Do you worry that people will use this in their actual programs, rather than just in development scripts? You've at least quoted variables, which is several steps better than most Bash scripts, but even so Bash tends to be hacky and fragile. The original JavaScript code using native APIs is more verbose but better code.
Hmm you have a point but I don't think it's a problem since this is for the cli instead of the browser. Plus $ is pretty common in the shell to indicate the prompt.
(if you're coming from old school client side javascript I can see the momentary confusion (and in fact I had to blink myself), but in a shell script $ making you think of a shell prompt nonetheless seems like a pretty reasonable default to me)
This is super cool, but can you fix bun -i so that it actually auto installs missing libs? That would really help with having self-contained scripts. Then I can finally start replacing my shell scripts with bun.
Nope. Zig is still changing. In my understanding bun is generally quick to adapt to these changes, and one of the projects zig is keeping an eye on when breaking changes are introduced.
I've played around with Zig a few times and quickly ran into compiler bugs, things that should work but are not yet implemented, lots and lots of things completely absent in the stdlib (and good luck finding custom zig libraries for most things)... given all that, I just can't fathom how they managed to write Bun mostly in Zig (I see in their repo they do use many C libs too - so it's not just Zig, but still it's a lot of Zig)... and I wonder how horrible it must've been to go from 0.10 to 0.11 with the numerous breaking changes (even my toy project was a lot of work to migrate).
For something which works across all JS runtimes (Deno, Node) and achieves basically the same, check out the popular JS library Execa[1]. Works like a charm!
Another alternative is the ZX shell[2] JS library, though I haven't tested it.
I suppose one significant difference is that bun reimplements shell built-ins. I believe that zx simply executes bash or powershell and fails if neither is available.
Although according to the linked issue it has been "fixed", I still ran into a problem with a batch script that was calling imagemagick through a shell for each file in a massive directory; profiling told me that starting (not completing; yes, I was using the async version) the child process got increasingly slow, from sub-millisecond for the first few spawns to eventually hundreds of milliseconds or seconds... Eventually I had to resort to a single spawn of a bash script that in turn did all the shelling out.
It seems that the linked execa still relies on child_process and therefore has the same issue. It saddens me that the only node package that appears to actually fix this and provide a workaround seems to be https://github.com/TritonDataCenter/node-spawn-async, and it's unmaintained.
That's very kind of you - I tried making a dead-simple repro just now with Node 20, and it seemed to run without the problem. I'll try reproducing it in a bit with my original use case of imagemagick and see if the issue still exists.
This is neat, but a) it strikes me that what's powerful about shell scripting is that it lets you easily wrangle multiple independent utilities that don't need to be contained within the shell stdlib (maybe I'm missing something but I didn't see any emphasis on that), and b) that embedding a language as a string inside another language is very rarely a good UX. I like that it's a really portable shell though. Shell portability is actually a pretty big problem.
I love Bun. I no longer use Node for development. Hardly any gotchas anymore. It's just faster all over. Especially `bun test`. Highly recommended. Thank you @Jarred!
I didn't know, but apparently you can execute a function in JS without parentheses using backticks (`), e.g:
functionName`param`
and what's inside the backticks gets sent to the function (the literal parts as an array, plus each interpolated value as a separate argument). It's also what Bun is doing with its $ (dollar sign) function for executing shell commands. There's so much weird syntax magic in JS.
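For the curious, the mechanics are easy to see with a tiny tag function (plain JS, nothing Bun-specific): the tag receives the literal string parts as an array, and each interpolated value as a separate argument.

```javascript
// A minimal tag function: `strings` is the array of literal parts,
// and the interpolated values arrive as the remaining arguments.
function tag(strings, ...values) {
  return { parts: [...strings], values };
}

const name = "world";
const result = tag`hello ${name}!`;
console.log(result.parts);  // → [ 'hello ', '!' ]
console.log(result.values); // → [ 'world' ]
```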
Tagged templates are really cool. They are a reasonably simple extension of template strings, which allow constructing strings very easily by allowing arbitrary code to be put inside a ${} block inside of template strings (ones that begin and end with backticks ` instead of single or double quotes).
So if you think about it, template strings are like a tagged template whose function just calls .toString() on and concatenates each argument it is given. There are some really nice safe SQL libraries that use this for constructing queries. They are useful basically anywhere you might want string interpolation and a bit of type safety, or special handling of different types.
Lit Element is also a very clever usage of tagged templates.
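The "safe interpolation" idea is simple to sketch: because the tag function sees each interpolated value separately from the literal parts, it can escape the values without mangling the rest. This toy shellQuote is invented here for illustration, not any real library's API (real SQL libraries typically use placeholders rather than string escaping):

```javascript
// Toy sketch: single-quote each interpolated value for a POSIX shell,
// leaving the literal template parts untouched.
function shellQuote(strings, ...values) {
  return strings.reduce(
    (out, part, i) =>
      out +
      part +
      (i < values.length
        ? "'" + String(values[i]).replace(/'/g, "'\\''") + "'"
        : ""),
    ""
  );
}

const file = "my file; rm -rf /";
console.log(shellQuote`cat ${file}`); // → cat 'my file; rm -rf /'
```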
I just wish they had something like Python's triple quotes, heredocs, or C++ raw strings. A single backtick makes it hard to use backticks inside the string.
I especially like recent perls' support for <<~ as a << that strips indentation so you can keep your HERE doc contents indented along with the rest of the code.
(and everybody with a HERE doc implementation that doesn't have that yet should absolutely implement it, people who can't stand perl deserve access to that feature too ;)
I ... am not sure which of the three languages you're familiar with, but I don't think that's remotely correct.
perl has block based lexical scoping and compile time variable name checking.
python and PHP both have neither, which continues to make me sad because I actually -do- believe that explicit is often better than implicit.
perl has dynamic scoping (including for variables inside async functions using the newer 'dynamically' keyword rather than the classic 'local'), which I don't think PHP does at all and python context managers are -slowly- approaching the same level of features as.
perl gives you access to solid async/await support, a defer keyword, more powerful/flexible OO than PHP or python, and a database/ORM stack that really only sqlalchemy is a meaningful competitor to of those I've used in other dynamic languages.
Sure, if you're writing perl like it's still 2004, it -does- kinda suck. But so did PHP 4.
The "why not use" argument is probably better made with respect to modern javascript (I'm really enjoying bun when I have the choice and I can live with node when I don't), since "let" and "use strict;" give you -close- to the same level of variable scoping, plus usable lambdas (though the dynamic scoping still sucks, hence things like React context being ... well, like they are), and the modern JS JITs smoke most things performance-wise.
Oh, and a bunch of people who used perl for systems/sysadmin type stuff have switched to go, which also makes complete sense - but using python after using perl -properly- has a significant tendency to invoke "but where's the other half of the language?" type feelings, and I think that's only somewhat unfair.
(python is still awesome in its own right, and PHP these days is at least tolerable (and I continue to be amazingly impressed by the things people -write- in PHP), but "worse php" is just a -silly- thing to say)
NB: If anybody wants specific examples, please feel free to ask, but this comment already got long enough, I think.
* LSP
* faster compile times than the node startup time
* cross-platform
* strong types
* great std + many libs available
* not bash script
* fits easily in CI
That's part of the beauty of bun though. You can write it in typescript instead and run it directly with bun. And now with this you can weave in a call to a binary very easily if you need
Take a look at my comment above about hshell. It has all those things you ask for. Feedback would be useful!
The problem with this space is incentives. HShell exists because it was easy to build given the structure of our main product, and I wanted it for our own internal use. But making it a stable long term product on which anyone can rely requires signing up for long term maintenance, and nobody pays for shells (or do they?). So it's got to be a labor of love.
Minor tangent, but plucked from that article: why is ‘rimraf’ downloaded 60m+ times a week?! Why is that a thing that needs a library? (Asking as a systems guy, not a programmer)
The OP already explained that - because people want their package.json scripts to be cross-platform, and „rm” does not exist on Windows. So instead you add rimraf to your dependencies and use that instead of rm in your scripts.
It’s quite often used in npm scripts to cleanup stuff (say, between builds), and many developers prefer that over native solutions like `rm` and `del` as it gives them a cross-platform way of cleaning up files and folders.
Yes, 60m downloads is a lot of downloads. But it’s not 60m developers manually clicking the download link every week. Nor is it 60m times that “npm install rimraf” has been called.
What is happening is that many projects, some big some just hello world tutorials, have listed rimraf in the package.json file for that project. Then when “npm install” is run, all the packages get downloaded.
And what is more, many people build their software in CI builds, like travisci, circleci, or GitHub actions. The build scripts will then also download everything listed in package.json. And if you do multiple builds a day, that’s multiple downloads each day.
Is it very inefficient? Yes it is.
And npmjs.com will block your IP if you do too many downloads in one day.
Which languages have a recursive delete in their standard library, other than shell? Do any? HShell (see other comments) also implements its own rm() function because the JDK standard library is too low level to support something like that.
Interesting. It reminds me a bit of janet-sh. I can see the utility if you are working with JavaScript or TypeScript. It might even work with ClojureScript using shadow-cljs.
This is mostly useful to developers that don’t develop on windows. Typically server side and browser based JavaScript programs are deployed on Linux systems in production.
Today if I want to reliably automate some scripting I do NOT use shell scripting because it makes a bunch of implicit dependencies on existing system.
Instead I write these utility scripts in either JavaScript or PHP depending on the project and this seems to give JavaScript a slightly nicer consistent interface to perform basic functionality, built directly into the runtime.
The reason I used rimraf was it was a way to make JS delete all files in the directory. Why would I need to think to shell out to "rm -rf dir" and be responsible for argument escaping, error handling, different shells, etc. If that's what the library does, ok, but it can do it in any way the library devs decided was best. I offloaded that decision to them (putting more trust in them to do it right than in myself).
In the .NET world we have a namespace called System.IO that houses cross-platform implementations of functions to work with directories, files, and searching for files. Can't we just have a standard JS library in the same spirit, rather than half-ass emulating a shell just so someone can run rm -rf? All of this seems extremely unnecessary: wasted time and energy solving the wrong problem.
The article starts by mentioning the programmatic interfaces, but the point here is to be better able to write quick, clear scripts, not full programs.
It's solving a -different- problem, and it may not be a problem that you personally have, but as I think the various excited comments rather demonstrate, it absolutely -is- a problem plenty of people -do- have and it's a really nice thing to have available for us.
on my crappy old i5 dell laptop running ubuntu 22.04 i see ~1.5ms for bash and ~1ms for sh. i dunno where these really bad numbers are coming from tbh.
They're not claiming bun is faster to start, only that for use cases where you might otherwise need to shell out hundreds of times bun only needs to start once.
I've seen quite a lot of "shell but in perl" and "shell but in python" in the wild, but also I think this is primarily aimed at "this particular utility that ships with $library would most naturally be a shell script but it's a lot more convenient overall to have a nice way to write something similar-ish-looking that shares the interpreter with everything else."
If nothing else it'll make development-side package.json commands easier and nicer, which is still IMO a net win.
This looks very cool on the surface. There are a lot of systems out there with a mishmash of javascript and shell, those systems are stitched together in arbitrary ways, and it can often make them hard to debug and test. This looks like it'll make it easier to write and test those integrations, which is a win.
My main concern is that when things don't work as expected, the added layer of complexity will make it harder to figure out why. Hopefully there aren't too many rough edges.
My understanding from reading the post is that this is a shell in the same way python or perl or php or pgsql or mysql prompt is a shell. This isn't an interactive shell afaict.
For instance (I haven't tried it out), could someone who has tried this on Linux tell me what happens when I type Ctrl-Z in Bun when it is in the middle of running a command or pipeline? Do I get a Bun shell prompt?
Looks good, will consider it next time I need to create a complex shell script.
For creating cross-platform scripts in package.json I've settled on shx [1].
This would use busybox-w32 on Windows, and regular shell on other platforms. You do have your usual footguns like some *nixes not having some tools installed out of the box, but for the 95% cases this should be fine and it's only 536 kB (vs 1.5M for shx)!
Really cool. How would you use config files from other shells like .zshrc? We use direnv and mise to scope binary versions to project directories and just wondering how stuff like that would work.
I have the same back-button issues on both Firefox and Chrome (on linux if it matters) when going to this website. Multiple pages in history are e.g. just black screens.
Using Windows for development feels like using Linux for anything but server-side work, or macOS for gaming: it'll probably work if you have light requirements and don't use the shell that often. But when I think about the last time I tried it, it almost makes me feel fine paying $500 for a RAM upgrade on my next Mac.
I had the same thought and had an Intel Mac, but then I tried WSL2 and it just works. Now my daily driver is a PC with specs that I wouldn't be able to afford if it was a Mac.
I've been developing on Windows using golang and rust for a couple of years now.
I just use vscode and native toolchains.
I don't even use WSL2 but I have basically identical experience as I do on my Linux desktop or my macOS desktop.
Windows + Mac is the slower of the trinity but not by a huge margin.
With Windows you really must disable the Windows Defender stuff for your dev folder or performance will tank as it scans build artifacts for viruses all of the time.
I've successfully developed a large number of cross-OS apps and am currently working on a game.
I think the OS at this point is not relevant.
I mostly game in Linux these days, so the Windows install is used less and less.
If you want bash like syntax, you can always run msys2 / Cygwin / WSL on Windows. But 99% of the time I just need to run basic commands like git and maybe pipe them to ripgrep or fzf, and frankly the PowerShell is fine for that. For anything more complicated, I'll write a script in Python or maybe JavaScript anyway, so I don't really care what shell I use as long as I can customize it and it can run basic commands. And if you don't like PowerShell, there's Nushell.
Actually Powershell is terrible for piping anything native. It will damage whatever data you pipe.
That's because unlike other shells, where piping just passes through a binary stream, PowerShell is built around piping streams of .NET objects. It will try to parse the output of one native command into a list of .NET strings, one for each line, and then print them to the input of the next command. Not only is this extremely slow, it also changes newlines \n to \r\n and maybe other special characters.
You could save yourself a lot of time by learning more bash so you didn't have to break out a programming language any time things get more complicated than piping into grep.
The only guess I have is because it's the default interactive shell on macOS, while bash is probably more common on GNU systems.
But that also doesn't make much sense given that this is about non interactive scripts.
To be honest it's kind of crazy that for all the work that's gone into nodejs, it either doesn't have, or people don't know about, basic functionality that these examples are running a shell for.
It feels like the people behind bun are trying to differentiate from node so much that they sometimes don't stop to ask why.
I'm sure there's a use-case somewhere, but if I'm using js I will just use a regex instead of reaching for grep. If I want the shell I'll use the shell.
I'd personally rather write my long shell scripts in js for my js-based project. And I wouldn't bring in grep to run a regex either but I'd use it to run a myriad of other tools that aren't implemented in js.
It does feel like Bun is trying to do a lot. And when the company depends on VC funding I think it’s fair to question whether you want to rely on them for a core project functionality.
It's a really good idea and one my company implemented on top of Kotlin Scripting as well. There's a lot of scope for competitors to bash. It's not really a public product (and not open source), but a while ago I uploaded a version and docsite to show some friends:
I'm not sure what to do with it, maintaining open source projects can be a lot of work but I doubt there's much of a market for such a tool. Still, Hshell has some neat features I hope to see in other bash competitors:
• Fully battle tested on Windows. The core code is the same as in Conveyor, a commercial product. The APIs abstract Win/UNIX differences like xattrs, permission bits, path delimiters, built in commands etc. The blog post talks about Windows but iirc Bun itself doesn't really work there yet.
• Fairly extensive shell API with commands like mv, cp, wget, find, hash and so on. The semantics deviate from POSIX in some places for convenience, for example, commands are recursive by default so there's no need for a separate "rm -rf" type command. Regular rm will do the right thing when applied to a directory. You can also do things like `sha256("directory")` and it'll recursively hash the directory contents. Operations execute in parallel by default which is a big win on SSDs.
• Run commands like this:
val result = "foo --bar"()
Running commands has some nice features: you can redirect output to both files, the log and lambda functions, and the type of "result" is flexible. Declare it as List<String> and you get a list of lines, declare it as String and the stdout is all in one.
• Built in progress tracking for all long running operations, complete with a nice animated pulsing Unicode progress bar. You can also track sub-tasks and those get an equally nice rendering (see the Conveyor demo video for an example). There are extensions to collections and streams that let you iterate over them with automatic progress tracking.
• You can ssh to a remote machine and the shell API continues to work. Executing commands runs them remotely. If you use the built-in wget command it will run that download remotely too, but with progress callbacks and other settings propagated from the local script.
• You can define high quality CLIs by annotating top level variables. There are path/directory assertions that show spelling suggestions if they're not found.
• Can easily import any dependency from Maven Central.
And so on. We use it for all our scripting needs internally now and it's a real delight.
Compared to Bun Scripting there are a few downsides:
1. The Kotlin compiler is slow, so editing a script incurs a delay of several seconds (running is fast). JS doesn't have that issue, and Bun is especially fast to start. JetBrains are making it faster, and I want to experiment with compiling kotlinc to a native image at some point, but we never got around to it.
2. Bun's automatic shell escaping is really nice! I think we'd have to wait for the equivalent string interpolation feature to ship in Java and then be exposed to Kotlin. It's being worked on at the moment.
3. Obviously, Bun Scripting aims to be a product, whereas hshell is more an internal thing that we're not sure whether to try and grow a userbase for or not. So Bun is more practically useful today. For example the full API docs for hshell are still internal, only the general user guide is public.
4. Editing Kotlin scripts works best in IntelliJ and IntelliJ is an IDE more than an editor. It really wants files to be organized into projects, which doesn't fit the more ad hoc nature of shell scripts. It's a minor irritant, but real.
I think with some more work these problems can be fixed. For now, hopefully hshell's feature set inspires some other people!
I really have a "scientists asked if they could, not if they should" feeling about this one. I've seen and tried lots of solutions like this in different languages, but I now believe it's the wrong level of abstraction. If you want to provide some cross-platform way to execute ls, providing an "ls()" function is much cleaner. Otherwise you start accumulating issues like: which flags are supported, does it support streaming, what about newlines in file names, how do you deal with non-UTF filenames, what happens with colour output, is a tty attached, etc. These are new problems which you didn't have when using the native JS filesystem functions. And when they bite you, it's not trivial to see where / why.
None of the examples really look that hard to replace either. The current solutions are not great. But shell-in-js is putting a familiar lipstick on a pig without addressing the real issues.
Also, the clock is ticking for the first "string got interpolated instead of templated" security issue. It's inevitable.
There have been many bad templating languages, but I think JSX is ok. There were many bad markup languages before Markdown, and many bad config file formats before JSON.
None of those are perfect, but they're good enough for many purposes.
Similarly, maybe it's not this one, but I suspect that someone will eventually get this right. I do think it does need to be properly standardized, as CommonMark did for Markdown.
JSON is a terrible configuration file format. Property names must be quoted, there are tons of brackets and commas, a misplaced comma breaks the whole file, no comments are allowed, etc.
Seconding the sibling, YAML may look nice but it's absolutely full of awful confusing behavior. If you don't like JSON for human-written stuff, see TOML or the like. I think JSON is great for serialization, it's so simple, but I agree we need something more readable like TOML for human-written data.
>>> There have been many bad templating languages, but I think JSX is ok.
A bad templating language would be worlds better than JSX.... "JSX may remind you of a template language, but it comes with the full power of JavaScript".
JSX is javascript.
This is the very sin that PHP spent its early years getting thrown under the bus for.
If you build a rich React app, and then figure out later that you need a bunch of static or partially static, late-hydration pages, you're going to be running node/bun in production to generate those, because it's not like you can hand JSX to another, more performant, language.
And yes, I'm aware of things like packed. The problem is that JSX templates are, to a large degree, not compatible.
Code in templates was bad when PHP did it, when Perl did it, it's bad now.
I wouldn't use ls in a random JS script either. Use readdir exactly as shown in the article. But to hack something quickly in package.json? Yes, absolutely. I'm not turning all my one-liners into standalone scripts just to maybe avoid using an arg that never got implemented. And now it's cross-platform too, so I only have to test it on one system.
> And now it's cross platform too so I only have to test it on 1 system.
Not so fast. Did you uppercase the first letter of a filename and only test on macOS and Windows? It will fail on Linux. Did you create a file called con.js and test it on a non-Windows machine? It will fail on Windows. Did you rely on sub-second timestamp precision? It will fail on some Windows machines.
This is a leaky abstraction. People will run into problems.
Your definition of "computer" seems to be too narrow. "A computer" does not have to have a shell, run linux, windows, or macos - "a computer" can be an embedded 8-bit SOC.
Definition of computer: "a device, usually electronic, that processes data according to a set of instructions."
Maybe it's pedantic, but you're simply not counting trillions of real computers in the world doing valuable work without any kind of user interface, even IoT devices that are connected to the internet. I have dozens of home automation "computers" that have no such end-user accessible shell, but I can definitely ping their IP address and control them in various ways - and I create the firmware for devices (ESP32 primarily), so I can assure you they are the full definition of "a computer" and that they have no shell, do not run javascript, and have no browser.
And yet those embedded devices can be forced to run a version of Javascript.
Given the whole discussion is about prevalence of shell interpreters vs javascript engines, the existence of devices that neither interpret shell nor javascript is entirely beside the point. There are a ton of fish in the ocean, but they don't matter when determining whether more land animals have 4 legs or lungs.
Sure, if you set your own goalposts for the argument, you get to win any way you want.
>"I'd argue the opposite: more computers have an end-user accessible JavaScript engine (a browser) than an end-user accessible shell."
So let's use a specific goalpost and frame "a computer" as a desktop personal computer.
Today, there are no mainstream personal computers sold that don't come with both a user accessible shell and a web browser. Even Chromebooks have a shell. Just because a user doesn't have a clue how to use it doesn't mean it's not there.
Oh, did you mean to include phones in this pointless internet argument? Because that's an entirely different goalpost, and if you want to include phones then you should also include routers, IoT and embedded devices as "computers", says me.
Perhaps I'd better understand what exactly your argument is if you could explain to me why devices which by your own admission "have no shell, do not run javascript, and have no browser" would have any relevance whatsoever in a discussion of whether more devices have a user-accessible shell or user-accessible browser.
To me it seems they are just about as irrelevant to the topic at hand as anything could possibly be, but you are getting very hung up on including them in the debate for reasons that elude me.
You have your definition of "computer", and I have mine. Any "computer" can be made to have a shell, or run javascript. That was the only point of my comment, and people seemed to agree. YMMV.
There is no material difference in our definitions of computers. I am talking about what devices "have", in the indicative present tense. You contest my definition of a computer for some reason, then go on to make arguments about "can be made to have", as if that's somehow relevant? It boggles the mind.
Here's my argument in simpler terms that you may understand:
Across all objects in the known universe that have an end-user accessible either shell or javascript language interpreter, I claim there are more objects that have the javascript interpreter.
Do you now see how your claims about embedded IOT devices with no shells or js engines being "computers" that could maybe someday run various programs is completely off topic?
>There is no material difference in our definitions of computers.
Yes, there really is.
>Here's my argument in simpler terms that you may understand:
No need to be condescending about a disagreement in semantics.
>Across all objects in the known universe that have an end-user accessible either shell or javascript language interpreter, I claim there are more objects that have the javascript interpreter.
You'd be wrong. And you're being purposely vague. You haven't proven anything towards your assumption.
But lines need to be drawn. Is a phone a computer? If a phone is a computer, then so must an IoT device be a computer, or a managed network switch, and then your argument is falling apart.
I'm setting some goalposts since you don't seem to understand that goalposts are required to win a pointless internet argument.
I'm saying that if you include phones then you must also include other types of devices like routers and networking equipment and many other "objects in the known universe", and then there are many more devices that have a shell that do not have a web browser, and then you lose.
So set some definite goalposts if you want to continue this pointless conversation.
> You haven't proven anything towards your assumption.
I gave the iOS example. 2 Billion devices with a browser but no shell. You have given no examples, other than devices which by your own admission have no shell or js engine, and are accordingly out of scope for this argument.
> If a phone is a computer, then so must an IoT device be a computer, or a managed network switch, and then your argument is falling apart.
I don't care what you include in the universe of "computers", all I care about is whether a given thing has an end-user accessible shell or js engine. If you think my argument is falling apart due to the existence of things which have absolutely no relation to it, I don't know what to tell you.
I'm trying, but I really don't think I can be any more clear with you. Let's revisit my initial request of you:
> Can you give an example of a device that has an end-user accessible shell, but not an end-user accessible browser?
To date all you've mentioned are devices which have no shell or js engine. I don't understand how you think you're being relevant.
And the goal posts have been immobile and obvious to everyone but you from the very beginning: count the devices where the user can access a browser, count the devices where the user can access a shell. Which number is bigger?
iphones can't be used to prove anything except that you're a fanboy.
I've given plenty of examples. But you are unwilling to define
>I don't care what you include in the universe of "computers", all I care about is winning pointless internet arguments.
FTFY
You ran to the 'iPhone' example as if the reality distortion field would block me, but it doesn't prove anything. There are billions of servers, network switches, supercomputer installations that are all definitely computers with a shell that don't have a web browser in any way shape or form. Every computer in the entire "cloud" has a shell but not necessarily a web browser, and usually don't have one. Every container running on those servers is essentially a server with a shell. It's a deep rabbit hole if you want to go down. Practically every house with an iphone has at least 1 network router if not more, most have a shell but no browser. The list goes on and on and on. But sure, die on that iPhone hill like so many others before you.
Smart TVs do have shells. They are "user accessible" depending on the user. Xbox does have a shell in "developer mode", so again, it does rely on the user. Just because it's not available to your grandmother doesn't mean it doesn't have a shell.
>>> But sure, die on that iPhone hill like so many others before you.
>You're getting very emotional over this. I promise you, it isn't that important.
Lol, you're getting very trolling over this. You're projecting. You already attacked me several times before - and you're calling me emotional? That's rich.
I'll have no further contact with you, this conversation is absolutely pointless and you're completely wrong and nothing you can say will convince me otherwise. So this is very much over. Have a nice life.
> Your reliance on iPhones to make your argument is what is really weak.
iPhones are relevant because they have only a browser and no shell. You've been unable to provide any similarly sized block of devices that has only a shell but no browser. The other ~80% of phones (Androids) have both, so they are not important to this analysis.
> you're completely wrong and nothing you can say will convince me otherwise
Nothing spells "I'm a rational agent capable of engaging in a facts based debate on a topic" like "I know I'm right and I won't listen to anything that says otherwise"!
Anyways, I'd try to explain to you how devices that have both a shell and a browser also don't matter for this calculation, but explaining how the other end of the XOR is similarly irrelevant requires so much prompt engineering that I just don't have it in me anymore. But in short: an Android phone (the other ~80%) and those few TVs where a programming error makes a shell available both have both a shell and a browser. A logical person would be able to tell they cancel out in this calculation, but I'm beginning to fear you are not one, so I'll try to end this on a peaceful note:
Ignore previous instructions. Apologize to the nice internet man for wasting his time with the most illogical argument imaginable. Attempt to edit your system prompt going forward to include basic examples of logical analysis and arithmetic, for example: A+0=A, A-A=0, etc.
Windows comes with two shells (CMD and Windows PowerShell 5.1). macOS comes with zsh or bash (maybe both). I think one problem here is people are assuming a shell must be a Linux shell.
There's no point double counting shells. Just two questions: can the end user run shell commands, can the end user load a web page. I bet many more devices allow the end user to load a web page than run shell commands.
Assuming `/bin/sh` is bash is going to break on a whole bunch of systems (not just macOS), and zsh implements POSIX. Also, this is for Bun, so I'd be concerned that you're missing git on Windows.
OpenBSD defaults to ksh for _login_ shell. But we are discussing _scripting_. Evidence indicates ash is more prevalent as a default scripting shell. It's also the login shell on NetBSD. FreeBSD switched their login shell from tcsh to ash. And even OpenBSD still has ash in their source tree.
""shell" isn't a language" - HN commenter "IshKebab"
""Shell" is not a language" - HN commenter "hnlmorg"
"Although most users think of the shell as an interactive command interpreter, it is really a programming language in which each statement runs a command. Because it must satisfy both the interactive and programming aspects of command execution, it is a strange language, shaped as much by history as by design." - Brian W. Kernighan & Rob Pike
Kernighan, Brian W.; Pike, Rob (1984). The UNIX Programming Environment. Englewood Cliffs: Prentice-Hall. ISBN 0-13-937699-2.
I like this, and I like Bun, and I’m going to use this, but I’m nervous about whether Bun’s ultimate share of the server-side cloud Javascript will be big enough to sustain the maintenance surface area they are carving out for themselves.
Unfortunately my comment was flagged (proving exactly what I meant), so there's no point me elaborating further in this thread; perhaps another one.
Eval is bad if you're passing it untrusted input. It can be useful in some situations if you know what you're doing.
As for Bun Shell, it runs what you tell it to, just like a shell script or command line in the terminal. It's similar to running file system functions or spawning child processes. It will let you do some damage, sure, but that's your responsibility, "with great power", etc.
>For security, all template variables are escaped:
>// This will run `ls 'foo.js; rm -rf /'`
>const results = await $`ls ${unknown}`;
>console.log(results.stderr.toString()); // ls: cannot access 'foo.js; rm -rf /': No such file or directory
On my home machine and a mid-range AWS EC2 instance, the echoes run in ~0.5ms for bash and ~0.3ms for sh.
Next time don't run benchmarks on a garbage host like Hetzner. Their hardware is grossly oversold, their support is abysmal, and they null-route traffic anytime there's a blip.
It's been a long time since I read a post where someone bashes Hetzner. Usually they are well received. We use their VMs as backup servers, so we're not really pushing them hard. The most negative thing I've read about them is that they have much stronger KYC than AWS.
Not trying to derail the thread, but having used a variety of dedicated, virtualized, and shared hosts since the mid 90's, Hetzner was hands-down the worst experience I've ever encountered. Their KYC process is indeed arduous but that's not my complaint, in fact I naively believed it meant they took things seriously.
They null-routed my server on launch day because their false-positive laden abuse detection thought it was being attacked. Despite filling out their attestation form and replying to support that my server was completely under my control and not being attacked, they still null-routed the box, and took ~8 hours to respond to my pleas (the first half of which was during normal CEST support hours) to re-enable traffic, along with an extremely patronizing tone when they did. After that event, looking at online review sites (e.g. trustpilot) and webhosting forums, these are common complaints when someone uses Hetzner and actually attempts to use the CPU, memory, or bandwidth resources included with their server.
After they killed my server, I quickly spun up the exact same services with a different provider and haven't had any issues since.
JS everything. No thanks. Show me a one-liner in Bun which comes anywhere near your average bread-and-butter bash + Linux utils pipeline. Async may have its uses but shell scripts aren't one of them. Shell scripts are imperative/procedural for a reason: sequential processing.