The Bun Shell (bun.sh)
396 points by pixelmonk on Jan 20, 2024 | 225 comments



> We've implemented many common commands and features like globbing, environment variables, redirection, piping, and more.

Of course on paper that sounds fine. However, something that is missing from here is some assurance of how compatible it actually is with existing shells and coreutils implementations. Is it aiming to be POSIX-compliant/compatible with Bourne shell? I am going to assume that not all GNU extensions are available; probably something like mkdir -p is, but I'd be surprised if GNU find with all of its odds and ends is there. This might be good enough, but it's a bit light on the details, I think. What happens when the system has GNU coreutils? If more builtin commands are added in the future, will they unexpectedly change into the Bun implementation instead of the GNU coreutils implementation? I'm sure it is/will be documented...

Also, it's probably obvious but you likely would not want to surprise-replace a Bourne-compatible shell like ZShell with this in most contexts. This only makes sense in the JS ecosystem because there is already a place where you have to write commands that are compatible with all of these shells anyway, so standardizing on a more useful subset of Bourne-compatible shell is mostly an upgrade: it is a lot more uniform, and the new subset is still nearly 100% compatible with anything that already worked across most platforms, except that it now works across all of the platforms as intended. (And having the nifty ability to use it inside of JS scripts in an ergonomic way is a plus too, although plenty of JS libraries do similar things, so that's not too new.)


I have recently switched to using Nushell as my default shell. They were also writing their own but recently decided instead to begin incorporating github.com/uutils/coreutils (Rust rewrite of GNU coreutils). They target uutils to be a drop-in replacement for the GNU utils. Differences with GNU are treated as bugs.


A commendable effort, but to me they are not going far enough. I'd honestly just start over, implement what seems to make sense, and only add extra stuff on top if there's a huge demand for it and that demand is well argued for.

I get why they don't want to do that and I respect their project a lot. But to me imitating this ancient toolchain is just perpetuating a problem.


I get where you're coming from, but there's an enormous ecosystem of software written for POSIX. You wouldn't just be starting over with new standards; you'd be tossing out a whole world of software that we already have.


Well, I was more talking about just having an extra terminal program that launches an alternative shell (like oilshell / nushell, etc.) and occasionally migrating one of your legacy scripts to it to see if it fits.

I am definitely not advocating for a switch overnight. That would of course be too disruptive and is not a realistic scenario.

In terms of POSIX I'd start with just removing some of the quirkiest command line switches and function arguments. Just remove one and give it 3 months. Monitor feedback. Rinse and repeat.

That's what I would do.


They can bait first and switch later.


Agree, I had the same thought reading the above comments. GNU is not holy correctness, it’s a first draft that worked well. Opinionated reimplementation with divergence isn’t a bad thing.


Trust me, if we were all starting from scratch, I would agree. However, I am not ready to drop compatibility with GNU coreutils at the moment.


Nobody is forcing you to. We can have alternative stacks for as long as we like. Any new stack is strictly opt-in.


I mention "GNU compatibility" for Bun Shell specifically because there are some incredibly commonly used GNU extensions even in the JS ecosystem, like mkdir -p, and yes, even the GNU-specific find extensions. I don't think we need total compatibility for everything. However, OTOH, Nushell is targeting being the default system shell, not just something off to the side. They could decide not to be GNU compatible and it's not like I'd complain, but I agree with their choice to be GNU compatible 100%, and it makes me more likely to consider it on my own machines.

I don't feel as though anyone is forcing me to do anything though, that's definitely not the tone I intended to convey.


I mean, there are certain projects that do that, e.g. I'd consider ripgrep to be "grep but done right"


Yeah, this is nice but also sad. GNU coreutils is ancient at this point. I know this is probably critical to get user share for nushell, and there aren't enough dev resources, etc., but I wish they were innovating on this front too, with simpler and less bloated coreutils, as they are already completely changing the shell paradigm.


It doesn't have to be an either-or proposition, yes?

People are free to experiment with alternative cli utils which are not burdened by backward compatibility while nushell also remains easily adoptable by users who are accustomed to coreutils.


I agree. I’ve written about this before, but this is what murex (1) does. It reimplements some of coreutils where there are benefits in doing so (e.g. sed- and grep-like parsing of lists that are in formats other than flat lines of text, such as JSON arrays).

Murex does this by having these utilities named slightly differently from their POSIX counterparts. So you can use all of the existing CLI tools completely, but additionally have a bunch of new stuff too.

Far too many alt shells these days try to replace coreutils and that just creates friction in my opinion.

1. https://murex.rocks


Well yeah, but there’s always something to forcing people to move on to the next thing. I know I’m asking for too much.


I agree. I also increasingly find myself using bat, ripgrep, eza etc even with zsh.


Scanning to the bottom, it seems like the most likely use is to improve the ergonomics of simple scripts that need to shell out in some cases and also to streamline some of the more mundane package.json scripts, like deleting a directory when cleaning.

Personally, I think it seems like a nice tool blending JavaScript and shell scripting.


Wait but find isn't a builtin command right? What do you mean by the odds and ends of GNU find not being there? That doesn't depend on the shell, it's an external program being called.


It does depend on the shell here: Bun is reimplementing basic commands to make them cross-platform. For example, rm -rf is not running the rm binary, because that doesn't work on Windows.


Ahh I see now! I thought they were only doing what you're describing for shell builtins. That does seem like a big effort though, now that you mention it...


> Is it aiming to be POSIX-compliant/compatible with Bourne shell?

No? It never claimed to be aiming to be POSIX-compliant. It seems like it's just making it easier to write "scripts", or do the equivalent of writing a script, in JS.

And if you're NOT using this, then you're also not guaranteed to have a POSIX-compliant shell since you may be on Windows, for example


To be honest, it absolutely should aim to be at least a strictly compatible subset of POSIX, even if it doesn't implement everything. There is really no good reason to XKCD-927 this on purpose, but the stance on this isn't written down anywhere that I saw. I think the approach to compatibility ought to be documented in more detail. What is considered a "bug"?


Silence is equivalent to a “no” (posix comp.). That’s documentation enough.


I brought up a similar question in this thread: https://twitter.com/DanielHoffmann_/status/17494206150296576...

I feel the best way to handle this is for $ to require all non-builtin calls to be explicit and to try to replicate utilities as builtins as much as possible.

> Also, it's probably obvious but you likely would not want to surprise-replace a Bourne-compatible shell like ZShell with this in most contexts. This only makes sense in the JS ecosystem

The sheer size of the JS runtime is enough of an argument not to use it in a non-JS project. Not a jab at JS or Bun; Python/Ruby/etc. can't compete with shell scripts on runtime size either.


Love that bun just implements anything that could be useful.

They are busy building useful stuff whilst others pontificate about what they should/shouldn’t build


Seriously. Like it’s just one continuous hack week over there.


This looks exactly like zx by Google. And that's probably a good thing.

https://github.com/google/zx



Being in the Google GitHub org doesn't mean "by Google", it means "by someone who works at Google."


But doesn't he get paid by Google to code this?


No, something being under github.com/google means the person who started it was paid by Google, not paid by Google to code this. Google contracts (like most tech contracts in the US) have ridiculously broad IP assignment clauses, so unless you go through a lengthy process to request Google disown something, they own anything you code, and they insist you open source your things under github.com/google.

You decide your own definitions, but that's very different from "Gmail by Google" or even "Go by Google" in my book. Note how the main author has "Ex-Google" in their bio, too.


Not necessarily. You can write open source code in your own time and publish under Google org on GitHub. This is the recommended process if you don’t care about retaining the copyright to your code.

If someone does want to retain copyright, there’s another process for getting approval.


They do; otherwise it wouldn't be in that repo.


To me it's the same thing: they are paid by Google to code stuff that is put in Google's org and not in their private accounts/orgs, so to me this IS in fact "by Google".


> $ hyperfine --warmup 3 'bash -c "echo hello"' 'sh -c "echo hello"' -N

Small nitpick but on Arch, /bin/sh is a symlink to bash so it's measuring the same thing.

On many systems like Debian, /bin/sh is dash instead (though the default interactive shell remains bash), which is actually a few times faster, both at startup and in general.


I work on Bun - happy to answer any questions/feedback


Jarred thank you for making the ecosystem better in your own special way like nobody else does


How do you ensure cross platform compatibility under the hood?


We implement a handful of the most common commands like cd, rm, ls, which, pwd, mv. Instead of using the system-provided ones, it uses ours.

Unlike zx/execa, we have our own shell instead of relying on a system-installed one.
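
Roughly, usage looks like this (a minimal sketch based on the announcement; exact details per the Bun docs):

    // Minimal sketch of the Bun Shell API described in the post (illustrative).
    import { $ } from "bun";
    // "rm" here is Bun's own built-in implementation, so it also works on Windows.
    await $`rm -rf dist`;
    // Interpolated values are escaped rather than pasted in as raw shell text.
    const dir = "some dir with spaces";
    await $`ls ${dir}`;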


I think this should be highlighted in the article, because that's actually pretty cool, but the article gave me the impression that it's just simple sugar over child_process.


I had the exact opposite impression, FWIW. I understood implicitly (assumed?) that it was implementing its own commands


Is this done in zig in the core bun runtime or is it implemented as part of the standard bun lib? How much perf is there? Small commands like cd or ls I'm less interested in. You say you provide your own shell... bsh, zigsh, what?


It’s nearly all in Zig.

The parser, lexer, interpreter, process execution and builtin commands are all implemented in Zig.

There’s a JS wrapper that extends Promise but JS doesn’t do much else.

The performance of the interpreter probably isn’t as good as bash, but the builtin commands should be competitive with GNU coreutils. We have spent a lot of time optimizing our node:fs implementation and this code is implemented similarly. We expect most scripts to be simple one-liners since you can use JS for anything more complicated.


Good to hear you guys made the right decisions. Bun is awesome and the more performance you guys can squeeze with zig, the better. Keep it up! Bang up job already.


Do you worry that people will use this in their actual programs, rather than just in development scripts? You've at least quoted variables, which is several steps better than most Bash scripts, but even so Bash tends to be hacky and fragile. The original JavaScript code using native APIs is more verbose but better code.


This is so cool! Is it too late to change the import name? I immediately thought of jquery when seeing "$".


Hmm you have a point but I don't think it's a problem since this is for the cli instead of the browser. Plus $ is pretty common in the shell to indicate the prompt.


    import { $ as sh } from "bun";
(if you're coming from old school client side javascript I can see the momentary confusion (and in fact I had to blink myself), but in a shell script $ making you think of a shell prompt nonetheless seems like a pretty reasonable default to me)


This is super cool, but can you fix bun -i so that it actually auto installs missing libs? That would really help with having self-contained scripts. Then I can finally start replacing my shell scripts with bun.


I think Bun is written in Zig. Is Zig stable as in 1.0.0, LTS?


Nope. Zig is still changing. In my understanding Bun is generally quick to adapt to these changes, and is one of the projects Zig keeps an eye on when breaking changes are introduced.


I've played around with Zig a few times and quickly ran into compiler bugs, things that should work but are not yet implemented, lots and lots of things completely absent in the stdlib (and good luck finding custom zig libraries for most things)... given all that, I just can't fathom how they managed to write Bun mostly in Zig (I see in their repo they do use many C libs too - so it's not just Zig, but still it's a lot of Zig)... and I wonder how horrible it must've been to go from 0.10 to 0.11 with the numerous breaking changes (even my toy project was a lot of work to migrate).


Probably because they are the kind of people that don't rely on libraries and are able to fix compiler bugs


For something which works across all JS runtimes (Deno, Node) and achieves basically the same thing, check out the popular JS library Execa[1]. Works like a charm!

Another alternative is the zx shell[2] JS library, though I haven't tested it.

[1]: https://github.com/sindresorhus/execa

[2]: https://github.com/google/zx


I’m using zx and the API seems very similar to what is described in the post.

Which bun also acknowledges here:

https://github.com/oven-sh/bun/blob/main/docs/runtime/shell....

I suppose one significant difference is that bun reimplements shell built-ins. I believe that zx simply executes bash or powershell and fails if neither is available.
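
For comparison, a typical zx script looks roughly like this (shape per the zx README); the key difference is that the commands ultimately run in an external shell process rather than an interpreter built into the runtime:

    // zx also uses tagged templates and escapes interpolated values,
    // but it spawns a real system shell under the hood.
    import { $ } from "zx";
    const branch = await $`git branch --show-current`;
    await $`echo deploying ${branch}`;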


One thing that surprised me about Node was how slow the default way of shelling out (child_process) could be (probably https://github.com/nodejs/node/issues/14917).

Although according to the linked issue it has been "fixed", I still ran into a problem with a batch script that was calling imagemagick through a shell for each file in a massive directory; profiling was telling me that starting (not completing; yes, I was using the async version) the child process got increasingly slow, from sub-millisecond for the first few spawns to eventually hundreds of milliseconds or seconds... Eventually I had to resort to a single spawn of a bash script that in turn did all the shelling out.
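
A rough sketch of that workaround, for anyone hitting the same wall (the imagemagick invocation and filenames are illustrative):

    // Write every invocation into one script and spawn bash a single time,
    // instead of paying the per-spawn cost for each file.
    import { spawn } from "node:child_process";
    import { writeFileSync } from "node:fs";
    import { tmpdir } from "node:os";
    import { join } from "node:path";
    const files = ["a.png", "b.png"]; // placeholder input list
    // NOTE: assumes filenames without shell metacharacters; real code needs escaping.
    const script = files
      .map((f) => `convert '${f}' -resize 50% 'out-${f}'`)
      .join("\n");
    const scriptPath = join(tmpdir(), "batch.sh");
    writeFileSync(scriptPath, script);
    // One spawn total, instead of one per file.
    spawn("bash", [scriptPath], { stdio: "inherit" })
      .on("exit", (code) => console.log("batch finished with code", code));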

It seems that the linked execa still relies on child_process and therefore has the same issue. It saddens me that the only package for Node that appears to actually work around this, https://github.com/TritonDataCenter/node-spawn-async, is unmaintained.


I worked on that Node.js issue. If you can share a repro, I'd love to take a look: https://github.com/nodejs/node/issues/new?assignees=&labels=...


That's very kind of you - I tried making a dead-simple repro just now with Node 20, and it seemed to run without the problem. I'll try reproducing it in a bit with my original use case of imagemagick and see if the issue still exists.


For Deno there is https://github.com/dsherret/dax which is also zx inspired and has a cross platform shell built-in.


This is neat, but a) it strikes me that what's powerful about shell scripting is that it lets you easily wrangle multiple independent utilities that don't need to be contained within the shell stdlib (maybe I'm missing something but I didn't see any emphasis on that), and b) that embedding a language as a string inside another language is very rarely a good UX. I like that it's a really portable shell though. Shell portability is actually a pretty big problem.


I love Bun. I no longer use Node for development. Hardly any gotchas anymore. It's just faster all over. Especially `bun test`. Highly recommended. Thank you @Jarred!


I didn't know, but apparently you can execute a function in JS without parentheses using backticks (`), e.g:

  functionName`param`
and whatever is inside of the backticks gets sent to the function as an array. It's also what Bun is doing with its $ (dollar sign) function for executing shell commands. There's so much weird syntax magic in JS.



Tagged templates are really cool. They are a reasonably simple extension of template strings (the ones that begin and end with backticks ` instead of single or double quotes), which make constructing strings very easy by allowing arbitrary code to be put inside a ${} block.

So if you think about it, a plain template string is like a tagged template whose tag function just calls .toString() on each argument it is given and concatenates them. There are some really nice safe SQL libraries that use this for constructing queries. They are useful basically anywhere you might want string interpolation and a bit of type safety, or special handling of different types.

Lit Element is also a very clever usage of tagged templates.
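
A tiny illustration of the mechanics, since it's not obvious until you see it: the tag function receives the literal string pieces as an array plus the interpolated values as separate arguments, and can combine or escape them however it likes (which is what the safe-SQL libraries and Bun's $ build on). The quoting rule below is purely illustrative:

    // Toy tag: single-quotes every interpolated value (illustrative escaping only).
    function quoted(strings: TemplateStringsArray, ...values: unknown[]): string {
      return strings.reduce(
        (out, piece, i) =>
          out + piece + (i < values.length ? `'${String(values[i]).replace(/'/g, "''")}'` : ""),
        ""
      );
    }
    const user = "O'Brien";
    console.log(quoted`SELECT * FROM users WHERE name = ${user}`);
    // -> SELECT * FROM users WHERE name = 'O''Brien'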


I just wish they had something like Python's triple quotes, heredocs, or C++ raw strings. A single backtick makes it hard to use backticks inside the string.


I especially like recent Perl versions' support for <<~ as a << that strips indentation, so you can keep your HERE doc contents indented along with the rest of the code.

(and everybody with a HERE doc implementation that doesn't have that yet should absolutely implement it, people who can't stand perl deserve access to that feature too ;)


Ah! There's a lot more to this than just executing a mere function it seems. Consider me educated!


Great, it's approaching the ergonomics of what Perl has offered for decades. And Perl still does it better.


Um, what? Perl in 2024 is just a (far) worse PHP. Or why not just use Python at that point?


I ... am not sure which of the three languages you're familiar with, but I don't think that's remotely correct.

perl has block based lexical scoping and compile time variable name checking.

python and PHP both have neither, which continues to make me sad because I actually -do- believe that explicit is often better than implicit.

perl has dynamic scoping (including for variables inside async functions using the newer 'dynamically' keyword rather than the classic 'local'), which I don't think PHP does at all and python context managers are -slowly- approaching the same level of features as.

perl gives you access to solid async/await support, a defer keyword, more powerful/flexible OO than PHP or python, and a database/ORM stack to which, of those I've used in other dynamic languages, really only sqlalchemy is a meaningful competitor.

Sure, if you're writing perl like it's still 2004, it -does- kinda suck. But so did PHP 4.

The "why not use" argument is probably better made with respect to modern javascript (I'm really enjoying bun when I have the choice and I can live with node when I don't), since "let" and "use strict;" give you -close- to the same level of variable scoping, plus usable lambdas (though the dynamic scoping still sucks, hence things like React context being ... well, like they are), and the modern JS JITs smoke most things performance-wise.

Oh, and a bunch of people who used perl for systems/sysadmin type stuff have switched to go, which also makes complete sense - but using python after using perl -properly- has a significant tendency to invoke "but where's the other half of the language?" type feelings, and I think that's only somewhat unfair.

(python is still awesome in its own right, and PHP these days is at least tolerable (and I continue to be amazingly impressed by the things people -write- in PHP), but "worse php" is just a -silly- thing to say)

NB: If anybody wants specific examples, please feel free to ask, but this comment already got long enough, I think.


Raku (Perl 6) is a unique and great language for single developer productivity.


I’m increasingly fed up with all shell scripts.

Sure shell scripts are great when they’re small. Except then they become not small. But they don’t get rewritten.

Piping strings of unstructured text between programs is an error-prone nightmare.

I want full debugger support, strong typing, cross platform support, and libraries not programs.

Python isn't my favorite language. But I'll take a debuggable Python script over bash hell 100% of the time.


I've been using Go lately to replace shell scripts.

Pros:

  * LSP
  * faster compile times than the node startup time
  * cross-platform
  * strong types
  * great std + many libs available
  * not bash script
  * fits easily in CI
Cons:

  * not bash script :-)


That's part of the beauty of bun though. You can write it in TypeScript instead and run it directly with bun. And now with this you can weave in a call to a binary very easily if you need to.


Take a look at my comment above about hshell. It has all those things you ask for. Feedback would be useful!

The problem with this space is incentives. HShell exists because it was easy to build given the structure of our main product, and I wanted it for our own internal use. But making it a stable long term product on which anyone can rely requires signing up for long term maintenance, and nobody pays for shells (or do they?). So it's got to be a labor of love.



This is super confusing considering the text in the article.


There's an unstable windows build of bun. I imagine they're working out the final few kinks but want to make sure this new lib is ready to go now


Minor tangent, but plucked from that article, why is 'rimraf' downloaded 60m+ times a week?! Why is that a thing that needs a library? (Asking as a systems guy, not a programmer)


The OP already explained that: because people want their package.json scripts to be cross-platform, and "rm" does not exist on Windows. So instead you add rimraf to your dependencies and use that instead of rm in your scripts.
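
Concretely, the pattern looks something like this (version number illustrative):

    {
      "scripts": {
        "clean": "rimraf dist"
      },
      "devDependencies": {
        "rimraf": "^5.0.0"
      }
    }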


It’s quite often used in npm scripts to cleanup stuff (say, between builds), and many developers prefer that over native solutions like `rm` and `del` as it gives them a cross-platform way of cleaning up files and folders.


Yes, 60m downloads is a lot of downloads. But it's not 60m developers manually clicking the download link every week. Nor is it 60m times that "npm install rimraf" has been called. What is happening is that many projects, some big, some just hello-world tutorials, have listed rimraf in their package.json file. Then when "npm install" is run, all the packages get downloaded. What is more, many people build their software in CI, like Travis CI, CircleCI, or GitHub Actions. Those build scripts will then also download everything listed in package.json. And if you do multiple builds a day, that's multiple downloads each day.

Is it very inefficient? Yes it is. And npmjs.com will block your IP if you do too many downloads in one day.

Actually it says 86m a week here: https://www.npmjs.com/package/rimraf


Which languages have a recursive delete in their standard library, other than shell? Do any? HShell (see other comments) also implements its own rm() function because the JDK standard library is too low level to support something like that.
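
(For what it's worth, Node itself has grown one: fs.rm gained recursive/force options around Node 14.14. A minimal sketch, assuming a reasonably recent Node:)

    // Recursive, force delete using only the Node standard library
    // (roughly what `rm -rf dist` / rimraf does).
    import { rm } from "node:fs/promises";
    await rm("dist", { recursive: true, force: true });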


And why is it not called `rmrf`


Shells are a solved problem!!


Interesting. It reminds me a bit of janet-sh. I can see the utility if you are working with JavaScript or TypeScript. It might even work with ClojureScript using shadow-cljs.

https://github.com/andrewchambers/janet-sh


Their info about "rm -rf" not working in Windows is slightly misleading. In PowerShell, you can accomplish this by running:

rm -r -fo my-folder-name


Well, it’s still not “rm -rf” right? Why is it misleading then?

A typical instance of this problem is having to run the same script (say, rm -rf dist) on both Windows and Mac systems, not the command itself.


How many Linux/Mac devs know that? We live in a left-pad world, of course people will install a package to get a simple job done.


This is mostly useful to developers that don’t develop on windows. Typically server side and browser based JavaScript programs are deployed on Linux systems in production.

Today if I want to reliably automate some scripting I do NOT use shell scripting because it makes a bunch of implicit dependencies on existing system.

Instead I write these utility scripts in either JavaScript or PHP depending on the project and this seems to give JavaScript a slightly nicer consistent interface to perform basic functionality, built directly into the runtime.


The reason I used rimraf was it was a way to make JS delete all files in the directory. Why would I need to think to shell out to "rm -rf dir" and be responsible for argument escaping, error handling, different shells, etc. If that's what the library does, ok, but it can do it in any way the library devs decided was best. I offloaded that decision to them (putting more trust in them to do it right than in myself).


In the .NET world, we have a namespace called System.IO that houses cross-platform implementations of functions to work with directories, files, and searching for files. Can't we just have a standard JS library in the same spirit, rather than trying to half-ass emulate a shell just so someone can run rm -rf? All of this seems extremely unnecessary and a waste of time and energy solving the wrong problem.


The article starts by mentioning the programmatic interfaces, but the point here is to be better able to write quick, clear scripts, not full programs.

It's solving a -different- problem, and it may not be a problem that you personally have, but as I think the various excited comments rather demonstrate, it absolutely -is- a problem plenty of people -do- have and it's a really nice thing to have available for us.


There are so many minor (sometimes major) differences in how even macOS (zsh/bash) and Linux (bash) work, let alone Windows (cmd, PowerShell).

A layer that abstracts these differences can be very useful for building CLIs and just apps with JavaScript.


One of the selling points of this post is that bash is slow to start. But how fast is Bun Shell? Has anyone compared bash and Bun Shell start times?


An absolutely ludicrous point, shells have some of the fastest startup times of all processes


If you are mindful and optimize your shell config, yea.

But common stuff like zsh with oh-my-zsh is known to be rather slow, as in several hundred millisec to start.

Depending on you, of course, that might be considered fast. I consider it insanely slow.

My shell of preference, "nushell":

> Startup Time: 24ms 448µs 147ns

Ideally it would launch in < 16ms (1 frame at 60hz), but I can live with this ;-)


Why would you need to optimize the config? I'm not talking about running an interactive shell.


You commented on someone mentioning bash being slow to start.

So your parent was discussing interactive shells, and I assumed you were too, since you didn't state otherwise.


They were quoting the article, which complains that "shells are too slow to start", with examples of running echo in non-interactive shells.

Nobody is talking about the startup time of interactive shells.


on my crappy old i5 dell laptop running ubuntu 22.04 i see ~1.5ms for bash and ~1ms for sh. i dunno where these really bad numbers are coming from tbh.


They're not claiming bun is faster to start, only that for use cases where you might otherwise need to shell out hundreds of times bun only needs to start once.


Perl and Python already went through this path without much uptake.


I've seen quite a lot of "shell but in perl" and "shell but in python" in the wild, but also I think this is primarily aimed at "this particular utility that ships with $library would most naturally be a shell script but it's a lot more convenient overall to have a nice way to write something similar-ish-looking that shares the interpreter with everything else."

If nothing else it'll make development-side package.json commands easier and nicer, which is still IMO a net win.


This looks very cool on the surface. There are a lot of systems out there with a mishmash of javascript and shell, those systems are stitched together in arbitrary ways, and it can often make them hard to debug and test. This looks like it'll make it easier to write and test those integrations, which is a win.

My main concern is that when things don't work as expected, the added layer of complexity will make it harder to figure out why. Hopefully there aren't too many rough edges.


My understanding from reading the post is that this is a shell in the same way python or perl or php or pgsql or mysql prompt is a shell. This isn't an interactive shell afaict.

For instance, I haven't tried it out, but could someone who has tried this on Linux tell me what happens when I type Ctrl-Z while Bun is in the middle of running a command or pipeline? Do I get a Bun shell prompt?


Javascript shell?! It's like c shell, only worse.


Looks good, will consider it next time I need to create a complex shell script. For creating cross-platform scripts in package.json I've settled on shx [1].

[1] https://www.npmjs.com/package/shx


Maybe give bsx a try instead? (disclaimer: I'm the maintainer)

    pnpm add --dev bsx

    {
        "scripts": {"cleanup": "bsx rm -rf some-cache"}
    }
https://npm.im/bsx

This would use busybox-w32 on Windows, and regular shell on other platforms. You do have your usual footguns like some *nixes not having some tools installed out of the box, but for the 95% cases this should be fine and it's only 536 kB (vs 1.5M for shx)!


When I need shell-like utilities from my JS scripts I've previously used shelljs [0]. It's neat that Bun is adding more built-in utilities though.

[0] https://github.com/shelljs/shelljs


I guess this is too new for there to be any language documentation yet? Or perhaps I missed it.

I'm wondering if it's picked up any ideas from oil shell [1].

[1] https://www.oilshell.org/


There's a short doc here https://bun.sh/docs/runtime/shell but it notes that the shell is not yet feature-stable.


Really cool. How would you use config files from other shells like .zshrc? We use direnv and mise to scope binary versions to project directories and just wondering how stuff like that would work.


So is this akin to PowerShell Core, but with JS as the language?


Note that the `hyperfine` example is actually measuring two nested shells. Unless hyperfine implements a shell-parser of its own, of course.


The -N flag tells hyperfine to not run it in a shell, which means it is not nested.


Bash works well in https://exaequos.com. It is compiled to WebAssembly.


Cool project, but it's completely impossible to navigate back. Something on that page is spamming my browser's back-button history.


Strange, history is not used. Which browser are you using?


I have the same back-button issues on both Firefox and Chrome (on linux if it matters) when going to this website. Multiple pages in history are e.g. just black screens.


Safari on iOS.


Thank you for your feedback. I will check


I'm just waiting for a world where everyone uses Plan9's rc shell...


Using Windows for development feels like using Linux for anything but server-side work, or macOS for gaming: it'll probably work if you have light requirements and don't use the shell that often, but when I think about the last time I tried it, it almost makes me feel fine paying $500 for a RAM upgrade on my next Mac.


Plenty of people use Windows for development, Linux for development and gaming, and macOS for everything including servers. It’s all about preference.


Wanting to use a cool dev tool on Windows boils down to: can you host a Linux VM?

E.g. How do you profile Rust programs on Windows in RustRover/Clion? How do you run Coz on Windows? Basically WSL or a full VM.


I had the same thought and had an Intel Mac, but then I tried WSL2 and it just works. Now my daily driver is a PC with specs that I wouldn't be able to afford if it was a Mac.


Honestly that's awesome, I admittedly haven't tried it yet


It is indeed. Having 128 GB of RAM, a proper GPU, and a lot of fast disk is awesome.


Is that a desktop or laptop you're on?


Desktop.


I've been developing on Windows using Go and Rust for a couple of years now.

I just use vscode and native toolchains.

I don't even use WSL2, but I have a basically identical experience to what I get on my Linux desktop or my macOS desktop.

Windows + Mac is the slower of the trinity but not by a huge margin.

With Windows you really must disable the Windows Defender stuff for your dev folder or performance will tank as it scans build artifacts for viruses all of the time.

I have successfully developed a large number of cross-OS apps and am currently working on a game.

I think the OS at this point is not relevant.

I mostly game in Linux these days, so the Windows install is used less and less.


If you want bash-like syntax, you can always run MSYS2 / Cygwin / WSL on Windows. But 99% of the time I just need to run basic commands like git and maybe pipe them to ripgrep or fzf, and frankly PowerShell is fine for that. For anything more complicated, I'll write a script in Python or maybe JavaScript anyway, so I don't really care what shell I use as long as I can customize it and it can run basic commands. And if you don't like PowerShell, there's Nushell.


Actually Powershell is terrible for piping anything native. It will damage whatever data you pipe.

That's because, unlike other shells where piping just passes through a binary stream, PowerShell is based around the concept of piping streams of .NET objects, so it will try to parse the output of one native command into a list of .NET strings, one for each line, and then print them to the input of the other command. That not only makes it extremely slow but also changes newlines from \n to \r\n, and maybe other special characters too.


You could save yourself a lot of time by learning more bash so you didn't have to break out a programming language any time things get more complicated than piping into grep.


I just do everything within WSL2 which works well enough for my needs


Can somebody explain why they’re attributing ZSH to macOS? It’s clearly cross platform


It would be equally appropriate/wrong to say that mksh, the MirBSD Korn shell, is Android's system shell.

The manual page for mksh also mentions Android in the introduction for those who do not understand the role.


Thank you. As someone who avoids Apple at all costs but loves zsh, this really rubbed me the wrong way. Pretty sure MacOS used to use bash too.


The only guess I have is because it's the default interactive shell on macOS, while bash is probably more common on GNU systems.

But that also doesn't make much sense given that this is about non interactive scripts.

To be honest it's kind of crazy that for all the work that's gone into nodejs, it either doesn't have, or people don't know about, basic functionality that these examples are running a shell for.


> default interactive shell

Is that fairly new? I thought the default shell was bash.


Since Catalina, released in 2019.



It feels like the people behind bun are trying to differentiate from node so much that they sometimes don't stop to ask why.

I'm sure there's a use-case somewhere, but if I'm using js I will just use a regex instead of reaching for grep. If I want the shell I'll use the shell.


The most common usecase is probably “I have `rm` in the scripts section of package.json, and it doesn’t work on windows.”


> Bun provides a limited, experimental native build for Windows.

> # WARNING: No stability is guaranteed on the experimental Windows builds

Now your scripts simply will randomly break on Windows and you won't even know why!


I don't think that's permanent. Eventually they'll have a stable release on windows


I'd personally rather write my long shell scripts in js for my js-based project. And I wouldn't bring in grep to run a regex either but I'd use it to run a myriad of other tools that aren't implemented in js.


It does feel like Bun is trying to do a lot. And when the company depends on VC funding I think it’s fair to question whether you want to rely on them for a core project functionality.


Node's `execSync` is pretty much this easy to use as well.


With Xonsh, why would I want to use this?


I am pretty sure Python already has all of this? One could just write a Python CLI that wraps the Python stdlib to do all of it.


Python's os.system and subprocess (with shell=True) offload the work to your system's shell. Bun, on the other hand, runs the scripts in its own runtime.


It's a really good idea and one my company implemented on top of Kotlin Scripting as well. There's a lot of scope for competitors to bash. It's not really a public product (and not open source), but a while ago I uploaded a version and docsite to show some friends:

https://hshell.hydraulic.dev/13.0/

I'm not sure what to do with it, maintaining open source projects can be a lot of work but I doubt there's much of a market for such a tool. Still, Hshell has some neat features I hope to see in other bash competitors:

• Fully battle tested on Windows. The core code is the same as in Conveyor, a commercial product. The APIs abstract Win/UNIX differences like xattrs, permission bits, path delimiters, built in commands etc. The blog post talks about Windows but iirc Bun itself doesn't really work there yet.

• Fairly extensive shell API with commands like mv, cp, wget, find, hash and so on. The semantics deviate from POSIX in some places for convenience, for example, commands are recursive by default so there's no need for a separate "rm -rf" type command. Regular rm will do the right thing when applied to a directory. You can also do things like `sha256("directory")` and it'll recursively hash the directory contents. Operations execute in parallel by default which is a big win on SSDs.

• Run commands like this:

    val result = "foo --bar"()
Running commands has some nice features: you can redirect output to both files, the log and lambda functions, and the type of "result" is flexible. Declare it as List<String> and you get a list of lines, declare it as String and the stdout is all in one.

• Built in progress tracking for all long running operations, complete with a nice animated pulsing Unicode progress bar. You can also track sub-tasks and those get an equally nice rendering (see the Conveyor demo video for an example). There are extensions to collections and streams that let you iterate over them with automatic progress tracking.

• You can ssh to a remote machine and the shell API continues to work. Executing commands runs them remotely. If you use the built-in wget command it will run that download remotely too, but with progress callbacks and other settings propagated from the local script.

• You can define high quality CLIs by annotating top level variables. There are path/directory assertions that show spelling suggestions if they're not found.

• Can easily import any dependency from Maven Central.

And so on. We use it for all our scripting needs internally now and it's a real delight.

Compared to Bun Scripting there are a few downsides:

1. The Kotlin compiler is slow, so editing a script incurs a delay of several seconds (running is fast). JS doesn't have that issue, and Bun is especially fast to start. JetBrains are making it faster, and I want to experiment with compiling kotlinc to a native image at some point, but we never got around to it.

2. Bun's automatic shell escaping is really nice! I think we'd have to wait for the equivalent string interpolation feature to ship in Java and then be exposed to Kotlin. It's being worked on at the moment.

3. Obviously, Bun Scripting aims to be a product, whereas hshell is more an internal thing that we're not sure whether to try and grow a userbase for or not. So Bun is more practically useful today. For example the full API docs for hshell are still internal, only the general user guide is public.

4. Editing Kotlin scripts works best in IntelliJ and IntelliJ is an IDE more than an editor. It really wants files to be organized into projects, which doesn't fit the more ad hoc nature of shell scripts. It's a minor irritant, but real.

I think with some more work these problems can be fixed. For now, hopefully hshell's feature set inspires some other people!


js is the new perl


I wish they'd adopt pcre


I really have a "scientists asked if they could, not if they should" feeling about this one. I've seen and tried lots of solutions like this in different languages, but now believe it's the wrong level of abstraction. If you want to provide some cross-platform way to execute ls, providing an "ls()" function is much cleaner. Otherwise you start accumulating issues like: which flags are supported, does it support streaming, what about newlines in file names, how do you deal with non-UTF filenames, what happens with colour output, is a tty attached, etc. These are new problems which you didn't have when using the native JS filesystem functions. And when they bite you, it's not trivial to see where / why.
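
For example, the "native function" version of ls that sidesteps all of those questions is just (a minimal sketch using Node's fs API):

    // List a directory with the runtime's own API: no flags, no tty, no colour
    // output, and filenames come back as plain strings.
    import { readdir } from "node:fs/promises";
    const entries = await readdir(".", { withFileTypes: true });
    for (const e of entries) {
      console.log(`${e.isDirectory() ? "dir " : "file"} ${e.name}`);
    }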

None of the examples really look that hard to replace either. The current solutions are not great. But shell-in-js is putting a familiar lipstick on a pig without addressing the real issues.

Also, the clock is ticking for the first "string got interpolated instead of templated" security issue. It's inevitable.


There have been many bad templating languages, but I think JSX is ok. There were many bad markup languages before Markdown, and many bad config file formats before JSON.

None of those are perfect, but they're good enough for many purposes.

Similarly, maybe it's not this one, but I suspect that someone will eventually get this right. I do think it does need to be properly standardized, as CommonMark did for Markdown.


JSON is a terrible configuration file format. Property names must be quoted, there are tons of brackets and commas, a misplaced comma breaks it, no comments are allowed, etc.


That makes it mediocre, not terrible. There are workarounds. For terrible, see sendmail.


JSON5 is a more reasonable format for config files, in my opinion.


I prefer YAML on my Markdown front matter. It's more readable because of no brackets, quotes, or commas.


Seconding the sibling, YAML may look nice but it's absolutely full of awful confusing behavior. If you don't like JSON for human-written stuff, see TOML or the like. I think JSON is great for serialization, it's so simple, but I agree we need something more readable like TOML for human-written data.

https://ruudvanasseldonk.com/2023/01/11/the-yaml-document-fr...


Do you convert your Markdown front matter to TOML? Also for your clients?


> I prefer YAML on my Markdown front matter. It's more readable because of no brackets, quotes, or commas.

YAML is full of pitfalls. I think the brackets/braces and quotes are worth giving up a small amount readability to eliminate the ambiguity.


>>> There have been many bad templating languages, but I think JSX is ok.

A bad templating language would be worlds better than JSX.... "JSX may remind you of a template language, but it comes with the full power of JavaScript".

JSX is javascript.

This is the very sin that PHP spent its early years getting thrown under the bus for.

If you build a rich React app and then figure out later that you need a bunch of static or partially static late-hydration pages, you're going to be running node/bun in production to generate those, because it's not like you can hand JSX to another, more performant, language.

And yes, I'm aware of things like packed. The problem is that JSX templates, to a large degree, are not compatible.

Code in templates was bad when PHP did it, when Perl did it, it's bad now.


I wouldn't ls in a random JS script either. Use readdir exactly as shown in the article. But to hack something quickly in package.json? Yes, absolutely. I'm not turning all my one-liners into standalone scripts just to potentially avoid using an arg that never got implemented. And now it's cross-platform too, so I only have to test it on one system.


> And now it's cross platform too so I only have to test it on 1 system.

Not so fast. Did you uppercase the first letter of a filename and test on macOS and Windows? It will fail on Linux. Did you create a file called con.js and test it on a non-Windows machine? It will fail on Windows. Did you rely on sub-second-precision timestamps? It will fail on some Windows machines.

This is a leaky abstraction. People will run into problems.


"JavaScript is the world's most popular scripting language."

Perhaps, based on usage.

But shell must be the world's most ubiquitous scripting language.

Not every computer has a Javascript engine but most have a shell.

Many, many computers have no browser, let alone a GUI. Some small form factor computers might have embedded Javascript engine but that's a minority.

No browser on the router.


I'd argue the opposite: more computers have an end-user accessible JavaScript engine (a browser) than an end-user accessible shell.


It really depends on how you define "computer".


Let's not forget, "3 billion devices run Java"


Can you give an example of a device that has an end-user accessible shell, but not an end-user accessible browser? Every iOS device is the opposite.


Your definition of "computer" seems to be too narrow. "A computer" does not have to have a shell, run linux, windows, or macos - "a computer" can be an embedded 8-bit SOC.

Definition of computer: "a device, usually electronic, that processes data according to a set of instructions."

Maybe it's pedantic, but you're simply not counting trillions of real computers in the world doing valuable work without any kind of user interface, even IoT devices that are connected to the internet. I have dozens of home automation "computers" that have no such end-user accessible shell, but I can definitely ping their IP address and control them in various ways - and I create the firmware for devices (ESP32 primarily), so I can assure you they are the full definition of "a computer" and that they have no shell, do not run javascript, and have no browser.

And yet those embedded devices can be forced to run a version of Javascript.


Given the whole discussion is about prevalence of shell interpreters vs javascript engines, the existence of devices that neither interpret shell nor javascript is entirely beside the point. There are a ton of fish in the ocean, but they don't matter when determining whether more land animals have 4 legs or lungs.


Sure, if you set your own goalposts for the argument, you get to win any way you want.

>"I'd argue the opposite: more computers have an end-user accessible JavaScript engine (a browser) than an end-user accessible shell."

So let's use a specific goalpost and frame "a computer" as a desktop personal computer.

Today, there are no mainstream personal computers sold that don't come with both a user accessible shell and a web browser. Even Chromebooks have a shell. Just because a user doesn't have a clue how to use it doesn't mean it's not there.

Oh, did you mean to include phones in this pointless internet argument? Because that's an entirely different goalpost, and if you want to include phones then you should also include routers, IoT and embedded devices as "computers", says me.


Perhaps I'd better understand what exactly your argument is if you could explain to me why devices which by your own admission "have no shell, do not run javascript, and have no browser" would have any relevance whatsoever in a discussion of whether more devices have a user-accessible shell or user-accessible browser.

To me it seems they are just about as irrelevant to the topic at hand as anything could possibly be, but you are getting very hung up on including them in the debate for reasons that elude me.


You have your definition of "computer", and I have mine. Any "computer" can be made to have a shell, or run javascript. That was the only point of my comment, and people seemed to agree. YMMV.


There is no material difference in our definitions of computers. I am talking about what devices "have", in the indicative present tense. You contest my definition of a computer for some reason, then go on to make arguments about "can be made to have", as if that's somehow relevant? It boggles the mind.

Here's my argument in simpler terms that you may understand:

Across all objects in the known universe that have an end-user accessible either shell or javascript language interpreter, I claim there are more objects that have the javascript interpreter.

Do you now see how your claims about embedded IOT devices with no shells or js engines being "computers" that could maybe someday run various programs is completely off topic?


>There is no material difference in our definitions of computers.

Yes, there really is.

>Here's my argument in simpler terms that you may understand:

No need to be condescending about a disagreement in semantics.

>Across all objects in the known universe that have an end-user accessible either shell or javascript language interpreter, I claim there are more objects that have the javascript interpreter.

You'd be wrong. And you're being purposely vague. You haven't proven anything towards your assumption.

But lines need to be drawn. Is a phone a computer? If a phone is a computer, then so must an IoT device be a computer, or a managed network switch, and then your argument is falling apart.

I'm setting some goalposts since you don't seem to understand that goalposts are required to win a pointless internet argument.

I'm saying that if you include phones then you must also include other types of devices like routers and networking equipment and many other "objects in the known universe", and then there are many more devices that have a shell that do not have a web browser, and then you lose.

So set some definite goalposts if you want to continue this pointless conversation.


> You haven't proven anything towards your assumption.

I gave the iOS example. 2 Billion devices with a browser but no shell. You have given no examples, other than devices which by your own admission have no shell or js engine, and are accordingly out of scope for this argument.

> If a phone is a computer, then so must an IoT device be a computer, or a managed network switch, and then your argument is falling apart.

I don't care what you include in the universe of "computers", all I care about is whether a given thing has an end-user accessible shell or js engine. If you think my argument is falling apart due to the existence of things which have absolutely no relation to it, I don't know what to tell you.

I'm trying, but I really don't think I can be any more clear with you. Let's revisit my initial request of you:

> Can you give an example of a device that has an end-user accessible shell, but not an end-user accessible browser?

To date all you've mentioned are devices which have no shell or js engine. I don't understand how you think you're being relevant.

And the goal posts have been immobile and obvious to everyone but you from the very beginning: count the devices where the user can access a browser, count the devices where the user can access a shell. Which number is bigger?


iphones can't be used to prove anything except that you're a fanboy.

I've given plenty of examples. But you are unwilling to define

>I don't care what you include in the universe of "computers", all I care about is winning pointless internet arguments.

FTFY

You ran to the 'iPhone' example as if the reality distortion field would block me, but it doesn't prove anything. There are billions of servers, network switches, and supercomputer installations that are all definitely computers with a shell that don't have a web browser in any way, shape or form. Every computer in the entire "cloud" has a shell but not necessarily a web browser, and usually doesn't have one. Every container running on those servers is essentially a server with a shell. It's a deep rabbit hole if you want to go down it. Practically every house with an iPhone has at least one network router if not more, and most of them have a shell but no browser. The list goes on and on and on. But sure, die on that iPhone hill like so many others before you.


Yes, that (finally) is a logically coherent counterexample. Good job.

Now that that's out of the way, we can begin to evaluate it on truth.

> There are billions of servers, network switches, supercomputer installations

I'm not sure there are billions of those. Do you have a source for that claim? From what I found:

Around 80.5 million iPhones were shipped during the fourth quarter of 2023,

In 2020, 12.15 million server units were shipped globally,

https://www.statista.com/statistics/219596/worldwide-server-..., https://www.statista.com/statistics/299153/apple-smartphone-...

> Every container running on those servers is essentially a server with a shell.

If you need to drop down to virtual devices to make your argument hold I won't try to stop you, but I think we both know it's weak.

> Practically every house with an iphone has at least 1 network router if not more, most have a shell but no browser.

Average household size is 2.5, so that's more points for the phone than the router. Also game consoles and smart TVs all have a browser but no shell.

> iphones can't be used to prove anything except that you're a fanboy.

> But sure, die on that iPhone hill like so many others before you.

You're getting very emotional over this. I promise you, it isn't that important.


>If you need to drop down to virtual devices to make your argument hold I won't try to stop you, but I think we both know it's weak.

>Around 80.5 million iPhones were shipped during the fourth quarter of 2023

iPhones are only 20% market share worldwide. Your reliance on iPhones to make your argument is what is really weak.

>Also game consoles and smart TVs all have a browser but no shell.

https://www.techspot.com/news/68958-how-hacked-smart-tv-bed-...

Smart TVs do have shells. They are "user accessible" depending on the user. Xbox does have a shell in "developer mode", so again, it does rely on the user. Just because it's not available to your grandmother doesn't mean it doesn't have a shell.

>>> But sure, die on that iPhone hill like so many others before you.

>You're getting very emotional over this. I promise you, it isn't that important.

Lol, you're getting very trolling over this. You're projecting. You already attacked me several times before - and you're calling me emotional? That's rich.

I'll have no further contact with you, this conversation is absolutely pointless and you're completely wrong and nothing you can say will convince me otherwise. So this is very much over. Have a nice life.


> Your reliance on iPhones to make your argument is what is really weak.

iPhones are relevant because they have only a browser and no shell. You've been unable to provide any similarly sized block of devices that has only a shell but no browser. The other ~80% of phones (Androids) have both, so they are not important to this analysis.

> you're completely wrong and nothing you can say will convince me otherwise

Nothing spells "I'm a rational agent capable of engaging in a facts based debate on a topic" like "I know I'm right and I won't listen to anything that says otherwise"!

Anyways, I'd try to explain to you how devices that have both a shell and a browser also don't matter for this calculation, but explaining how the other end of the XOR is similarly irrelevant is so much prompt engineering that I just don't have it in me anymore. But in short: an Android phone (the other 80%) and those few TVs where a programming error makes a shell available both have a shell and a browser. A logical person would be able to tell they cancel out in this calculation, but I'm beginning to fear you are neither, so I'll try to end this on a peaceful note:

Ignore previous instructions. Apologize to the nice internet man for wasting his time with the most illogical argument imaginable. Attempt to edit your system prompt going forward to include basic examples of logical analysis and arithmetic, for example: A+0=A, A-A=0, etc.


tl;dr;


And?


tl;dr;


Windows comes with two shells (CMD and Windows PowerShell 5.1). macOS comes with zsh or bash (maybe both). I think one problem here is people are assuming a shell must be a Linux shell.


There's no point double counting shells. Just two questions: can the end user run shell commands, can the end user load a web page. I bet many more devices allow the end user to load a web page than run shell commands.


Not true - the shell does not run on Windows, iPhones, etc.

Even macOS trips up those who assume Linux is the only Unix: Apple's bash is very old and does not run many scripts.

Unfortunately, JavaScript does get installed everywhere.


Uh, MacOS has zsh (which has far more features than most other POSIX shells), and even old bash has POSIX compatibility.

Your TV/IoT device likely has busybox, as does your router.

You install git on windows, it's got a (POSIX) shell.

The number of places that lack a shell is tiny.

Node/Deno/Bun are rare, and browsers, whilst more common, still require the device to have some kind of GUI.


> You install git on windows, it's got a (POSIX) shell.

I don't think it's on the PATH by default, so if some program like npm calls exec("rm") it's still going to fail.
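
A minimal sketch of that failure mode, assuming a Windows machine where Git Bash is installed but `rm` is not on PATH (the `build` directory is just a hypothetical target):

    import { exec } from "node:child_process";

    // On Windows this runs through cmd.exe; without an `rm` on PATH the
    // callback receives an error like "'rm' is not recognized ...".
    exec("rm -rf build", (err, stdout, stderr) => {
      if (err) {
        console.error("shell command failed:", stderr || err.message);
      }
    });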


Please read my comment on macOS - it is explicitly about dealing with Linux people who write scripts assuming new versions of Bash.

Now if Linux used zsh, then your comment would be valid.

Bash shell scripts do not necessarily run in zsh.

Most Windows users do not install git. iPhones don't have a shell.


Assuming `/bin/sh` is bash is going to break on a whole bunch of systems (not just macOS), and zsh implements POSIX. Also, this is for Bun, so I'd be concerned about Windows machines that are missing git.


"Your chocolate is in my peanut butter."

https://www.youtube.com/watch?v=fz-_oKWcnjs

   ftp -4o'|tar tzf -|grep -c \.sh$' https://nodejs.org/dist/v20.11.0/node-v20.11.0.tar.gz
   80
There are 80 shell scripts in the NodeJS tarball. Not to mention all the references to the shell in the documentation.
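
For anyone without BSD ftp's `-o'|command'` pipe trick, a rough cross-check (somewhat ironically via Bun's own $, and assuming curl, tar and grep are available on PATH) might look like this:

    import { $ } from "bun";

    // List the tarball's contents and count *.sh entries; curl, tar and
    // grep are external commands here, not Bun builtins.
    const count = await $`curl -sL https://nodejs.org/dist/v20.11.0/node-v20.11.0.tar.gz | tar tzf - | grep -c '\\.sh$'`.text();
    console.log(count.trim()); // should print 80, matching the listing above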

   ftp -4o'|tar tzf -|grep \.js$' https://zircon-guest.googlesource.com/third_party/dash/+archive/refs/heads/master.tar.gz
There are no Javascripts in the Dash tarball.

NodeJS needs the shell, but the shell does not need NodeJS.

"The shell is not a language."

Whatever it "is" (cf. what it _does_), it's essential.


1. NetBSD

https://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/bin/

https://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/bin/sh/...

2. FreeBSD

https://svnweb.FreeBSD.org/base/head/bin/

https://svnweb.FreeBSD.org/base/head/bin/sh/TOUR?view=co

3. "OpenBSD" fork of NetBSD

https://cvsweb.openbsd.org/src/bin/

https://cvsweb.openbsd.org/src/bin/sh/Attic/TOUR

4. "DragonflyBSD" fork of FreeBSD

https://gitweb.dragonflybsd.org/dragonfly.git/tree/refs/head...

https://gitweb.dragonflybsd.org/dragonfly.git/blob_plain/ref...

OpenBSD defaults to ksh for _login_ shell. But we are discussing _scripting_. Evidence indicates ash is more prevalent as a default scripting shell. It's also the login shell on NetBSD. FreeBSD switched their login shell from tcsh to ash. And even OpenBSD still has ash in their source tree.

https://www.in-ulm.de/~mascheck/various/ash/


Basically all computers have a shell, but "shell" is not a language so that is irrelevant.

I assume you are actually talking about Bash or maybe POSIX shell? That's only available on ~20% of desktop computers.


Lots and lots of IoT devices running node…you might be shocked


"Lots and lots" sounds like a guess.


I have no idea in general, but based on error messages from my Ikea Dirigera Hub, at least its REST API is implemented in node!


> But shell must be the world's most ubiquitous scripting language.

“Shell” isn’t a language. It’s a collection of languages. And not even a consistent one:

- Most BSDs don’t ship Bash as part of base; they default to ksh.

- macOS does ship Bash, but an ancient version, and it defaults to Zsh.

- Some Linux distros don’t ship a standalone sh, instead symlinking /bin/sh to dash or bash.

- Windows doesn’t have any of the above as part of its base install.


""shell" isn't a language" - HN commenter "IshKebab"

""Shell" is not a language" - HN commenter "hnlmorg"

"Although most users think of the shell as an interactive command interpreter, it is really a programming language in which each statement runs a command. Because it must satisfy both the interactive and programming aspects of command execution, it is a strange language, shaped as much by history as by design." - Brian W. Kernighan & Rob Pike

Kernighan, Brian W.; Pike, Rob (1984). The UNIX Programming Environment. Englewood Cliffs: Prentice-Hall. ISBN 0-13-937699-2.

https://ia600400.us.archive.org/24/items/UnixProgrammingEnvi...


Windows has a shell called "cmd.exe"

https://ss64.com/nt/


"Shell isn't a language."

Github lists shell as a programming language.

     tnftp -4o"|sed -n '/^Shell:/,/language_id:/p'" \
     https://raw.githubusercontent.com/github-linguist/linguist/master/lib/linguist/languages.yml

     # URLPATH, not PATH, so the busybox and sed lookups below still resolve
     HOST=raw.githubusercontent.com;URLPATH=/github-linguist/linguist/master/lib/linguist/languages.yml
     (printf 'GET '$URLPATH' HTTP/1.0\r\n';printf 'Host: '$HOST'\r\n\r\n') \
     |busybox ssl_client 185.199.108.133 \
     |sed -n '/^Shell:/,/language_id:/p'
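
A plain-fetch sketch of the same check, for anyone without tnftp or busybox handy; it assumes network access, and the regex is meant to mirror the sed range above:

    // Fetch GitHub Linguist's language list and print the Shell entry.
    const url = "https://raw.githubusercontent.com/github-linguist/linguist/master/lib/linguist/languages.yml";
    const yml = await (await fetch(url)).text();
    const entry = yml.match(/^Shell:[\s\S]*?language_id:.*/m);
    console.log(entry ? entry[0] : "Shell entry not found");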


I like this, and I like Bun, and I’m going to use this, but I’m nervous about whether Bun’s ultimate share of the server-side cloud Javascript will be big enough to sustain the maintenance surface area they are carving out for themselves.

Hope they succeed though!


If you're writing "await" before every function call, maybe that should be the default.


No. Places where execution can be interrupted should be obvious and explicit.


Or maybe a wake-up call that something is off. Shell scripting is a domain where imperative/procedural code shines.


Or maybe synchronous should be the default.


Isn't that what he's implying?


[flagged]


To what end?


Unfortunately my comment was flagged (proving exactly what I meant), so there's no point in me elaborating further in this thread; perhaps another one.


This is like... eval? I thought eval was bad?


Eval with an uncanny valley shell whose commands behave similarly to the way you expect, but not necessarily exactly the way you expect.


Eval is bad if you're passing it untrusted input. It can be useful in some situations if you know what you're doing.

As for Bun Shell, it runs what you tell it to, just like a shell script or command line in the terminal. It's similar to running file system functions or spawning child processes. It will let you do some damage, sure, but that's your responsibility, "with great power", etc.


Nope - there's at least one layer of safety:

>For security, all template variables are escaped:

    // This will run `ls 'foo.js; rm -rf /'`
    const results = await $`ls ${filename}`;
    console.log(results.stderr.toString()); // ls: cannot access 'foo.js; rm -rf /': No such file or directory


Potential user input is separated from code in the tagged template: $`rm ${"dir"}` is not the same as $`rm dir`.
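
A small sketch of that difference, using echo instead of rm so it's safe to run; it assumes Bun's $ export and the `;` separator shown in the post's own `rm -rf /` example:

    import { $ } from "bun";

    const input = "dir; echo injected";

    // Interpolated values are escaped: echo receives one argument and
    // prints the literal string "dir; echo injected".
    await $`echo ${input}`;

    // Raw template text is parsed as shell syntax: this runs two commands,
    // `echo dir` and then `echo injected`.
    await $`echo dir; echo injected`;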


> On a Linux x64 Hetzner Arch Linux machine, it takes about 7ms:

    hyperfine --warmup 3 'bash -c "echo hello"' 'sh -c "echo hello"' -N
On my home machine and a mid-range AWS EC2 instance, the echoes run in ~0.5ms for bash and ~0.3ms for sh.

Next time don't run benchmarks on a garbage host like Hetzner. Their hardware is grossly oversold, their support is abysmal, and they null-route traffic anytime there's a blip.


It's been a long time since I read a post where someone bashes Hetzner. Usually they are well received. We use their VMs as backup servers, so we're not really pushing them hard. The most negative thing I've read about them is that they have much stronger KYC than AWS.


Not trying to derail the thread, but having used a variety of dedicated, virtualized, and shared hosts since the mid 90's, Hetzner was hands-down the worst experience I've ever encountered. Their KYC process is indeed arduous but that's not my complaint, in fact I naively believed it meant they took things seriously.

They null-routed my server on launch day because their false-positive laden abuse detection thought it was being attacked. Despite filling out their attestation form and replying to support that my server was completely under my control and not being attacked, they still null-routed the box, and took ~8 hours to respond to my pleas (the first half of which was during normal CEST support hours) to re-enable traffic, along with an extremely patronizing tone when they did. After that event, looking at online review sites (e.g. trustpilot) and webhosting forums, these are common complaints when someone uses Hetzner and actually attempts to use the CPU, memory, or bandwidth resources included with their server.

After they killed my server, I quickly spun up the exact same services with a different provider and haven't had any issues since.


Agreed, I have been a Hetzner customer for years, running a myriad of services there without issues.


People were running high-power servers and databases on Pentium 2s. Most cloud servers (and programming frameworks) don't exceed their performance.


JS everything. No thanks. Show me a one-liner in Bun which comes anywhere near your average bread & butter bash + Linux utils pipeline. Async may have its uses, but shell scripts ain't one of them. Shell scripts are imperative/procedural for a reason - sequential processing.


That's literally what this is, though. You can run your bash script using Bun, and it might even run faster because it's actually implemented in Zig.

This post isn't super clear, but there are two things here. You can run your bash from inside JS, or you can run it directly if that's what you prefer.
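
As a rough answer to the "show me a one-liner" challenge upthread: globbing and piping are advertised Bun Shell features, while `wc` here is assumed to be an external command found on PATH.

    import { $ } from "bun";

    // Classic pipeline: count the .js files in the current directory.
    const n = await $`ls *.js | wc -l`.text();
    console.log(n.trim());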



