
I sometimes wonder why we don't see ruby used for shell stuff more often. It inherited most of the good stuff for shell scripting from Perl, and Perl took a lot of its syntax from sh and sed and awk, so almost anything you can do in shell script you can do in ruby, but with the option of making it gradually less terse and more readable, while having sane variables and data handling from the start.

Also ruby is great in allowing complexity to grow smoothly, no sudden hiccups. You start with just one line (everything goes into module main implicitly), extend it to a single-file script, require some built-in libraries, then add a module or helper class in the same file, and only then maybe extract those files to required files, add gems, whatever. No boilerplate whatsoever, no jumps, no big rewrites.

meanwhile, a lot of tooling nowadays is written in Go, and I have no idea why, it's not friendly for os manipulation at all, and number crunching power is not needed in many, many tasks of that sort.




I think the quality of a language for shell scripting is often secondary. What’s of greater significance is where it is at. I.e., is it already installed? The answer with Linux and Bash is almost always “yes”. Not so with ruby.

The moment you start asking the user to install things, you’ve opened up the possibility of writing a program rather than a shell script. The lifecycle of a piece of software is almost always one of growing responsibility. This cycle is devastating when it happens to shell scripts. What was once a simple script slowly becomes a creaking mass of untestable, poorly understood code playing in the traffic of shifting environments (which grep you got, buddy?).

I guess I’m saying that once you open up the possibility of writing a program, you generally take that option and are usually happier for it. In the “write a program” world, ruby is still good, but it becomes a far harder question to answer whether ruby is still the right choice. There are a lot of languages with a lot of features engineers like.


This is indeed why I use Perl over Ruby. As long as it's not for a Windows machine, a Perl script is deployed by copying it over and that's it.


That's true of Python and Perl as long as you keep using only the features built into the core language (standard lib or whatever they call it). The same applies to Ruby.

My scripting language is bash in at least 99% of cases. I used to program in Perl when I needed some complex logic. I stopped using it some 10 or 15 years ago when I switched to Ruby, for two reasons: I became more familiar with it than with Perl, and it's easier to manage data structures and classes whenever I need something complex. That doesn't happen often in scripts but, as I wrote, I use bash for all the normal stuff.

I use Python for the scripts that start an HTTP server because it has the http.server module in the standard lib and it's very simple to write handlers for GET, POST and all the other HTTP verbs. The last example was a script to test callbacks from an API. I just implemented two POST and PUT methods that print the request data and return 200 and a {} JSON. I think that to do the same in Ruby I would need to install the webrick gem.
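
Something like this stdlib-only sketch is roughly what I mean (the handler name and port here are just illustrative, not my actual script):

  # Print incoming POST/PUT callbacks and answer 200 with an empty JSON body.
  from http.server import BaseHTTPRequestHandler, HTTPServer

  class CallbackEcho(BaseHTTPRequestHandler):
      def _echo(self):
          length = int(self.headers.get("Content-Length", 0))
          print(self.rfile.read(length).decode(errors="replace"))
          self.send_response(200)
          self.send_header("Content-Type", "application/json")
          self.end_headers()
          self.wfile.write(b"{}")

      do_POST = _echo
      do_PUT = _echo

  HTTPServer(("", 8000), CallbackEcho).serve_forever()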


In fact, that's true for Python if you use a zipapp and no C extensions: https://docs.python.org/3/library/zipapp.html

You can happily copy the zip of your scripts and all deps to the server.

You still do have to mind your versions, as always with python.
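
For anyone who hasn't seen it, building one is a couple of lines; the directory name, the vendored dependency and the interpreter path below are just examples:

  # Sketch: "myapp/" holds __main__.py plus any pure-Python deps vendored into it
  # (e.g. with pip install --target myapp <deps>).
  import zipapp

  zipapp.create_archive(
      "myapp",                             # source directory
      target="myapp.pyz",                  # single file you can scp to the server
      interpreter="/usr/bin/env python3",  # shebang written into the archive
  )
  # CLI equivalent: python -m zipapp myapp -o myapp.pyz -p "/usr/bin/env python3"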


> The same applies to Ruby.

With a big difference -- Perl and Python will always be installed on these machines, whereas Ruby might need two deployment steps: (1) copy file, (2) install Ruby!


Perl is deeply underappreciated and needs a lot more love. One of the keynotes at the polyglot conference that I run is going to be a Perl talk and I'm really looking forward to it.


Does it still require global library / module installations for your script's dependencies? If so, hard pass.


It does not, and has not for at least a decade!


So any guides on how to make a self-contained Perl script that needs dependencies?


See https://metacpan.org/pod/pp for one tool to do that.


Even on Windows there's a good chance. The Git for Windows project bundles Perl, but not Ruby.


That was the reason Perl was what I switched to from bash when I was working on Solaris boxes; it was miles ahead of what was possible with bash AND it was already present. If I remember, an older version of Python was also installed, but by then Perl had already got me reeled in and I felt Python to be too "verbose" compared to Perl (I eventually changed my opinion when I got a bit more experience under my belt).


Interesting! I still find Python too verbose to stand in for shell scripts when Perl is available, with what I think is a decent chunk of experience.


Ha - I actually haven't changed my opinion about verbosity, Python is still more verbose and I will choose Perl for throwaway scripts even today; I just have a greater appreciation of the readability of Python code compared to the free-for-all style-fest of Perl code (admittedly written by a bunch of devs with little code style enforcement). Perl is great for smaller scripts but I'm talking about many thousands of lines of code, where the lack of native object orientation, messy error handling, lack of a decent REPL, etc. start to take their toll.


Thousands of lines := time to use a real (compiled) language


Very true; unfortunately quite often one is not in the decision maker's seat.


One usually needs modules to easily do something more advanced, but yes, Perl is almost always installed. Although I find Ruby much more ergonomic, I still reach for Perl as well because I know it better and don’t have to open the documentation so often.


> I.e., is it already installed? The answer with Linux and Bash is almost always “yes”. Not so with ruby.

True, it's not true for Ruby, but with Golang and Rust you have an almost-no-dependencies final binary, so the argument does not apply there.

> which grep you got, buddy?

For dev machines it's not such a tall order to require `rg` be installed these days.


One advantage of a scripting language for scripts is that you can read it and see what it does one month later...


Sure, especially every bash script that goes over 200-250 lines is super readable. /s

Or when you have to start using all the combinations of characters to achieve f.ex. proper iteration through an array without word splitting. Etc. to infinity.

I've danced this dance hundreds of times and got sick of it. Gradually moving away from scripts and to Golang programs and so far it has been an improvement in almost every way, I'd say easily in 90% of the cases.


still more readable than disassembler output :)

sure, if it's your "script" and you know exactly the code corresponding to it, it might be ok, but that's a lot of overhead for a disposable script.


Are you arguing against my point or for it? :D


Not once have I worked anywhere where the people writing shell scripts didn't also control all of the boxen those scripts ran on.


I'm glad you never worked at a bank or an insurance company!


Why do you choose to write in a snarky way? Why does that make you glad? Why does this make you energetic?


It's tongue in cheek, and he's right. I am an old-man SunOS/VMS/Linux admin. Having root used to be my god given right.

However I haven't worked at a company in years that gives anyone access to root anywhere except your own local machine or maybe in rare cases a dev box that is destroyed and rebuilt at will.


yea as soon as I read through the post, I ssh'd into one of my many Ubuntu servers, ran `ruby -v` and then noped out. From past experience I want nothing to do with trying to wrangle RVM or rbenv and then making sure the paths work properly.


Nowadays `apt install ruby` on an ubuntu box will give you a reasonably up to date ruby that's more than adequate to run scripts. This is not like the old days where a script written on Ruby 1.8.7 would break on 1.9.


> I sometimes wonder why we don't see ruby used for shell stuff more often.

The reason we don't see Ruby used more for shell stuff is because Python won this particular war. It's already installed on basically every Linux distribution out there, and this simple fact outweighs all other language considerations for probably >95% of people who are writing shell scripts in something that isn't Bash.

Personally, I don't much like Python, and even though Ruby is not my favorite language either, I find it much better than Python for this kind of work. But I don't get to decide how Debian builds their infrastructure, so in the end, I tend to use Python.


Yes, Python won the war, which is a pity. Linux distributions started getting bloated at the same time they switched to Python for everything. Yum hanging inexplicably and such things never occurred before.

The BSDs do not have this problem (yet!). I hope they stay sane and keep using Perl/sh.


Yum hangs not because of Python but because Fedora's RPM metadata is bloated compared to other distros so yum has to load and process much more data.


This is also the reason perl was used before python began to dominate. It was installed everywhere before python was installed everywhere.


this whole argument is silly. In my time on this site, I have seen someone suggest that every language is good for shell scripting including C.

Python and bash are used in the real world most often because convincing your sysadmin/infra/boss guy to install ruby for one script is a hard sell when you already have good-enough tools built into the system that don't add risk/complexity.


How hard is it to install, though? That doesn't sound like a reason not to use it.


If a client has certified a specific Linux distro as an approved platform, that's what we use.

We can either deliver a single executable (Go) or a Python script, as python is preinstalled on their distro.

If we'd want to use Ruby, it'd be a huge hassle of re-certifying crap and bureaucracy and approvals, and in that time we'd have the Python solution already running.


Without a root account or inclusion in the sudoers list, quite hard. There are millions of people who don't control the machines they work on and spend most of their time with.


Significantly harder than doing nothing


Depends on you, your team, your target hardware/os, your project, and many other factors. Considering all of those things, the hurdle of installation might just be too large for it to be worth it.


It's not. This is a non-issue. Every web shop is writing bash to twiddle a build script on servers they also manage, which includes the ability to install any package they want.


How hard is it to install anything? That really isn't the point.


Seems like a lot of comments below and around bring this up as the main point for not using it. Which doesn't make sense to me.


Mitigating the risk of downloading a script from the internet and executing it -- even from a "trusted" website or package manager -- is absolutely a good reason not to use it.


Any decent distro has it. So you don't need to execute any random scripts, just install it or prepare the image with it for your OS install. That's it.

I don't really get this whole thing about defaults being a blocker for tool choice.


And of course, it is impossible to install additional interpreters on the computer.


I never started using python, ruby or node because all of them were a pain to use for me - this was 7-8 years ago, so maybe it has changed a lot. But even 2-3 years ago I had lots of issues just running one python project. Module, not module, pip or not...

Way too confusing, compared to go for example. Or hell, even Java/Kotlin when you use an IDE and it autoconfigures most things.


> It's already installed on basically every Linux distribution out there,

PEP 668 pretty much negates this though. To do anything you need a python environment set up per script/project w/e


Iff you’re going beyond stdlib. Which lots of useful python programs don’t need to do.


Even if you don't want to limit yourself to the stdlib, you can still use a zipapp : https://docs.python.org/3/library/zipapp.html


You then need to introduce a build and release process to do this, which still detracts from it being simple, or from the selling point being that it's already installed.


With cpan it's the same, you need to install deps somehow.

You don't need it if it's just a python script with stdlib, just like with raw perl.


Python ships with venv support. It’s not that difficult to bootstrap a venv before running your script, and that’s only if you actually need tooling other than stdlib, which you probably don’t.
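
A rough sketch of what that bootstrap can look like (POSIX paths, and the pyyaml dependency is just an example):

  # If not already inside a venv, create one next to the script, install the
  # example dependency, and re-exec the script with the venv's python.
  import os, subprocess, sys, venv

  VENV = os.path.join(os.path.dirname(os.path.abspath(__file__)), ".venv")

  if sys.prefix == sys.base_prefix:  # not running inside a venv yet
      if not os.path.isdir(VENV):
          venv.create(VENV, with_pip=True)
          subprocess.check_call([os.path.join(VENV, "bin", "pip"), "install", "pyyaml"])
      py = os.path.join(VENV, "bin", "python")
      os.execv(py, [py] + sys.argv)

  import yaml  # third-party imports are safe from here on
  print(yaml.safe_dump({"bootstrapped": True}))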


It's definitely clunky and tedious to switch between every project's or script's environment.

Idk why people are pretending there aren't tons of useful libraries out there. Like if you want to script anything with AWS, or use YAML.


There are plenty of ways to have the venv automatically activate (and de-activate) when you enter/leave the directory for the project. direnv [0], mise [1], or various shell hooks.

There are useful libraries, I’m not saying there aren’t. I just dislike it when people include one as a dependency when they really didn’t need it.

[0]: https://github.com/direnv/direnv

[1]: https://github.com/jdx/mise


I think golang is used because you can easily create a single static binary, which is incredibly easy to distribute. I often find non-trivial CLI tools written in Python cumbersome because of the dependency wrangling necessary.


I think one of the advantages of a script is that you can quickly check what it is doing by simply opening it - an executable won't afford that.


Plus it can be run on any machine, while golang needs to be compiled for the specific architecture you'll be running it on. No messing about trying to get the right build.


Go is child's play to build (and cross-compile).

Simpler than having to worry about Python versions, let alone dependencies.


I actually think it's less of a problem than many imagine. If you have different architectures it actually is better and more predictable because it's compiled; also it's incredibly easy to compile, even for noobs.


I spent a weekend going through all my old python scripts with Gemini and ChatGPT, rewriting them to Go just because of this.

Most of them were so old that I would have had to skip like 3 generations of package managers to get to the one that's used this year (dunno about next year) if I wanted to upgrade or add dependencies.

With Go I can just develop on my own computer, (cross)compile and scp to the destination and it'll keep working.


> I often find non-trivial CLI tools written in Python cumbersome because of the dependency wrangling necessary.

I'm thinking of trying out Mojo in large part because they say they're aiming for Python compatibility, and they produce single-file executables.

Previous to that I was using PyInstaller but it was always a little fragile (I had to run the build script a couple of times before it would successfully complete).

Currently I'm using pipx and Poetry, which seems pretty good (100% success rate on builds, and when my 5-line build script fails it's because of an actual error on my part).

Which is a round-about way of asking everyone:

Does anyone have any other good way(s) to build single-file executables with Python?


Fun fact: you can use the D language for compiled scripting via rdmd, with powerful and modern programming features, and it has much faster compilation than comparable C++ and Rust [1]. The default GC makes it intuitive and Pythonic for quick scripting, more so than Go. Its recent native support for the OS lingua franca, C, is the icing on the cake [2].

From the website, "D's blazingly fast compilation allows it to be used as a high level, productive scripting language, but with the advantages of static type checking" [3].

[1]Why I use the D programming language for scripting (2021):

https://news.ycombinator.com/item?id=36928485

[2]Adding ANSI C11 C compiler to D so it can import and compile C files directly:

https://news.ycombinator.com/item?id=27102584

[3] https://dlang.org/areas-of-d-usage.html#academia


Scriptisto is an underrated tool: https://github.com/igor-petruk/scriptisto

It can do the Python venv stuff behind the scenes for you and it just looks like a single Python file.


You could try Nuitka [1], but I don't have enough experience with it to say if it's any less brittle than PyInstaller.

[1]: https://nuitka.net/


I've been waiting for a single executable interpreter for Ruby for a while now

like deno or bun, but for Ruby

artichoke ruby is the closest we've got


Bootstrapping and different behavior for different versions and not being able to use the dependency ecosystem really make it a lot more difficult than people realize if you’re trying to “script” at scale.

I’ve used rust for this task but people get mad that I’m calling it a “script”. “That’s not a script, that’s a program” which… sure. So maybe we need another term for it? “Production-scripts” or something.

My experience is rewriting Ruby and bash buildpacks for the open spec CNCF Cloud Native Buildpack project (CNB) https://github.com/heroku/buildpacks

I agree that Ruby is easier to start and grow complexity, that would be a good place to start.


This complaint comes up enough that I'm surprised nobody's created the Ruby equivalent of GraalVM, to compile a Ruby script, all its deps, and a WPOed subset of the Ruby runtime, into a native executable.


Given Ruby's lackluster type system, the number of assumptions a compiler can make is significantly reduced. Moreover, analyzing that at a simple IR level would prevent it from understanding what is actually referenced vs what isn't, short of making compile times take an eternity and performing complex flow analysis with simulation.

Even with a good type system, a trimmer/linker has to be enlightened about many special idioms and patterns and perform flow analysis, and in the case of dynamically typed languages or languages with reflection - to analyze reachability of reflectable members and trim otherwise spacious metadata. It took a significant amount of work in .NET's ILLink for it to be able to produce binaries/libraries as compact as it does today with .NET's AOT build target, and it still required a degree of metadata compression and dehydration of pointer-rich data structures (something which, funnily enough, Go doesn't do, resulting in worse binary size).


It's not quite what you're describing, but TruffleRuby is Ruby on GraalVM:

https://github.com/oracle/truffleruby

Unlike GraalVM Java, as far as I can tell TruffleRuby doesn't provide a bundler that can create a single executable out of everything, but in principle I don't see why it couldn't.


Worth noting that the GraalPython implementation does support creating a single binary.

https://www.graalvm.org/latest/reference-manual/python/stand...

I'm not sure I'd try replacing shell scripts with natively compiled Python binaries. That said, I use a Kotlin Scripting based bash replacement in my own work that has many useful features for shell scripting and is generally much more pleasant. You have to "install" it in the sense of having it extracted somewhere, but it runs on Win/Mac/Linux and can be used without root etc.


I wasn't so much imagining each shell script being replaced with a binary, as I was imagining deploying a single static binary "sysadmin-DSL interpreter binary" — where that "interpreter" is just "Ruby with a custom prelude, all packed together" — such that I could then name that interpreter in the shebang line for my admin scripts written in that DSL.


Graal can create an executable from a ruby program as well with TruffleRuby and native image (both part of the general graal project).


WPO?


While Program Optimization, in this case mostly meaning dead-code elimination for any runtime code not called by the Ruby code.


I think you mean Whole Program Optimization.


The ability to type check and unit test your code is also valuable. This is possible with many languages but with Go it requires basically zero configuration.


Bundle it into a pex and distribute that. It's still way large but it's easy to distribute.


Pex was also the solution I landed on after evaluating several non-container options for distributing a Python project to arbitrary Linux hosts.

It works well but with one huge caveat: although you bring the stuff required to reconstitute the venv with you, you’re actually still using the system’s python executable and stdlib!! So for example if you want to make a project targeting all supported Ubuntu LTS versions, you have to include the wheels for every possible python version you might hit.

Ultimately this boils down to there not really being a story for statically compiled python, so in most normal cases you end up wanting a chroot and at that point you’re in a container anyway.


Nuitka has worked for me for everything I've tried (in-house dev tools). I didn't end up using it for work because I can rely on a pristine system Python with the right version, so pex makes more sense.

There are other options I didn't look too much into, e.g. BeeWare.


I wish an easy cross-platform PEX or shiv [1] were a thing. Binary dependencies are the biggest reason I prefer the new inline script metadata spec (https://packaging.python.org/en/latest/specifications/inline...) and `pipx run`. Luckily, they're pretty great. They have changed how I write Python scripts.

The way inline script metadata works is that your script declares arbitrary dependencies in a structured top comment, and a compliant script runner must provide them. Here is an example from a real script:

  #! /usr/bin/env -S pipx run
  # /// script
  # dependencies = [
  #   "click==8.*",
  #   "Jinja2==3.*",
  #   "tomli==2.*",
  # ]
  # requires-python = ">=3.8"
  # ///
pipx implements the spec with cached per-script virtual environments. It will download the dependencies, create a venv for your script, and install the dependencies in the venv the first time you invoke the script. The idea isn't new: you could do more or less the same with https://github.com/PyAr/fades (2014) and https://github.com/jaraco/pip-run (2015). However, I only adopted it after I saw https://peps.python.org/pep-0722/, which PEP 723 replaced and became the current standard. It is nice to have it standardized and part of pipx.

For really arbitrary hosts with no guarantee of recent pipx, there is https://pip.wtf and my venv version https://github.com/dbohdan/pip-wtenv. Personally, I'd go with `pipx run` instead whenever possible.

[1] I recommend shiv over PEX for pure-Python dependencies because shiv builds faster. Have a look at https://shiv.readthedocs.io/en/stable/history.html.


All else being equal I’d probably prefer Poetry for the broader project structure, but that would definitely be compelling for single-script use cases.


You can also combine the two. Something I have done is script dependencies in inline script metadata and dev dependencies (Pyright and Ruff) managed by Poetry.


Most shell-ish scripts probably use no dependencies and will not be picky about exact version


Mmm. I’d argue that all shell scripts use a ton of dependencies that are different across Unix/Linux.

I.e. ‘sed -i’ is only in GNU sed. Same with ‘grep -P’.


I took my parent post as meaning the app language dependencies. Pip packages or ruby gems or "must be exactly python 3.12"


Correct, but if you're in a situation where this is an issue you probably know about it and can use the POSIX versions, which are more portable.

Otherwise nobody thinks of it because most likely it is not being distributed.


This has bitten me many times


It’s why if my bash turns into more than a page or two I start re-evaluating it and turn it into python


Tons of scripts rely on coreutils (sed, awk, grep, head) to manipulate data.

All of those have wildly different behavior depending on their "flavors" (GNU vs Busybox vs BSD) and almost all of them depend on libc being installed.


That's not my experience at all. Shell is often glue between different utilities and unless it's being run in a controlled environment like a docker container, you have no idea what's on the base machine.


I only touch Go when using tools CNCF projects decided to write in Go.

Other than that, OS scripting is done in traditional UNIX tools, or PowerShell.


Golang's not so secret weapon


Hard to miss those >100 meg 'single binaries'




Every compiled language can do it until you run into issues with glibc vs musl or openssl version or network stack defaults and remember you weren't as static as you thought.


> ... maybe extract those files to required files, add gems, whatever.

CPAN is the killer feature of Perl. It just works. First off, most of the time I don't need a CPAN module for doing shell scripting in perl. Perl itself is rich enough with the file manipulations that are needed for any script of less than 100 lines.

My experiences with Ruby and installing gems have been less pleasant. Different implementations of Ruby. Gems that don't compile / work on certain architectures. Breaking changes going forward, where a script that was written 2 years ago doesn't work anymore. Sometimes it's that someone was doing something clever in the language that doesn't work anymore. Other times it's that some gem got updated and can't be used that way anymore. ... which brings us to ...

I believe that Go's advantages come into play when the program gets more complex than that 100-line size and it becomes a "program" rather than a "script" that has complexity to deal with. Furthermore, executables built in Go are most often statically linked, which means that someone upgrading the libraries doesn't break what is already working.


Does anyone under age 50 or so even know Perl?


Make it 30 and it's actually a question.


I'm not even 40 and I remember it well enough.


Yes.


Do you all order the same thing when you get together at the cafe?


Instability. Ruby has not been the same language for very long. Migrating to 1.9 was a huge hassle for many firms. This may seem like a long time ago in tech years; but then there was Ruby 2.0; and shell scripts, meanwhile, have stayed the same the whole time.

A secondary reason is that Ruby has been very slow for much of its life, which means that for situations where you need to run a huge stack of scripts -- init systems, for example -- it would be punishing.

Ruby does have a terse and intuitive syntax that would make for a good system shell. Although it has some magic, it is less magical and confusing than shell itself. Ruby provides many basic data types that experience has proven are useful for shell scripting -- like arrays and dictionaries -- and they are integrated in a much cleaner and clearer way than they are integrated into widely used shells like Bash.

System tools that are written in Go may still make sense to write in Go, though. Go, it is true, does not have a nice terse syntax for short scripts and one liners; and it doesn't have a default execution model where everything is in main and so on; but that is because it is not a scripting language. Other languages used to write system tools and system services -- like C, C++, Java and Rust -- don't have those things either.


> Migrating to 1.9 was a huge hassle for many firms.

This seems contrary to my experience. We took a large project from 1.8 to 1.9 to 2.0 to 3.0, and it was much easier than we expected. It was a lot easier than our Python 2 to 3 conversions were.


> It was a lot easier than our Python 2 to 3 conversions were.

Python's is (present tense very much intended) notoriously one of the worst-managed transitions in programming language history, so that's not exactly a ringing endorsement.


It is completely relevant, because it’s arguing against the claim that Ruby was unstable once back in 2005.


I didn't say it wasn't relevant, I said it wasn't a ringing endorsement. It can have been better than Python and still completely unbearable.


The comparison to Python isn't the relevant one. During that time, what was it like to migrate from shell to...probably the same shell?


Your first two points don’t seem valid, in my experience.

The Ruby 2.0 migration wasn’t that interesting from a compatibility perspective; it certainly wasn’t anything like Python 2 -> 3.

And Ruby is __not__ slow compared to bash. I don’t know where these myths get started, but someone needs to justify the Ruby-is-slow thing with actual data.


> I don’t know where these myths get started, but someone needs to justify the Ruby-is-slow thing with actual data.

As an outside observer of the Ruby world, I have an impression that it was Ruby MRI that was slow. CPU-bound synthetic benchmarks like the much-criticized Benchmarks Game showed Ruby ≤ 1.8 a good deal slower than CPython 2. Here is an illustrative comment from that time: https://news.ycombinator.com/item?id=253310. People also complained about early Rails, and the perception of Ruby's performance got mixed up with that of Rails.

Then YARV came out, and Ruby became several times faster than its MRI former self on different benchmarks (https://en.wikipedia.org/wiki/YARV#Performance). With YARV, Ruby gradually caught up to "fast" interpreted languages. Now interpreted Ruby seems as fast as CPython 3 or faster in synthetic benchmarks (for example, https://github.com/kostya/benchmarks/blob/7bf440499e2b1e81fb...), though still behind the fastest mainstream interpreters like Lua's. Ruby is even faster with YJIT.


> much-criticized

"There are only two kinds of languages: the ones people complain about and the ones nobody uses."

> showed Ruby ≤ 1.8 a good deal slower than CPython 2

It's taken me a couple of days to remember web.archive.org :(

CPython 2.5 vs ruby 1.8.6 (2007-03-13)

https://web.archive.org/web/20070219190706/http://shootout.a...




I'm not sure why people keep comparing to Python. Python is not replacing shell either.


Alternatively, you can use Crystal instead of Go. Its syntax is almost Ruby's, except mostly for some typing. The standard library is also similar. Binary size and speed are also Go-like.


Oh, and there is a new interpreter, although it's probably not production quality yet.


> Ruby does have a terse and intuitive syntax that would make for a good system shell.

I learned enough PowerShell to be comfortable using it, and then picked up Bash and Ruby a few years later.

I longed for a Ruby shell for a couple years.


Free name idea for whoever makes the Ruby Shell: "Rubish"


Mentioning 1.9 migration and ruby being slow? Python 2 to 3 was waaaaay worse and more negatively impactful, and equally slow (slower in most cases).

Ruby never had the US market penetration of perl or python, which were basically invented in the US and congregated people from the academic realm. These things aren't decided based on meritocracy (nothing ever is).


> Mentioning 1.9 migration and ruby being slow? Python 2 to 3 was waaaaay worse and more negatively impactful

Python 2-to-3 was mainly worse than Ruby 1.8 to 1.9 because Python had already won, and had a much bigger and more diverse ecosystem.


That's disingenuous. Python 3 was released around 2008, when Rails' popularity was still rising. The community refused to upgrade for at least 10 years, and several prominent libraries took as long to provide first-class Python 3 support.

Ruby 1.8 to 1.9 migration was by contrast way milder. It took 6 or 7 patch releases and around five years to release 1.9.3, the first from the 1.9 series people actually considered stable, but after that the community migrated because it was *significantly* faster than 1.8. Python 3 on the other hand was slower overall than Python 2 at least until 3.6. The fact that the community stuck with Python through it all does say a lot about human psychology and the sunk cost fallacy.


Compare Ruby to shell, not Python, when considering what I wrote above.


> I sometimes wonder why we don't see ruby used for shell stuff more often

The best piece of code that I worked on was an ETL in pure Ruby. Everything in modules, simple to read, no crazy abstractions, no strange things like __main__, abstract classes or whatever.

Maybe others can chime in, but the main difference I find in ruby developers is that they really have fun with the language, making everything with a higher level of software craftsmanship than other folks in the data space, e.g. Python or Julia.


It wasn't that long ago that all the interesting infrastructure projects (vagrant, chef) were written in Ruby.


I'd argue that writing Chef in Ruby (and Erlang) was absolutely to its detriment. Yeah, it was popular. It was also a debugging and scaling nightmare (not that Opscode helped that any).

In fact one of the reasons I rage quit megacorp for a second time was that I was required to use an Enterprise Chef instance that would log people out at random every 0-3600 seconds. I could throw plenty of deserved shade at my coworkers but Opscode didn't understand their product any better and I wasted more than enough time on conference calls with them.


I love ruby, and I've been using it for 18 years, but I spent half a year on chef a decade ago and it was one of the worst wastes of time I've ever had. Nothing to do with the language, everything to do with the architecture of the thing.


Shopify and GitHub are still mostly Ruby, right?


Yes. Shopify, GitHub, Stripe, GitLab and more, but in large part due to Rails, not Ruby specifically (although Ruby is one of the great things about Rails).


fluentd today is a popular (most popular?) log collector in k8s land.


A lot of people use fluentbit specifically because fluentd doesn't perform well.


Wow didn't know fluentd is a Ruby production. Who said Ruby is slow?


It still is a slow language that does not offer anything over competitors that are an order of magnitude faster to justify its performance characteristics.


You missed out GitHub!


You can kind of figure it out by skimming the comments here. Most mainstream languages have decent-to-great tools built in for scripting, so the difference isn't that huge. So people just prefer to script in the language they already prefer in general, or that the project is written in.


> meanwhile, a lot of tooling nowadays is written in Go, and I have no idea why

No-dependencies final static binary.

> it's not friendly for os manipulation at all

If you say so. I'd love to hear how you got to that conclusion.

> and number crunching power is not needed in many, many tasks of that sort.

You are aiming at the wrong thing: it's about startup time. I got sick of Python's 300+ ms startup time. Golang and Rust programs don't have that problem.


> why we don't see ruby used for shell stuff more often

Simple, ruby is not installed by default. Even Python, while it is on (almost?) all modern Linux distributions, is not installed on the BSDs.


Maybe this is something we need to deal with no matter what we are using (except perhaps if we use sh).

Alpine for example doesn't ship Bash. Mac OS ships Ruby and its Bash is quite old.


Mac stopped shipping Ruby some time ago.


Can't you just install it?


Python's also not installed by default in (most?) official docker images.


Docker images fill a different role. They shouldn't have everything installed on them as that broadens the attack footprint. They should be doing one thing, and one thing only. If it's a "run this executable that was built" - then only what is needed should be there.

Installing python and other general purpose tools gives any attacker that gets into a docker container many more tools to work with for getting out.

For docker, the trend isn't "build a general purpose machine" but rather "what can we slim this down to that only has the bare minimum in it?" This can be taken all the way to the distroless images ( https://github.com/GoogleContainerTools/distroless ) and means that the security team won't be asking you to fix that CVE that's in Python that you don't use.

If, however, you do need python in an image because that image's purpose is to do some python, then you can pull a python image that has the proper release.


> meanwhile, a lot of tooling nowadays is written in Go, and I have no idea why, it's not friendly for os manipulation at all

I'm not sure where you're going with this: My experience of Ruby and Go is that:

1. Go is a lot easier to do OS manipulation type stuff.

2. Go is a lot easier to modify down the line.

TBH, #2 is not really a consideration for shell-scripts - the majority of the time the shell script is used to kick off and monitor other programs, transforming an exact input into an exact output.

It's glue, basically, and most uses of glue aren't going to require maintenance. If it breaks, it's because the input or the environment changed, and for what shell is used for, the input and the environment change very rarely.


What tooling do you use that’s written in Go? I’d have said that Python is the most popular language for tooling, by a country mile.

The only tooling I know that’s written in Go is Docker.


lots of CLIs are written in Python, absolutely, but many started more recently are almost exclusively Go unless there is serious interest in using Rust. It's almost certainly the ease of cross compilation plus the ability for users to run it without changes to their system.


This is the key.

I can easily provide precompiled packages for all sane combinations and users can just download one executable, edit the config file and be running.

Instead of having to mess with virtual environments and keeping them updated (they tend to break every time you upgrade the system python version).


Just because you don’t see it doesn’t mean it’s not the most-used shell scripting language. For example, when I was at AWS it was used for templating in something like 90% of all pipeline tooling


> meanwhile, a lot of tooling nowadays is written in Go, and I have no idea why

What? Go is used because distributing a static binary without any dependencies is way better than asking each and every user to download an interpreter + libraries.


So stop using 3rd party libraries. Seriously, the number of times I’ve seen people importing requests to do a single HTTP GET, or numpy to use a tiny portion of its library is absurd. You can do a hell of a lot with the stdlib if you bother to read the docs.
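
Case in point, one HTTP GET with nothing but the stdlib (the URL is just an example):

  # A single GET without importing requests.
  from urllib.request import urlopen

  with urlopen("https://api.github.com/zen", timeout=10) as resp:
      print(resp.read().decode())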


Not using third party libraries does not help against py2->py3 and changes between 3.x point versions.

It's only relatively recently that I could really expect that the target system would have python3, and then I'd also have to deal with some really annoying errors (like python3 barfing on non-ASCII comments when reading a source file with "C" locale, something that used to work with python2 IIRC, and definitely was an issue with "works on my machine" devs).

venvs are horrible, even compared to bundler.

But the python2 era left an imprint on many who think it's just going to be there and work fine.


because there's Python



