
Not to bash <groan> on this, but the real "best" of bash (well, shell scripting in general) is the simplicity with which you can connect processes via pipes. Ruby's metaprogramming fu is strong enough that this should be fairly straightforward to make easy - can we see this please?
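For what it's worth, the process-wiring half is already in Ruby's standard library; a minimal sketch using Open3.pipeline_r (the commands here are just an arbitrary example):

```ruby
require "open3"

# Wire three commands together with real OS pipes -- the Ruby
# equivalent of: printf 'b\na\nb\n' | sort | uniq -c
out, waiters = Open3.pipeline_r(
  ["printf", "b\na\nb\n"],
  ["sort"],
  ["uniq", "-c"]
)
result = out.read
out.close
waiters.each(&:join)

result.lines.map(&:strip)  # => ["1 a", "2 b"]
```

No shell is involved (each command is an argv array), so there's no quoting to get wrong; what's missing is the terse `a | b | c` syntax, which is exactly where the metaprogramming would come in.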



One of my favorite Ruby post series of all time:

Pipelines using Fibers in Ruby (1.9)

http://pragdave.me/blog/2007/12/30/pipelines-using-fibers-in...

http://pragdave.me/blog/2008/01/01/pipelines-using-fibers-in...

I swear it was actually a three-part series, but I can only find two parts at the moment.
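The gist of the series, as I remember it: each pipeline stage is a Fiber that resumes its upstream neighbour to pull the next value. A rough sketch of the idea (the helper names are mine, not PragDave's; nil marks end-of-stream):

```ruby
# A source fiber yields each value in turn, then nil when exhausted.
def source(values)
  Fiber.new do
    values.each { |v| Fiber.yield(v) }
    nil
  end
end

# A transform fiber pulls from upstream, applies the block, and
# yields the result downstream.
def transform(upstream, &blk)
  Fiber.new do
    while (v = upstream.resume)
      Fiber.yield(blk.call(v))
    end
    nil
  end
end

pipeline = transform(transform(source(1..5), &:succ)) { |n| n * n }
results = []
while (v = pipeline.resume)
  results << v
end
results # => [4, 9, 16, 25, 36]
```

The nice property is laziness: nothing runs until the last stage asks for a value, just like a shell pipeline blocking on reads.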


Not bad. The syntax is still a bit funky at the end of part 2, but that's a pretty trivial gripe.


The best thing about bash (and by bash I mean sh if we're going to argue, and bash if we're not) is that it's already on every system: you can just plan on being able to use it with no further work. Pipelining is definitely awesome, but you can make furniture out of glue sticks with enough bash (and all those two-letter commands that aren't bash but always infect the system alongside it).


OK. I'll buy that. I don't necessarily agree, however. It's so trivial to get small programs onto a remote machine that relying on `sh` only because it's there seems like a premature optimization.


It's a tradeoff. There's also nothing stopping folks from rewriting init and rc scripts as Ruby code or (re)writing systemd in Go, Ruby, etc.


Ruby scripts ultimately are processes, and Ruby borrows much from Perl, including `ARGF`. Just don't use the clever Ruby syntax stuff unless you wish to be bashed <groan 2.0> over the head.
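For anyone who hasn't met it: `ARGF` treats the files named in `ARGV` (or standard input, if there are none) as one concatenated stream, just like Perl's `<>`. A self-contained sketch that fakes the command line with a temp file:

```ruby
require "tempfile"

# Pretend "alpha\nbeta\n" was in a file passed on the command line.
sample = Tempfile.new("argf-demo")
sample.write("alpha\nbeta\n")
sample.close

ARGV.replace([sample.path])          # ARGF consumes filenames from ARGV
lines = ARGF.each_line.map(&:chomp)  # => ["alpha", "beta"]
```

That's most of what you need to write well-behaved Unix filters in Ruby without any clever syntax at all.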

It becomes more of a "what should this program consume, do, and output?" problem... Bash, Ruby, whatever... but aim for some standards, in general.

Furthermore, Ruby is a dynamic language with full network capabilities and add-on packages (gems) for other IPC mechanisms (zmq), so structured, efficiently serialized producer/consumer configurations are possible beyond plain shell commands. There are also background job frameworks like Sidekiq and parallel processing libraries like Celluloid, parallel, and EventMachine.

The future, really (for dev and/or ops tasks), is in on-demand-loaded microservices rather than giant, slow frameworks. The glue (say, Chef code) and the one-offs will likely still be a bunch of Ruby code, and hopefully more Ruby code, respectively.


Ironically, if the given example were written in pure bash, it would be violating best practices. You don't parse `ls` output line by line, because you can't assume that filenames map one-to-one onto lines.

Taking this contrived example at face value, IMHO, I'd instead credit the "best of bash" for this particular task to file globbing.
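The same point carries over to Ruby, incidentally: `Dir.glob` returns one filename per array element, so names containing newlines survive intact where parsing `ls` output line by line mangles them. A sketch:

```ruby
require "tmpdir"

count = Dir.mktmpdir do |dir|
  # One well-behaved name and one with an embedded newline.
  File.write(File.join(dir, "plain.txt"), "")
  File.write(File.join(dir, "with\nnewline.txt"), "")

  # Globbing yields one entry per file, however weird the name;
  # counting lines of `ls` output here would report three files.
  Dir.glob(File.join(dir, "*.txt")).length
end
count  # => 2
```

In bash the analogue is `for f in *.txt`, which has the same one-entry-per-file guarantee.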



