XXH: Bring your favorite shell wherever you go through the SSH (github.com/xxh)
120 points by crummy on Dec 14, 2021 | 49 comments



This tool demonstrates that POSIX shells are at the wrong layer of the stack.

The shell should be part of the user interface (e.g. the terminal emulator or console), not a program running on the remote system.


I don't think so. As much as I like oh-my-zsh, I wouldn't want its massive code base and attack surface to be forwarded to sensitive or critical servers.

Shells require integration with the rest of the system; for example, they automatically load completion scripts depending on which programs are available. That's difficult to do when you don't know what kind of system you're remoting into. You can't reuse the same configuration for remoting into an old Ubuntu 12.04 machine and for remoting into a Windows 11 PowerShell prompt without losing most of the functionality you want out of a more-than-basic shell.
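
For instance, the usual pattern in rc files is to load completions only for tools that actually exist on the machine; a sketch assuming bash and kubectl (which can generate its own completion script):

    # only wire up completions if the tool is present on this box
    if command -v kubectl >/dev/null 2>&1; then
        source <(kubectl completion bash)
    fi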

If you want a unified shell, you'd probably want to set up a layer between the user interface and SSH (on both sides) to make this possible.


> I wouldn't want [oh-my-zsh's] massive code base and attack surface to be forwarded to sensitive or critical servers.

I think we agree here. I said "The shell should [not be] a program running on the remote system."

> You can't reuse the same configuration for remoting into an old Ubuntu 12.04 machine and for remoting into a Windows 11 powershell prompt without losing most functionality you want out of a more-than-basic shell.

Why not? Why couldn't a single shell program understand PowerShell objects, Windows and Linux environment variables, and Bash/Fish/Zsh completion scripts?


I suppose it could, but you'd be building several shell languages into a single program to support it, including support for the necessary aliasing and macros that system shells (ab)use for system configuration.

I think the system you propose would require a fat server to properly serialize the necessary contents back to the user's shell. In effect you'd be writing a replacement for an ssh server with deep integration into whatever shell the user is running locally. The alternative, constantly dumping the shell state and interpreting it, would be easy to get wrong, and desyncs of local and remote state could have a severe impact on the commands you run.

It's not technically impossible, just very impractical. I think it also goes against the philosophy of "do one thing and do it right" because of all the moving parts.


> including support for the necessary aliasing and macros that system shells (ab)use for system configuration

I think we could get away without system-configured aliases and macros.

> I think the system you propose would require a fat server

Yup, this sounds about right.

> I think it also goes against the philosophy of "do one thing and do it right" because of all the moving parts.

Agreed. But Clang+LLVM and Visual Studio Code show that discarding this philosophy can lead to a successful project anyway.


It is part of the user interface of the server as it is.


I think what the commenter is saying is that you shouldn’t “remote in” to a shell; you should use your regular old shell and have it send commands to a server and display whatever it receives back to you. The shell should be an abstraction over a computer; it shouldn’t be integrally tied to any particular computer, as shells are today.


But you can't move the shell across layers. It's needed where it is.

The point of the shell is that it's how you issue commands to the computer. You (or GP) want to make the commands independent of the shell. OK. How do you issue your new shell-agnostic commands? How is the computer supposed to understand them?

For that to happen, you'll need to implement a shell in the exact place where you just removed the first one.


I implemented that as a joke, but basically it can work like this: https://github.com/viraptor/libremotec

The app captures syscalls working on FDs and forwards them to the remote side. You could do that in much more clever ways. You can also do it in hacky, simple ways, like Ansible does, by just running each command remotely.
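
The hacky variant is little more than a wrapper; a sketch (hostname illustrative, and quoting caveats apply since ssh re-joins the arguments):

    # run each command on the remote side; pipes and editing stay local
    remote() { ssh devhost -- "$@"; }

    remote uname -a
    remote cat /var/log/syslog | grep -i error   # grep runs locally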


A shell has three parts:

* the command editor
* the command interpreter/parser
* the fork()/exec() calls

I think you're talking about the fork()/exec() calls. I'm primarily talking about the command editor.
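
The split is visible in miniature in bash itself, where read -e is the line editor, eval is the interpreter/parser, and the launched command is the fork()/exec() (a toy sketch, ignoring job control and quoting edge cases):

    while IFS= read -e -r -p '$ ' cmd; do
        eval "$cmd"
    done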


OK, so on the server we have a protoshell running which accepts commands and makes system calls.

(I don't think your three-part listing makes sense: commands are the input to the shell, and system calls are the output. But you can't separate them -- without input, what output are you producing?)

And now, on the client, we have a command editor running which accepts keystrokes and produces commands.

But the client runs on a different machine. It doesn't know what commands the protoshell over on the server can accept. How does it know what commands to produce?


> The shell should be an abstraction over a computer

Except the shell itself is already an abstraction over a computer: it encapsulates the system APIs.


You can do that by running “ssh … -- your-command some-arg other-arg | other-command-locally”

It only gets hokey when you reference files as it’s all remote.
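
A concrete instance of the pattern (host and service names illustrative):

    # journalctl runs remotely; the filtering and paging run locally
    ssh devhost -- journalctl -u nginx --since today | grep -i error | less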


...which of course is an option that Unix provides. "ssh <hostname> <command>" does exactly this.


... while sacrificing features like auto-complete, shell awareness (Git branch and other doodads people love to put in their prompt), and persistence (screen/tmux/mosh).


Why? You have the browser for such things. In UNIX, a shell is a shell, a console is a TTY (that's where the output of a shell mostly goes), and a terminal emulator is a TTY emulator.


> You have the browser for such things.

I think the use case of a browser and a shell is quite different. I don't know what a browser would look like if it had an easily-scrollable log like a terminal does.

I think you have a good point about a browser being a GUI version of the shell. I'm not proposing a GUI shell, though, but a shell which lets me run TUI/console programs.


> program running on the remote system.

Simple is better than complex.

Sure, the shell might have looked amazing, like a browser, but why?


Plan 9 achieved exactly this, and with a far simpler design.

Backwards compatible is better than simple.


> Sure the shell might have looked amazing like a browser but why

So you don't need to copy your config to the remote server, like XXH does.

So you can still edit your command when your wifi gets spotty.


Hardly more complex. Imagine if the operating state were expressed in some common data format (just imagine tables), and the shell were then my local preference for interacting with that data model.

Is that less or more complicated than the mishmash of oddly designed kernel interfaces we have today? (fork, mmap, ptrace... so much of it is a mess.)

Not only would that clean up the lot and allow for a very straightforward remote indirection, it would open up all kinds of lovely usages: trivially capturing and replaying command streams, interposition, general-purpose rewrites, a well-defined domain in which to upgrade and downgrade the interface schema, and more things I'm sure I've never thought of.


And imagine that you have a bug, and that you trash the filesystem remotely while on your end everything looks OK. Nobody forces you to use a shell with ssh. You can use X, or Wayland (when or if it ever supports such an "outdated" feature).


Portability of shells/binary injection and the associated layers often come with severe vulnerabilities that go officially undiscovered for years. Convenience is frequently the reason new vulnerabilities are introduced. Cool project for hobby purposes, but be cautious when using this sort of thing on production systems, especially those housing data of any importance.


At my company this would be auto-flagged and you’d have the intrusions team on your ass inside of 5 minutes. :D


If they don't want you to fling binaries into your home directory, they should just mount it noexec. At any rate, it doesn't seem like a fantastic security/ability-to-work trade-off. Limit the privileges of the account, or you're one exploit away from trouble.
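
A noexec home mount is a one-line change in /etc/fstab, e.g. (device and options illustrative):

    /dev/mapper/vg0-home  /home  ext4  defaults,nodev,nosuid,noexec  0  2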


Many redundant layers are used to prevent and/or detect security violations, but “don’t install random binaries on prod machines” is a basic tenet. If you need certain tools to do your job, deploy them via proper controlled and audited mechanisms, not via scp.


“But I saw it on Hacker News and installed it via Homebrew.” /s


(No relation to XXHash (XXH32, XXH64, XXH128, XXH3): https://github.com/Cyan4973/xxHash )


If your favorite shell is Emacs, it already supports remote access using TRAMP.


I assume you're talking about eshell and not vterm.


Both of those, and also regular M-x shell, work remotely with TRAMP.



So not my favorite shell, just a different build of zsh than what is available locally...


This seems cool, but I'm disappointed that it just sends a program to the other side rather than rewriting commands and operations to sh syntax.


Nice, I did a similar hack some time ago: pack your favorite functions/aliases in base64, unpack them through a mkfifo on the server, write that fifo path to ENV, and call "exec bash --posix". zsh and ksh also work, as they also have options that make them read ENV.
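
Roughly like this, as a sketch (file name illustrative; GNU base64 needs -w0 to avoid line wrapping, macOS base64 doesn't wrap by default):

    payload=$(base64 -w0 < ~/.my_aliases.sh)
    ssh -t host "f=\$(mktemp -u); mkfifo \"\$f\"
      { echo $payload | base64 -d > \"\$f\"; rm -f \"\$f\"; } &
      ENV=\$f exec bash --posix"

The background writer blocks until bash, started in POSIX mode, opens the file named by ENV, which drains the fifo and loads the aliases.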


Is there a description somewhere as to what it's doing under the covers?

A simplistic version could be just to ship a static binary of your shell over, like:

    tar -cf - ./bash-binary | ssh $host 'tar -xf -; ./bash-binary -c "echo hello world"'


From the wiki, essentially that:

https://github.com/xxh/xxh/wiki#how-it-works


yea

why would you tar up a single file w/o compression, though?


It doesn't seem to work as I expected - I run xxh user@myhost +s zsh. I just get a blank zsh prompt without any of my dotfiles or anything. Is it supposed to be configured somehow beyond this?


It seems they have prerun commands to copy your dotfiles, but you have to specify that you want them. There's an example in the README.


When would you use this?

Sounds like an awful way to distribute and run software...


Lots of reasons this can be useful.

At my company we sometimes have to SSH into our dev nodes to restart things, figure out what's going wrong, etc. But we wipe them out nightly to update the code.

This tool would let you keep all your special aliases for working on the nodes without manually copying your config(s) over.

Can't say I'd use it, but I can see use cases like the above.


Why not just have the instance run a script to fetch a new .rc file from a local repo? Or better yet just include the aliases in the profile/rc file on the image itself.
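
E.g., as a sketch (repo URL illustrative):

    # run from the image's profile script
    [ -d ~/.dotfiles ] || git clone --depth 1 https://git.example.com/me/dotfiles ~/.dotfiles
    . ~/.dotfiles/aliases.sh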


The instance would need to have extra credentials for that I suppose, and you'd need the repo up somewhere, then track everything with git.

Lots of engineers are also using the nodes, so you'd need to share that repo with them then have some command you type in each time to load it after you ssh into the node.

And you'd only get an updated version of your rc every night, unless you want to commit a change, push it, go to the server to pull it down, and reload the shell.

I guess what I'm saying is this tool fills a niche and avoids all the above complexity just for some shell aliases.

(btw I don't really have a use for it, I just have 2 or 3 commands to copy paste into the shell for 99% of what I need to do. Just maintaining the argument to hear alternatives from people!)


As long as the commands are not secret, you could have your terminal fetch them from a web server after logging in via the console, and source them only if the checksum of the file matches an expected value. If the checksum does not match, either you updated the commands without updating the expected hash, or the file server has been compromised.
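
A sketch of that (URL and hash are placeholders; assumes curl and sha256sum):

    url=https://example.com/aliases.sh
    expected="PUT-KNOWN-GOOD-SHA256-HERE"   # placeholder for the expected hash
    tmp=$(mktemp)
    curl -fsSL "$url" -o "$tmp"
    actual=$(sha256sum "$tmp" | cut -d' ' -f1)
    if [ "$actual" = "$expected" ]; then
        . "$tmp"                            # checksum matches: safe to source
    else
        echo "checksum mismatch: refusing to source" >&2
    fi
    rm -f "$tmp"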


For every use case I can think of, there are several better ways of achieving the same goal.


What does it have to do with distributing or running software? What did I miss from the README?


It's a program whose only purpose is to download/build some executable files, transfer them to a remote host, and then execute them. That would seem to encompass both "distributing" and "running" software.


I only got the "transfer them to a remote host, and then execute them" bits, my bad. Probably because I see no reason for the first two steps.



