SSH tip: Automatic Reverse Tunnels for Workflow Simplification (codysoyland.com)
89 points by tswicegood on June 6, 2010 | hide | past | favorite | 19 comments



I might be a fossil, but this is what I use screen/sz for, because it works even when I don't have an SSH daemon running on my local machine.

Configure screen with

zmodem catch

and then ssh to the remote server from inside screen. Once you've found the file you want, run sz <name of file> on the server. Screen sees the ZModem transfer and asks you where to store the file.


Or you can just use the SSH filesystem:

# apt-get install sshfs

# sshfs server:/remote_dir /local_dir


Very inefficient for commands such as find.


Assuming one is using a Linux client (or possibly one of the BSD flavours), then yes. For other systems reverse tunnels may be the only viable solution - apart from the logout+scp.

I haven't tried it, but this may work with rsync as well.


I use sshfs with Back In Time to manage my backups. Works like a charm!


Same but without having to edit your .ssh/config:

  ssh -R localhost:2222:localhost:22 remote
  scp -P 2222 /path/to/file localhost:~
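For reference, the persistent ~/.ssh/config equivalent looks roughly like this (a sketch; "remote" stands in for whatever Host alias you use):

```
# ~/.ssh/config -- request the reverse tunnel on every connection to this host
Host remote
    RemoteForward 2222 localhost:22
```

With that in place, a plain `ssh remote` behaves like the `-R` invocation above.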


...and if you forget to enter the setup string during the initial connection, you can add it at any time during your ssh session into the remote system: type ~C (ssh's escape sequence) to get an ssh> prompt, then enter the setup string. So, with the port settings above, when you find your file, you just type ~C and then:

  -R 2222:localhost:22

Then you can issue the scp command just as above, or, if on Solaris (or using an old OpenSSH on "remote"):

  scp -o Port=2222 file.tgz localhost:


Is there any way to get this transparently persisted to all of your SSH connections, no matter how deep you tunnel? Something like the way $DISPLAY is transmitted? That would be amazingly useful if so. I tend to do a lot of work bouncing around from one host to another and the ability to quickly "jump back" and reference my starting point would be incredible.


Nice tip! One small problem: if I set up the RemoteForward in .ssh/config and open two shells to the same host, the second shell complains that it can't set up the remote forward (because the first shell has already set it up). Of course the forward doesn't need to be established twice; is there any way to get the port forwarding set up once, and only once, for a given host?


You can get this as a side effect of multiplexing connections, see ControlMaster in ssh_config. Multiplexing connections also makes opening a second connection to the same host much faster, so it's usually worthwhile on its own.
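A minimal ssh_config sketch of that (the ControlPath location is a matter of taste; the commented ControlPersist line, which also addresses the lingering-first-connection issue below, only exists in later OpenSSH releases):

```
# ~/.ssh/config -- multiplex all connections to a host over one master
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    # ControlPersist 10m   # newer OpenSSH: keep the master alive in the background
```

The first connection becomes the master; later ones reuse its socket, so the RemoteForward is only requested once.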

(Note that it's not perfect: in particular, the first connection to a host remains open until all connections through it have been closed. This is being worked on, IIRC.)


bnoordhuis posted a scriptable solution. You could write a wrapper script for SSH that passes arguments straight through, but first checks for your port forward.

Or, just write a script that forwards the port and run it before you start working.
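A sketch of such a wrapper as a shell function (the port, the name `sshr`, and the use of `ssh -O check` against a ControlMaster socket are all assumptions, not from the thread):

```shell
# sshr: like ssh, but requests the reverse forward only when no master
# connection to the host exists yet (avoids the "port already in use"
# warning from a second session).
sshr() {
    _host="$1"; shift
    if ssh -O check "$_host" 2>/dev/null; then
        # a master is already up, so its forward is live; just attach
        ssh "$_host" "$@"
    else
        ssh -R 2222:localhost:22 "$_host" "$@"
    fi
}
```

Drop it in your shell rc and use `sshr remote` instead of `ssh remote`.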


On a related note, if you want to keep the connections open and get reconnected automatically after a disconnection, try autossh: http://www.harding.motd.ca/autossh/

So I have something like

autossh -f -N -R 2222:localhost:22 home -S "none"

on my work machine (put into a startup script), so I can always get back to it from home.


There have been a number of posts lately about closing the ssh/scp gap, and with great reason. It's stupid annoying to find a file on a remote machine that you want locally (or vice versa) and have to open a new shell and start mucking around with paths.

Instead of all these hacks, it would be awesome to see support for in-session file transfer built into ssh/sshd.


This doesn't require any external tools, you just have to set up the config files. I think this is about as close to 'built in' support as you are going to get.


For this problem:

  scp "remote:$(ssh remote find . -name 'fic.tar.gz')" .
seems simpler (note the remote: prefix, so the path that find prints is fetched from the remote side rather than looked up locally). But of course there are other advantages to being able to contact the local computer from the remote.


I know it's OT, but I have a PC behind a firewall that I have no control over, and a public server that I run and control. Can I use SSH to create a tunnel so that I can hit publicserver.com:port and have it routed through an ssh tunnel initiated from my firewalled private computer?


If I remember rightly, issue this type of command at the private computer.

ssh -R 1234:privatecomputer:22 user@publicserver

So it's: secureshell, reversed, publicport to privatemachine:port, authentication+address for public machine.

Then traffic to publicserver:1234 should appear at privatecomputer:22. Note that by default sshd binds remote forwards to the loopback interface only; to hit publicserver:1234 from machines other than publicserver itself, set GatewayPorts yes in the public server's sshd_config.

Perhaps there should be a nicer syntax like "ssh admin@publicserver.com:1234 => localhost:50"
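To reach the private computer from a third machine without exposing the port publicly, one option is a config-level hop through the public server (a sketch; the host alias, user names, and port all follow the example above, and -W needs a reasonably recent OpenSSH):

```
# ~/.ssh/config on a third machine: bounce through publicserver,
# then through the reverse tunnel to the private computer
Host private-via-public
    HostName localhost
    Port 1234
    ProxyCommand ssh user@publicserver -W %h:%p
```

Then `ssh private-via-public` lands on the private computer, even with the forward bound to publicserver's loopback.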


You might try corkscrew (http://www.agroman.net/corkscrew/). It can tunnel a raw connection over many HTTP proxies. Some corporations block all outgoing ports, then use an HTTP+SSL proxy to filter and monitor all traffic (including SSL traffic). Use SSH over corkscrew through the proxy, with the ssh option to establish a local SOCKS proxy tunneled to the remote host, and you're good to go.
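Wiring that up amounts to a ProxyCommand in ssh_config (a sketch; the proxy host and ports are assumptions):

```
# ~/.ssh/config -- tunnel the ssh TCP stream through the corporate HTTP proxy
Host remote
    ProxyCommand corkscrew proxy.example.com 8080 %h %p
    DynamicForward 1080   # local SOCKS proxy, as described above
```

Point your applications at the SOCKS proxy on localhost:1080 and their traffic rides the ssh connection out through the HTTP proxy.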


If the server is not already using the https/ssl port 443, you can run an SSH server on that port and just ssh into it from the client, since the firewall most likely won't block outbound web connections. Also see "firewall punching": http://www.h-online.com/security/features/How-Skype-Co-get-r...
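Making sshd answer on 443 alongside 22 is a one-line change (a sketch of /etc/ssh/sshd_config; assumes nothing else on the server is already bound to 443):

```
# /etc/ssh/sshd_config -- listen on the standard port and on the HTTPS port
Port 22
Port 443
```

After restarting sshd, connect with `ssh -p 443 server` from behind the firewall.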



