I've gotten a lot of mileage out of Sigil (https://github.com/danmx/sigil#readme), which supports starting sessions via Name tag, instance-id, or private-dns-name, so in a lot of cases you don't need to reach for awscli at all. It also supports the handy `sigil ls` to show the connected instances, which matters because trying to start an SSM session against an instance whose agent is offline gets you an unhelpful error from start-session.
---
As an aside: `function name()` is redundant; the `function name {` syntax is a bashism, while `name() {` is the POSIX syntax.
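For reference, the three spellings side by side (with a throwaway body just for illustration):

```sh
# POSIX syntax: portable to any sh-compatible shell
greet() {
  echo "hello $1"
}

# bash-only syntax: the "function" keyword is the bashism
function greet {
  echo "hello $1"
}

# "function greet() { ... }" also parses in bash, but the keyword adds nothing
```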
Oh, that's interesting! Didn't know AWS had that ability. Maybe then there are also some SDK functions I don't know about? I wonder why Packer doesn't go this route.
Super tiny downside to your approach: you'll still be paying for the EBS storage of that instance while it's shut down, I guess. But that's probably peanuts.
I was about to comment that I looked into this a while back and they thought it would be too complicated to implement... but it seems that it was actually implemented [0] earlier this year. I haven't tried it out but that seems quite promising to me.
I submitted this elsewhere myself, and added a little extra blurb, so I'll just quote it here:
> This is very much a ‘release early’ type thing. I’ve mostly been testing with it, and not yet seriously using it.
> Besides the cases mentioned in the readme, I also want to use this to automate on-demand Nix builders, because Nix only understands SSH for remote builders. Locally I run Mac, and I sometimes need to do a Nix build for Linux. Similarly at work, we have a build server that I want to do ARM builds on, so I can eventually deploy on t4g.* EC2 instances.
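As a rough sketch of what that could look like from the Nix side, assuming an SSH config alias `builder` that is set up to hop through the LazySSH jump host (the alias, flake attribute, and platform below are placeholders):

```sh
# one-off remote build on the on-demand builder; LazySSH would start the
# machine when Nix opens the SSH connection and stop it again later
nix build .#my-package \
  --option builders 'ssh://builder aarch64-linux' \
  --option builders-use-substitutes true
```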
I did something like this[1] for a CTF I hosted. It spins up a pod in Kubernetes with a predefined set of images based on the username; e.g. `ssh TeamCode:CHALLENGE_NAME@ssh.coop-ctf.ca` would spin up the container for that team with the image for the challenge.
Since the container could be used by multiple team members, I had an external script tear it down after 30 minutes of inactivity.
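Something in the spirit of that reaper script, assuming each challenge pod carries a `ctf-challenge` label and a `last-activity` annotation holding a Unix timestamp (both naming conventions are assumptions here):

```sh
#!/usr/bin/env bash
# delete challenge pods that have been idle for more than 30 minutes;
# pods without the last-activity annotation are treated as idle
now=$(date +%s)
kubectl get pods -l ctf-challenge -o json \
  | jq -r '.items[] | "\(.metadata.name) \(.metadata.annotations["last-activity"] // 0)"' \
  | while read -r pod last; do
      if (( now - last > 1800 )); then
        kubectl delete pod "$pod"
      fi
    done
```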
This did cross my mind! Specifically, just forwarding to something other than TCP. It doesn't sound impossible if we change the interfaces a little bit.
Thinking about it, we could make dialing the provider's responsibility. I didn't want to block the goroutine there, but it could just spawn another temporary one for dialing.
Then the manager can be generic over ReadWriter, and optionally check whether whatever it has is also a Closer.
Beyond that, I think Docker talks gRPC?
P.S.: Note that I made this AGPL. If you planned on creating some sort of product out of this, it's probably best to start fresh from golang.org/x/crypto. It might even be simpler in the end, because I feel like a big part of lazyssh is managing resources, which in Docker is (probably) local, cheap, and almost instant.
Title should be changed to reflect that this is for _virtual_ machines:
> LazySSH is an SSH server that acts as a jump host only, and dynamically starts temporary virtual machines.
> If you find yourself briefly starting a virtual machine just to SSH into it and try something out, LazySSH is an attempt to automate that flow via just the ssh command. LazySSH starts the machine for you when you connect, and shuts it down (some time) after you disconnect.
I would love for a capability like this to be extended to physical machines. I have a dozen SuperMicro servers with IPMI interfaces, an HP server with fully licensed iLO, and a Dell server with a BMC. Being able to control when these go on/off would be very nice.
It is VMs at the moment, that’s true, but perhaps only because it’s MVP functionality. If you can start/stop those from some Go code, it could work, I think?
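For the IPMI boxes, for instance, a provider could presumably just shell out to ipmitool for the power-control part (a sketch of the underlying commands only, with placeholder BMC address and credentials; not something lazyssh does today):

```sh
# power the machine on via its BMC and check that it took
ipmitool -I lanplus -H bmc01.example.lan -U admin -P 'secret' chassis power on
ipmitool -I lanplus -H bmc01.example.lan -U admin -P 'secret' chassis power status

# graceful shutdown once the SSH session is done with it
ipmitool -I lanplus -H bmc01.example.lan -U admin -P 'secret' chassis power soft
```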
I used to run a dedicated vm just for this (jump host), now I just run Tailscale on all of my machines. I have nothing but good things to say about Tailscale, it just works.
- Create an EC2 instance in a private subnet, and attach an IAM role with the AmazonSSMManagedInstanceCore policy to it
- Install the AWS CLI and the Session Manager plugin on your desktop
- Add a function to your .bash_profile like this:
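A minimal version could look like this (a sketch, with a placeholder instance ID):

```sh
# opens an interactive SSM session on the jumphost instance;
# i-0123456789abcdef0 is a placeholder for your instance ID
jumphost() {
  aws ssm start-session --target i-0123456789abcdef0
}
```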
Then just run "jumphost" from your terminal and boom, SSH'ed in via the magic of SSM.Bonus points: add a cronjob to your jumphost to shutdown every X hours in case you forget ;-)