This is the API Ben Cox used to gather more than a million public keys from (active) GitHub users and analyze their strength, which ultimately led GitHub to revoke the vulnerable/weak keys and notify the affected users.[0]
Of course, caveats about trusting GitHub also apply. (If GitHub got hacked, the page https://github.com/[github name].keys could serve a fresh attacker-controlled public key, followed shortly afterwards by an SSH login attempt with that key toward the client IP to do evil stuff.)
It's not only GitHub that could be compromised: the specific user's account could be, too, e.g. through a weak or reused password.
In the general case, you should be fine. Even more so if the person whose GitHub account you are authorizing regularly checks their keys for suspicious activity.
That's almost exactly how we (optionally, of course) import your SSH key from GitHub at Userify (well, technically we're using Python requests, not curl, but otherwise it's exactly the same).
The beauty of doing it this way (writing authorized_keys) is that the accounts are completely independent from a third-party auth service. (ok, I guess technically that's what we are, but we're only managing the public keys, and not directly authenticating through a remote server like a directory or database.) SSH has a beautiful design.
This is really the correct way to accomplish what this project is trying to do: no central proxy server that you need to trust, and it can be done trivially with the commands you already have installed.
I would honestly highly recommend against using Teleconsole for those reasons.
No, it is not. Connecting two machines behind two different NATed networks requires a proxy of some kind, and there's nothing you already "have installed" that would help you.
Teleconsole is an instant VPN+SSH in one command, and it would be wise to educate yourself on what it does (reading just the first sentence of the blog post would be a good start) before recommending anything.
Hey HN - This is basically a hosted version of Teleport[0], which may scare some people. We don't store the sessions and you can always self-host if you prefer.
From a quick re-reading of the Teleport README and the Teleconsole one, I'm a little unclear on whether there's an easy way to "self-host" a complete Teleconsole setup, including the NAT proxy?
I suppose if you have an Internet-facing server to run Teleport on, there are fewer reasons to use Teleconsole, but I could see it still being useful (e.g. for help troubleshooting workstations or servers/clusters behind a firewall/NAT for people outside the organisation: volunteering, "helping a friend", or consulting)?
would be nice, but it would likely introduce some additional complexity for instances where there might be more than one live session spawned by a user.
> additional complexity for instances where there might be more than one live session spawned by a user
I'm trying to imagine the situation that makes that worth supporting?
That's not going to be a development server used by an entire team, for example, because whose Github credentials would be on it?
Even so, it could just give a list of session IDs in such a case, and ask which of 1/2/3. The additional complexity is only that the lookup value is a list of strings instead of a single one, surely?
I suppose. Having someone's private key and knowing who's offering the legitimate owner of that key a secure session at that point in time is already a tall order though!
I prefer using tmux for this to be a bit more secure. I wrote a blog post about it [0], since it's a bit of a pain in the ass to set up properly. Maybe you could explore this for your product, too.
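For reference, the usual tmux approach (sketched from memory, not from the linked post; the group name "pair" and session name "shared" are placeholders) is to put the socket somewhere both users can reach and restrict access with file permissions:

```shell
# Sketch, assuming both users are members of a shared Unix group "pair".
start_shared_tmux() {
    sock=/tmp/pair.sock
    tmux -S "$sock" new-session -d -s shared  # leader starts a detached session
    chgrp pair "$sock" && chmod 770 "$sock"   # restrict the socket to the group
}
# The guest then attaches with: tmux -S /tmp/pair.sock attach -t shared
```

The nice property here is that access control is plain Unix file permissions on the socket, not a network service.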
I suppose this is useful if you have two systems both behind NAT...
I've run into many situations where I'm remote and need to SSH into a NATed machine to fix something, and I typically will SSH reverse tunnel to a VPS.
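For anyone unfamiliar with the reverse-tunnel trick, a minimal sketch (the VPS hostname, username, and port are placeholders):

```shell
# From the NATed machine, forward port 2222 on the VPS back to the local sshd.
# -N: no remote command, just the tunnel; -R: reverse port forward.
open_reverse_tunnel() {
    ssh -N -R 2222:localhost:22 "me@my-vps.example.com"
}
# Then, from a shell on the VPS:  ssh -p 2222 me@localhost
# (the forwarded port binds to the VPS loopback only, unless
#  GatewayPorts is enabled in the VPS's sshd_config)
```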
"Just share the public key" doesn't solve firewall, NAT, etc. network issues that might be a barrier. If you went that route, you'd likely want something like ngrok too.
The only use case I can think of would be helping family with their Linux install... and if this is already installed on their machines, it was probably put there by me. It's just as easy for me to put my SSH pubkeys on there too.
That little command will pull down your GitHub public keys and add them to the authorized_keys file for the user who runs it. Great for setting up new computers. I run it at boot time on embedded devices so that I can always access them.
That's cool, but it seems like a lot of code to do something that:
- You can do in one line of bash: curl https://github.com/user.keys >> ~/.ssh/authorized_keys
- The bash one-liner is transparent and educational: it tells you clearly where the keys are coming from and where they're going, teaching users about GitHub/GitLab-published keys and about how the authorized_keys file works
You seem to require a lot of code, and lose a lot of very real advantages, for the sake of a bit of brevity. What are the other advantages of this tool?
Executing the curl command with no error checking and blindly appending its output to your ~/.ssh/authorized_keys could easily bork the latter. Wrapping it in a fancy command allows for error checking and response validation. Otherwise you could end up with the source of https://github.com/503.html in there!
Perhaps, but the simplicity of the curl command provides enough insight to understand precisely what it does and what could go wrong. I can see what file it's writing to, I can check that file, and I can be pretty sure it's not doing anything else weird behind the scenes.
That's the whole point. You have to manually do those things. It's not that you can't do them, it's that if you tell someone "Just run curl ... >> ..." without the validation part they can get hosed. Even in a script you need to be careful about it because a non-200 HTTP response still gives a zero (i.e. success) exit status.
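A sketch of what that validation might look like in plain shell (the function names are made up; a real tool presumably does more). Note that `curl -f` is what turns an HTTP error like 503 into a non-zero exit status:

```shell
#!/bin/sh
# Succeed only if every non-empty line looks like an SSH public key.
valid_keys() {
    printf '%s\n' "$1" | grep -v '^$' | grep -qvE '^(ssh-|ecdsa-)' && return 1
    return 0
}

# Fetch a user's keys, failing loudly instead of appending garbage.
fetch_github_keys() {
    keys=$(curl -fsS "https://github.com/$1.keys") || return 1
    valid_keys "$keys" || return 1
    printf '%s\n' "$keys" >> "$HOME/.ssh/authorized_keys"
}
```

This is illustrative only; real error handling should probably also reject an empty response and newer key types (sk-ssh-ed25519, etc.).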
What kind of abuse prevention measures does this have in place? As it stands it looks like a super convenient way to scp garbage into other people's computers and I'm not sure I'm sold on that.
Zero. It's an instant self-configuring SSH server, so the prevention measures are similar to those of OpenSSH: if you don't want garbage on your computer, don't add public keys of malicious strangers to your `~/.ssh/authorized_keys`, and don't open port 22 to the world! So with Teleconsole, don't invite people you don't trust. :)
Your argument is biased and non-factual. Default SSH measures normally keep untrusted entities from gaining access. Conversely, once the admin grants access to trusted entities, normal UNIX permissions continue to limit access to the file system by user permissions. Thus, simply by an admin granting access to a system, your (hypothetical) arguments become false and pointless to discuss.
You must be kidding me.
My "argument" is a question. My "hypothetical arguments" consist of me asking whether there is something stopping people from scp'ing things to my computer and then running them.
You talk as if phishing and privilege escalation weren't things that exist. Have you ever managed a public-facing service of any importance?
Actually, those types of questions are called leading questions. A point based on a hypothetical cannot logically be used to make a factual point without bias, even if you phrase it as an "unknown" or a question. Your last comment here shows your tendency to blame and to argue from bias, which is unfortunate given that you appear to be asserting authority on matters of "public-facing" servers. You know what they say about making assumptions.
I would note that using biased arguments is an inefficient process in most cases. It's akin to recursion of a process which, in my experience, has brought many a server to its knees more often than a hypothetical threat from doubly authorized access (hash + key).
Another way, using only software that you have already: get your friends' public keys and make a burner account on any jointly-reachable server/laptop/whatever and do a follow-the-leader screen:
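The screen recipe being alluded to, as a rough sketch (account and session names are placeholders; multiuser mode typically requires screen to be installed setuid root):

```shell
# On the burner account, the leader starts a shared session:
start_shared_screen() {
    screen -d -m -S pair             # detached session named "pair"
    screen -S pair -X multiuser on   # allow other users to attach
    screen -S pair -X acladd friend  # grant the "friend" account access
}
# "friend" then attaches (read-write) with: screen -x burner/pair
```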
Hmm. Seems like all traffic goes through a central proxy server.
Shouldn't it be possible to use the central server only for hole-punching and implement a TCP-over-UDP connection so that the clients can directly communicate with each other? (And don't the major browser vendors already have public NAT-hole-punchers for WebRTC?)
"Hmm. Seems like all traffic goes through a central proxy server."
My first thought.
Also, even if the server and code are secure, the weakest point is a person. cf: "You can share the Session URL with a colleague in your organization. Assuming that your colleague has access to teleport.example.com proxy, she will be able to join and help you troubleshoot the problem on "db" in her browser." ~ http://gravitational.com/teleport/docs/quickstart/
If you're claiming that you don't get the ability to audit the code, I'd like to watch you audit a ./configure shell script generated by GNU autoconf.
If you're claiming that you want to apt-get install so the package maintainer has audited the code, I'd like to watch them audit the ./configure shell script.
Downloading and auditing code from an untrusted source is security theatre. Don't install it at all, if you don't trust it. Or use some platform (the web, iOS, Android, Qubes, etc.) that makes it such that there's no need to audit it because the app is restricted in what it can do.
Yes. Wrap the entire script in a function and call the function at the end once the whole thing has transferred. https://install.sandstorm.io/ , for instance, does this.
At least wrap the dang thing in a function and execute the function. That way you don't get partial execution if the HTTP connection manages to die halfway through.
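A minimal illustration of the pattern: because the shell must parse the whole function body before anything runs, a download that dies mid-transfer leaves an unclosed function definition (a syntax error) instead of a half-executed script.

```shell
#!/bin/sh
# Nothing executes until the final line, so a truncated download
# fails to parse rather than running halfway.
main() {
    echo "starting install"
    # ... download, verify checksums, install steps go here ...
    echo "install finished"
}
main "$@"
```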
Unless I misunderstood their description, they can man-in-the-middle for the disposable keys use-case anyway. If you don't trust them, don't use the software. I don't really mind having curl|sh installation instructions as long as they use https and the script is written so that truncated downloads don't cause any harm. If you know that this is a risky way of installing software, nothing prevents you from manually verifying the installation script or following the manual installation instructions. Everyone else probably doesn't have the means to properly evaluate the downloaded software anyways.
Can you explain to me the fundamental difference between this and sudo apt-get install?
Note that the curl command is on an https resource. Yes if the https server is compromised there's a problem, but that's true with any other delivery method.
I'm being facetious; the answer is that there isn't one.
>Yes if the https server is compromised there's a problem, but that's true with any other delivery method.
That's not correct. In most distros, installing packages from your distro's repositories has an additional security guarantee: the packages you download have their PGP signatures verified before installation. If an attacker compromises the web server and alters the package, your package manager will reject it as it's not signed by a trusted key in your keyring.
That is true for distribution-hosted repositories. But how many web sites are there that say, "Add this line to sources.list, then run this apt-key command, and then run apt-get install our-app"? So few people bother to check for a web of trust for the proffered key (or even know how to, or even grok the concept) that there is little functional difference.
You can use SSH keys for VNC, too. Would be neat to see a PoC that allowed you to invite someone to remote control your computer temporarily via their github handle. https://ubuntuforums.org/showthread.php?t=383053
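A sketch of the standard tunnel (hostname and ports are placeholders; a VNC server usually listens on 5900):

```shell
# Forward local port 5901 through SSH to the remote machine's VNC server.
tunnel_vnc() {
    ssh -N -L 5901:localhost:5900 "user@remote.example.com"
}
# Then point a VNC viewer at localhost:5901; the VNC traffic rides inside
# the SSH connection, authenticated by your SSH key.
```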
Are there any advantages to using this instead of temporarily exposing port 22 via ngrok and adding your friend's public key to authorized_keys? (Which has the added advantage of being pure, unmodified SSH.)
I don't like using the TCP functionality of ngrok because it seems to always use the same hostname (0.tcp.ngrok.io) with a random port. So anyone can just scan for open ports on that domain and start trying to connect to things. The HTTP forwarding at least gives you a random subdomain.
Note to the author: Please don't have a command line flag that means either ( a valid local POSIX path ) or ( a username/handle from a specific online service that will be fetched over the network ). *nix CLI tools are supposed to be unambiguous. I'd fork/issue/patch it, but I don't have a need to let people ssh into any of my boxes. Just posting this here because it's a valid learning opportunity: making something "easy" is not always the right choice in a CLI tool, and namespaces are important. What if I have a file named the same as a GitHub user? Which thing will work? Will that behavior change unexpectedly? Best to make them different flags.
Publishing the public key that you use to push to GitHub/GitLab is not a big issue... but re-using your GitHub key pair to connect to other unknown and uncontrolled places _is_ a security issue.
Even re-using your daily system user for this is a security issue.
But if you've never read the sshd_config man page or played with its options, maybe you're unaware of this.
I'm not sure it's a big issue without passing -A to allow forwarding of authentication to a second server beyond the one you're logging in to. In theory, you should still be in control of your secret key and the session even if the first server attempts to proxy or steal your private key. Note that your console/tty or shell might be vulnerable to a malicious server in any case.
Your machine, being an SSH server, gets to decide who can log in. The encryption works just as with any normal SSH session: between the end user and the server (your machine).
The cloud bridge running on https://teleconsole.com simply acts as a proxy connecting sockets between two users who are behind NAT (without being able to decrypt).
Disclaimer: I'm the author of Teleconsole.
[edit: formatting]
The proxy accepts two incoming connections: from your server and from the client, then it bridges them and allows the client to negotiate an SSH handshake with the server.
It's very easy to get an authorized_keys file for a GitHub user account; just download (e.g.) https://github.com/geofft.keys .
Presumably all this does is spawn a local SSH server with the appropriate authorized_keys file, and forward the TCP connection through the company's servers for NAT traversal. The actual connection is encrypted end-to-end.
Correct. Another thing it does is "merge" the incoming connections into the current bash session, so you can see each other type; the idea was to let you work on the same machine together, kind of like in Etherpad.
Awesome! I just launched a DevOps consulting company (https://elasticbyte.net), and Teleconsole looks like a great option for interactive SSH sessions with clients.
I would be curious to hear anyone's feedback on their experience running Teleport in production. I like what I have read on the site, and it certainly has some nice features.
So how does the -i parameter know whether to look in a local file or on GitHub for the public key? Does it look for ".pub" in the filename? Feels clunky to me.
Being unconstructively snarky about a concept that another member of this forum has presented is both poor manners and defies the guidelines linked at the bottom of the page.
I agree with your opinion, and of course critiques are always welcome here, but your comment does seem unnecessarily snarky. Perhaps something like this would be more in keeping with the HN guidelines:
"It seems like a big risk to allow a little-known cloud service to authorise SSH access to your laptop. What are the use cases for this? What do you need to be careful of?"
If you're going to start your sentence with the heavily condescending "Yeah, what a great idea", then don't pretend you don't understand how that could be against the rules. You can't have your cake and eat it. Especially if you're not going to provide some facts / hard evidence to back up your assertion that this is, in fact, not a good idea.
The Hacker News community expects civil and substantive comments, as others have already pointed out. Reading this as demanding weasel words is disingenuous.
I use the following incantation when authorizing folks to ssh into my servers via github public keys:
[github name] here should be replaced with github username of your friend or colleague. Really handy because I can just authorize them without a human request/response loop and manual key moving. Simple and no external tools needed. Normal caveats about authorizing people apply.