Hacker News
Zero-Trust RDP and SSH Access to VMs on Google Cloud (github.com/googlecloudplatform)
185 points by confuseddeputy on Sept 7, 2021 | 88 comments



Inspired by Fly.io's post a while ago[0] I also did something similar to this on my small k8s cluster with the help of stunnel, sslh, and traefik[1].

Weirdly enough I thought this was the ability to provision a wireguard-esque proxy to any machine you want, operated at the edge of the cloud, but it seems like it's really TCP-over-HTTPS.

It's easy to imagine doing the former (dynamic wireguard proxy surfacing) too though -- a wireguard sidecar container sharing a network namespace with the workload in question, plus an open-to-the-world port somewhere, and you'd theoretically have access to any port you wanted on said machine as well. Feels like an easy setup to trust, as wireguard is pretty reliable/sound.
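A minimal sketch of that sidecar idea, assuming Docker and the community linuxserver/wireguard image (container names, paths, and the config are all placeholders, not a tested deployment):

```shell
# Run WireGuard in a sidecar that joins the workload container's
# network namespace; peers listed in wg0.conf can then reach any port
# on the workload over the tunnel. Everything below is illustrative.
# Note: the open-to-the-world UDP listen port has to be published by
# the workload container itself, since this sidecar shares its netns
# (Docker forbids -p together with --network container:...).
docker run -d --name wg-sidecar \
  --network container:my-workload \
  --cap-add NET_ADMIN \
  -v "$PWD/wg0.conf:/config/wg0.conf" \
  linuxserver/wireguard
```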

[0]: https://fly.io/blog/ssh-and-user-mode-ip-wireguard/

[1]: https://vadosware.io/post/stuffing-both-ssh-and-https-on-por...


I run IAP. It's TCP-over-HTTPS but it works remarkably well, connects to all kinds of things, and for users it really is just "login with google, proceed as normal."

I use a JWT proxy + ghostunnel within GKE with a VIP so it's not quite their reference setup but it's extremely "just works" outside GKE being weird and eating its own routes.

BTW, side-note but try out ghostunnel over stunnel! I've really enjoyed using it and it's been fantastic to debug and work with.


> I run IAP. It's TCP-over-HTTPS but it works remarkably well, connects to all kinds of things, and for users it really is just "login with google, proceed as normal."

Yeah that's really amazing, with client-side software like they've already made and I've seen from other vendors (whether GUI or TUI) the interfaces IaaS/PaaS companies can build are really slick. Looks like they'll be able to cut down on dashboard fatigue/complexity people are wrangling quite a bit.

> I use a JWT proxy + ghostunnel within GKE with a VIP so it's not quite their reference setup but it's extremely "just works" outside GKE being weird and eating its own routes.

Interesting, so JWT proxy (or any other auth mechanism that is viable over HTTPS) -> ghostunnel machine w/ public VIP -> Target machine ? Or ghostunnel directly running on the Target machine which holds the public VIP? Or does the JWT proxy take the public IP and the ghostunnel machine keep the private VIP?

Apologies, I just want to be able to picture your solution clearly.

> BTW, side-note but try out ghostunnel over stunnel! I've really enjoyed using it and it's been fantastic to debug and work with.

Thanks for the recommendation of ghostunnel, will use it in the future over stunnel next time I hack together something like this.

BTW: super-side note, the Breath of Fire III avatar was a blast from the past, instantly recognized it.


Haha, I've used Garr as an avatar for a super long time now!

> Interesting, so JWT proxy (or any other auth mechanism that is viable over HTTPS) -> ghostunnel machine w/ public VIP -> Target machine ? Or ghostunnel directly running on the Target machine which holds the public VIP? Or does the JWT proxy take the public IP and the ghostunnel machine keep the private VIP?

The JWT proxy takes in the IAP JWT (they give you the audience, and the token is just parsed); this lives in the same pod as ghostunnel. ghostunnel goes through a NAT to a public destination, where ghostunnel is _also_ running. It has extremely strict TLS requirements (a forced valid CN that must be sent/accepted, strict DNS, along with a single-purpose CA, cert, and key).

It's like

IAP =inside-gcp=> (JWT proxy -> ghostunnel) =public=> (ghostunnel -> thing)

There's a more modern way to do this, but this works really well and gives absolutely fantastic introspection, is super easy to use it as a public proxy, and allows you to only use minimal APIs in GCP (IAP + GKE, you don't need GKE but you also don't wanna manage all the things it does for you with annotations. :D)
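For the curious, the two ghostunnel ends of a chain like that might look roughly like this (hosts, paths, and the CN are placeholders I made up, not the parent's actual config):

```shell
# Inside GKE, next to the JWT proxy: forward traffic out over strict
# mutual TLS to the public destination.
ghostunnel client \
  --listen localhost:8080 \
  --target public.example.com:8443 \
  --cert client.pem --key client-key.pem \
  --cacert single-purpose-ca.pem

# On the public destination: accept only the single expected client
# identity, then hand off to the local service.
ghostunnel server \
  --listen 0.0.0.0:8443 \
  --target localhost:22 \
  --cert server.pem --key server-key.pem \
  --cacert single-purpose-ca.pem \
  --allow-cn tunnel-client.example.com
```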


Ahhh thank you for the detail, I understand the setup now -- I hadn't taken into account the IAP (intra GCP) bit!


I didn't know big tech companies were still capable of making GUIs that look like this.


You mean a UI in which all the clickable elements are obvious, features are discoverable, it’s fairly obvious what you’re interacting with, and keyboard navigation works well? I didn’t know big tech companies could still do this either.


In my opinion, this looks far better and more usable than a lot of new UIs nowadays.

I’d say appearance != usability, and while this might look a bit dated, it probably behaves a lot more like a desktop application than most Electron apps out there.


Indeed, that GUI is awesome.


To me it looks very much like mRemoteNG (https://mremoteng.org/). Is this just because the same WinForms libraries were used or is there some meat to this?


It looks like it's just the same WinForms - they don't seem to share any code.


In my mind that keyboard-navigable tree widget is the one redeeming feature of Windows. I guess the age of “keyboard first” is long gone :(


Nowadays I'm happy if it even bothers to register all keystrokes. Somehow this has become more difficult now that CPUs are many times faster and don't need to queue all keystrokes to ensure they are taken care of eventually.


Except for tab always being backwards for me. It goes up...


It looks really useful!


It's gorgeous. Like a breath of fresh air after all that electron nightmare.


I don't see the correlation. I get that people don't like electron bloat, but that is orthogonal to UI/UX, no?


The issue with Electron is that devs try to mimic the browser's UI in a desktop environment. Our desktop OSes need desktop UIs instead of mobile-webpage Material/Flat design (no visual box lines, no separators, no multi-window support, settings pushed into a sidebar, etc.). An app should be treated as a desktop app with the benefits of the desktop, not as a mobile app; that is the issue.


It looks like this because it's C# and probably using standard UI frameworks instead of hot new buzzword tech or Electron.


it's .NET Framework 4.x and I really do wonder why...


I can't tell if you think it's a good thing or not. I'm personally impressed; I bet it hardly uses any memory!


you mean: one that is simple, works, represents the underlying model with a tree, and runs on a desktop?


It’s based on Visual Studio.


It's not. This is a regular .NET Windows Forms app and the docking/tabbing stuff is DockPanelSuite, I'd recognize it a mile away. This is confirmed by poking around in their .csproj.


It's not. It's using WinForms with the DockPanel Suite library, with a VS2015 theme.


This sounds like a deal breaker for some use cases: https://github.com/GoogleCloudPlatform/iap-desktop/wiki/Trou... "Because of the way IAP Desktop tunnels RDP connections, it always uses NTLM for authentication and can't use Kerberos." There may be environments that lose the security benefits of Kerberos over NTLMv2 (e.g., mutual authentication) because they've been forced into a new compliance mandate that dictates adoption of Zero Trust in all available contexts.


Looks like Microsoft's decision to go with increasingly elaborate challenge-response schemes instead of properly encrypting the whole connection (like SSL/SSH) will be haunting us for a while yet.

I don't understand why RDP/SMB/... with plaintext auth over SSL hasn't been a thing for at least a decade, does Microsoft just not care about transport security?


Isn’t it plausible that an interactive GUI over SSL didn’t perform well, especially for VMs, or that renegotiation had an effect?


RDP over SSH already performs very well, so any in-protocol implementation would only be faster (less overhead).
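For anyone unfamiliar, "RDP over SSH" here is just an ordinary local port forward (host names are placeholders):

```shell
# Forward local port 3389 to the remote machine's RDP port through an
# SSH jump host, then point your RDP client at localhost:3389.
ssh -L 3389:localhost:3389 user@jump.example.com
```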


I make use of IAP and OS Login today, to log in to a Compute Engine Linux VM. The VM has Internet access via NAT, and has no public IP.

Logging in is via `gcloud compute ssh`. Authenticating `gcloud` involves a corporate login which uses a client certificate and two-step.

For all the components involved, it works pretty well!


You can also add it to your ~/.ssh/config, so you can just ssh hostname, scp hostname, etc. without a public IP on the VM.

  Host myhost
       ProxyCommand gcloud compute ssh user@myhost --zone=myzone --tunnel-through-iap --command="nc 0.0.0.0 22" -- -o "UserKnownHostsFile /dev/null" -o "StrictHostKeyChecking no"
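I believe recent gcloud releases also let `start-iap-tunnel` serve as the ProxyCommand directly, which avoids the nc and known-hosts workarounds (zone and host are placeholders; `--listen-on-stdin` is the flag gcloud's own ssh wrapper uses internally):

```
Host myhost
    ProxyCommand gcloud compute start-iap-tunnel myhost %p --listen-on-stdin --zone=myzone
```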


> ... which uses a client certificate ...

Can you encrypt the client private key on disk and use sth like ssh-agent?


Yes, simply add the key gcloud generates as normal using ssh-add ~/.ssh/google_compute_engine


No, I meant the client private key that gcloud uses to authenticate itself (on your behalf) to Google's servers, not you to your servers. That wouldn't be an SSH key, probably TLS or hand-rolled crypto.

----

Also, now that you mention it, even if I encrypted the generated SSH key, wouldn't running a `gcloud ...` command again just ... re-generate the key, in unencrypted form?


Sorry, I should clarify: The client key is used in our corporate login.

When I log in to `gcloud`, that goes through our corporate login. Corporate login uses a client certificate and two-step.


How much work would it be to make this general purpose? To not only work for Google-cloud...


Check out this guide I published today. It walks through the code to do the secure tunneling part in ~20 lines of Rust, using Ockam, a library to create end-to-end encrypted secure channels:

https://github.com/ockam-network/ockam/tree/develop/document...


Here's an open-source, general-purpose version, for access to any cloud or resource in fact. This one is specifically for SSH, but it can support any protocol or app with the vanilla SDKs and tunnelers - https://ziti.dev/blog/zitifying-ssh/


> IAP Desktop is a Windows application that allows you to manage multiple Remote Desktop and SSH connections to VM instances that run on Google Cloud.

Is there a Linux client too?


Would be nice to have a Mac version of that.


See PLG88's comment above; this can work for Mac - https://apps.apple.com/app/id1460484572


> IAP Desktop is an open-source project and not an officially supported Google product.


I'm sorry if this sounds completely ridiculous to some people, but what do people use RDP/Windows server for in 2021? Given that ASP/Dotnet is portable to linux, what are people building that isn't better deployed to linux? It can't just be the legacy use-case, can it?


RDP is pretty easy to explain: if you need a GUI, RDP is infinitely smoother than VNC or anything else the Linux ecosystem has to offer. I even use it on Linux for things like livestreaming (OBS running in a minimal GUI like openbox). There are many similar workloads that are less "servers" and more "cloud workstations" that use GUI apps.

As for Windows in general, 90% of the Windows servers I see fall into one of the 2 categories: 1) they run some Windows-only software or 2) the sysadmins at the company know Windows, so that's what they use. If your company relies on something from 1), you're very likely also going to fall into 2).

Specific example: a company wanted to allow any employee to log in from any computer and be able to get work done. A VM terminal server was too heavy for their network and server budget and they already had decently powerful PCs, so they went with AD and that roaming-accounts thing. They had to hire a Windows server sysadmin to manage it.

When they decided they also needed a good NAS, web server, and backup system, they already had a Windows sysadmin on staff, and the licences plus over-spec required to run it on Windows were still cheaper than hiring another sysadmin.


> RDP is pretty easy to explain: if you need a GUI, RDP is infinitely smoother than VNC or anything else the Linux ecosystem has to offer.

The last time I was assessing Windows remote-desktop options for performance, VNC implementations with a mirror driver provided far better performance than vanilla RDP.


I have the complete opposite experience, but I only used RealVNC (not sure what a mirror driver is). My experience is that the Windows RDP client and the RDP server on any platform are >>> VNC in every way.


That was sort of my company's IT story.

In the beginning they needed AD because business is heavily Windows centric.

Then they started using Windows Server for DNS, business apps, shared volumes, etc...

After I joined I started separating concerns / reducing blast radius and now we use Windows Server for AD and a few apps that are Windows only.

Azure AD + Intune MDM is getting better and I think we will be able to kill our managed AD soonish though.


MeshCentral is pretty nice open-source, multi-platform software for remote access:

https://github.com/Ylianst/MeshCentral

It has a free instance here

https://meshcentral.com/info/


We use remote scripted Indesign instances for creating catalogues, which needs Windows to run, and RDP for debugging.

I'd love to hear about alternative solutions that don't require Windows, and still spit out colour-accurate CMYK indesign files; the printing shops won't accept anything else.


The most frustrating, and most fun moments I've had hacking on a computer has been scripting InDesign, so props for that!

Most printing shops I've had to treat with will also accept PDFs, is that the case? Because then, there's a case to be made for converting your templates to HTML (gives you access to a huge amount of tooling) and printing them to a PDF through a headless browser instance (like pptr.dev). That has worked for me in the past, and is accessible and good quality (though I did not need accurate CMYK, so YMMV).
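The "print to PDF through a headless browser" step can be as simple as the following (paths are illustrative; note this gives you effectively sRGB output only, not CMYK):

```shell
# Render an HTML catalogue template to PDF with headless Chromium.
chromium --headless --disable-gpu \
  --print-to-pdf=catalogue.pdf \
  file:///path/to/catalogue.html
```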


Accurate CMYK is the crux here unfortunately; we've got a headless browser PDF setup already working for anything that's fine with "sRGB, probably?" in terms of colour accuracy, but there's a bunch of large companies with stricter requirements (and the budgets for it, thankfully).

(Though browser-based PDF isn't a panacea either. We've got one customer whose 250 page technical manual needs between 40 and 70 minutes to generate through paged.js and Chrome… They're fine with running it as an overnight job, but it's still painful to watch.)


Yeah, I get it, that sucks :/ I do know that PrinceXML has CMYK support so maybe check it out. Else you'd always have FrameMaker and the Server edition if you wanna go that much enterprise-y.


The last time I tried something like this, I quickly quit and used LaTeX. For all of its warts, it’s an excellent tool and it runs very quickly compared to a browser.


Jeez. Reminded of scripting quark with apple script. And photoshop cmyk tiff separations, I don't recall, but the workflow was scripted. I frankly can't imagine what that looks like today.


This is exactly the sort of use case I wasn't imagining. Thanks for sharing.


Take a look at open source Ziti, it will work on any system, app or cloud - https://ziti.dev/ or https://github.com/openziti


Not sure if this is link spam or if you just didn't understand my question.


I understood you would like an "alternative solution" for doing zero-trust RDP and SSH "that don't require Windows".


I believe (s)he meant an alternative to the InDesign rendering workflow that runs on Windows.


Yes.


I use Windows VMs on Google Cloud to install and play games in the cloud - my Mac can’t play any


Parsec is outstanding for this use case, it’s basically a roll-your-own Stadia. Dunno what its future will be now that it was acquired by Unity


Yep, usually use Parsec, or Moonlight + ZeroTier for this.


Nvidia GeForce Now offers this for free, 1 hour at a time.

It barely worked in their web client but the desktop client worked pretty well.


For me GeForce NOW is the best service, but it really sucks that I’m not allowed to play my whole Steam library. That’s a disqualifier for me.


How's the performance/input lag on that? How much are you paying, and is it worth it?


It really depends on how far away from the data center you are. I used to live close to Amsterdam and that felt basically local, even with a not-amazing internet connection. Right now I live in Valencia (Spain), with a local and not very well-peered ISP. There is lag, but it is just about acceptable for platformers and action/adventure games. I wouldn't recommend it for shooters.

The cost ended up being around 1€/hr when the instance was on, and 0.06€/day for the storage. I am sure this could be optimized (e.g. use snapshots and spot instance) - but I just couldn't be bothered to. For me it is worth it.


This is a very novel use case to me. Could you elaborate why use this solution instead of a local VM, boot camp, or something like Stadia?

Seems like it would work quite well for turn-based games (including auto-save).


I don't use a local VM or bootcamp because I just don't have a powerful enough computer to run AAA games from the last 5 years or so - even earlier if the game is badly optimized. Now with an M1 they're not even compatible.

Stadia, GeForce NOW, Xbox Game Streaming and PlayStation Now: I've tried them all, and GeForce NOW specifically is very good. But, they only allow me to play a subset of my large pre-existing game library, and/or make me buy games again, which I'm not interested in.

Stadia is especially bad at this: having to purchase on a game-by-game basis, not being compatible with any existing ecosystem (Steam, PS, and Xbox have cloud saves and friend lists), and then trusting Google to run my game forever on their servers after paying a one-time cost.


In a large scale enterprise environment, say 100k+ seats with a mix of MS Office, scientific/engineering use-cases, Windows, Linux & OSX, Windows is still the way to go for the forest, tying it in with JAMF and Linux integrations.

Less the legacy use-case than how do we integrate everything with everything.


.NET Core might be portable, however it is a tiny subset of what .NET Framework is capable of.

Additionally lots of enterprise shops, despite what HN crowd thinks, are mostly on Microsoft stacks, so that .NET application is going to connect to other APIs and stuff not available on GNU/Linux.

Then, many companies use VMs as desktops for contractors, you are just not allowed to plug anything on their network, so you are using Visual Studio via RDP/Citrix.

As an example, in life sciences most of the laboratory hardware only has APIs available via COM/.NET Framework.


I run many GUI apps remotely. I don't waste time porting stuff that works best on Windows to Linux (I have two machines). Not all use cases are development.


The answer is one (or most likely all of) these: active directory, sharepoint, exchange


Remote browser isolation for companies that need Microsoft.


Custom Microsoft Dynamics install?


Amazing. Obviously need audit. But so far so good.


Is this similar to AWS workspaces ?


Why are we still building tools to hand-manage VMs in 2021? Am I missing something, or is this for raising pets instead of herding cattle[0]?

0. http://cloudscaling.com/blog/cloud-computing/the-history-of-...


Because when your herd of cattle is sick, you need to grab one and have a vet look at it before your whole herd dies from the plague.


No, part of managing a herd is having the right tools in place, like monitoring, logging, and observability tools.

There is nothing I can learn from accessing a VM in production that I can't learn from my monitoring system.

In prod where I work, if someone logs into a production VM we mark it tainted and replace it with a fresh instance. This keeps things nice and consistent.

If you need an interactive session on a prod machine, you are missing tools.


> There is nothing I can learn from accessing a VM in production that I can't learn from my monitoring system.

Other than how to fix gaps and other problems with your monitoring? As you get experience, you’ll learn this is done like a garden — you can heavily reduce your need for interactive sessions but it never goes to zero.

I think you’re making the classic mistake of treating a guideline as more of a religious mandate. Yes, it’s good to have servers be easily replaced but that desire does not magically rewrite all existing software or retrain every IT worker.

Similarly, automation is great but you need to develop and maintain it - which almost always involves interactive work. The taint process you mentioned is a popular way to balance those needs long-term.

Finally, if you are thinking of “server” as only a production-hardened network service you’re missing out on a lot of other things enterprises use cloud services for, such as developer workstations or general virtual desktops. Many places heavily expanded that over the last year because you avoid the security concerns about having your data on easily lost/stolen laptops and can avoid turning your VPN into a massive bottleneck for the entire company.


The overhead of ideal levels of automation, like

> In prod where I work, if someone logs into a production VM we mark it tainted and replace it with a fresh instance. This keeps things nice and consistent.

doesn't make sense from an ROI perspective, at a great number of businesses. Like, "this would take a decade to pay off, and that's assuming it requires no maintenance" kind of bad ROI.

Lots of places, you script vm/server configs (even just with bash) and get CI running automated tests on important branches, and you've captured 99% of the benefit available from automation. Would the other stuff be nice? Yes, but five people saving 15 minutes per week means you can't reasonably spend the kind of time on it—for initial set-up and for ongoing maintenance—that you would if it were fifty people saving 15 minutes per week, let alone 500 (at that point you can have a couple people dedicated full-time to just that one piece of automation, and it's still saving you money).
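To put rough numbers on that (all figures are illustrative assumptions: a $75/hr loaded labor cost and a 500-hour automation project):

```shell
people=5; mins_per_week=15; build_hours=500; rate=75
# dollars saved per week by the automation (integer math, rounds down)
weekly_saving=$(( people * mins_per_week * rate / 60 ))
# weeks until the build cost is paid back
payback_weeks=$(( build_hours * rate / weekly_saving ))
echo "payback: ${payback_weeks} weeks"
```

That comes out to roughly 400 weeks, i.e. close to eight years before break-even, before counting any maintenance.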


Plugging all our cattle into heart and blood-pressure monitors and doing frequent blood draws from every cow "just in case" is wasteful and unnecessary. There is a balance between sensible, general, always-available monitoring and special-case debugging of a problem.

My rule for that is: more than once a year or more than 6h? Automate and tool it. Less? SSH or other special-case tools are fine.


What about cloud based developer VMs?


Farmers keep both sheep dogs and sheep...


Because not everyone is on the latest fashion.


Reading through this on the surface, it appears there is a mix of trust relationships that pre-exist and credential issuances that occur on the fly. It also appears there is no privilege tiering (aka an enterprise access model) applied to the example. Did I read this wrong?

I'd be interested in seeing what credentials in toto are there, and which ones are ephemeral, and susceptibility to lateral traversal.

Could you respond on the merits of the critique?


For Linux systems at least, IAP doesn’t deal with privilege tiering. Instead, OS Login handles mapping a user’s Google account to a local account. There is also a program that queries a user’s SSH key from OS Login, and passes it to sshd when asked.

OS Login defines two IAM roles, one for “Can I log in?” and one for “Can I sudo?”. Those are implemented on the system via PAM, so you can add whatever additional restrictions you’d like.

Fetching of user information via OS Login is implemented via a NSS module. POSIX attributes can be customized via the Google Directory API. And I believe Google Groups can be mapped to POSIX supplemental groups, but I’m not certain.
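For reference, the two IAM roles the parent describes are `roles/compute.osLogin` and `roles/compute.osAdminLogin`; granting them looks like this (project and user are placeholders):

```shell
# "Can I log in?" -- login without sudo
gcloud projects add-iam-policy-binding my-project \
  --member="user:alice@example.com" \
  --role="roles/compute.osLogin"

# "Can I sudo?" -- login with admin (sudo) privileges
gcloud projects add-iam-policy-binding my-project \
  --member="user:alice@example.com" \
  --role="roles/compute.osAdminLogin"
```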



