Inspired by Fly.io's post from a while ago[0], I did something similar on my small k8s cluster with the help of stunnel, sslh, and traefik[1].
Weirdly enough I thought this was the ability to provision a wireguard-esque proxy to any machine you want, operated at the edge of the cloud, but it seems like it's really TCP-over-HTTPS.
It's easy to imagine doing the former (dynamic wireguard proxy surfacing) too, though -- a wireguard sidecar container sharing a network namespace with the workload in question, plus an open-to-the-world port somewhere, and you'd theoretically have access to any port you wanted on said machine as well. Feels like an easy setup to trust, as wireguard is pretty reliable/sound.
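A minimal sketch of the sidecar's wireguard plumbing, assuming standard iproute2/wg tooling (the interface name, addresses, key path, and peer key below are all hypothetical placeholders):

    # run inside the sidecar; it shares the pod's network namespace
    ip link add wg0 type wireguard
    wg set wg0 listen-port 51820 private-key /etc/wireguard/privkey \
        peer <CLIENT_PUBLIC_KEY> allowed-ips 10.99.0.2/32
    ip addr add 10.99.0.1/24 dev wg0
    ip link set wg0 up
    # expose 51820/udp as the open-to-the-world port; the peer can then
    # reach any port in the pod's namespace via 10.99.0.1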
I run IAP. It's TCP-over-HTTPS but it works remarkably well, connects to all kinds of things, and for users it really is just "login with google, proceed as normal."
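For anyone who hasn't tried it, the client side really is just gcloud (instance name and zone below are placeholders):

    # SSH over IAP; the instance doesn't need a public IP
    gcloud compute ssh my-vm --zone us-central1-a --tunnel-through-iap
    # or forward an arbitrary TCP port (e.g. RDP) through the tunnel
    gcloud compute start-iap-tunnel my-vm 3389 \
        --local-host-port=localhost:3389 --zone us-central1-a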
I use a JWT proxy + ghostunnel within GKE with a VIP, so it's not quite their reference setup, but it's extremely "just works", aside from GKE being weird and eating its own routes.
BTW, side note, but try out ghostunnel over stunnel! I've really enjoyed using it and it's been fantastic to debug and work with.
> I run IAP. It's TCP-over-HTTPS but it works remarkably well, connects to all kinds of things, and for users it really is just "login with google, proceed as normal."
Yeah, that's really amazing. With client-side software like what they've already made, and what I've seen from other vendors (whether GUI or TUI), the interfaces IaaS/PaaS companies can build are really slick. Looks like they'll be able to cut down quite a bit on the dashboard fatigue/complexity people are wrangling.
> I use a JWT proxy + ghostunnel within GKE with a VIP, so it's not quite their reference setup, but it's extremely "just works", aside from GKE being weird and eating its own routes.
Interesting, so JWT proxy (or any other auth mechanism that is viable over HTTPS) -> ghostunnel machine w/ public VIP -> Target machine? Or ghostunnel running directly on the Target machine, which holds the public VIP? Or does the JWT proxy take the public IP and the ghostunnel machine keep the private VIP?
Apologies just want to be able to picture your solution clearly.
> BTW, side note, but try out ghostunnel over stunnel! I've really enjoyed using it and it's been fantastic to debug and work with.
Thanks for the recommendation of ghostunnel; I'll reach for it over stunnel next time I hack together something like this.
BTW: super-side note, the Breath of Fire III avatar was a blast from the past; instantly recognized it.
haha i've used garr as an avatar for a super long time now!
> Interesting, so JWT proxy (or any other auth mechanism that is viable over HTTPS) -> ghostunnel machine w/ public VIP -> Target machine? Or ghostunnel running directly on the Target machine, which holds the public VIP? Or does the JWT proxy take the public IP and the ghostunnel machine keep the private VIP?
The JWT proxy takes in the IAP JWT (they give you the audience, and it's just parsed); this lives in the same pod as ghostunnel. ghostunnel goes through a NAT to a public dest, where ghostunnel is _also_ running. It has extremely strict TLS requirements (a forced valid CN to be sent/accepted, strict DNS, along with a single-purpose CA, cert, and key).
There's a more modern way to do this, but this works really well, gives absolutely fantastic introspection, is super easy to use as a public proxy, and allows you to only use minimal APIs in GCP (IAP + GKE; you don't need GKE, but you also don't wanna manage all the things it does for you with annotations. :D)
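A rough sketch of the two ghostunnel ends; hostnames, ports, and file names here are made up, and the real flag set is in the ghostunnel README:

    # pod side: accept plaintext from the jwt proxy, speak mutual TLS out
    ghostunnel client --listen localhost:8080 \
        --target public-dest.example.com:8443 \
        --keystore client-keystore.p12 --cacert purpose-ca.pem
    # public side: accept only certs from our single-purpose CA with the
    # expected CN, then hand off to the local target
    ghostunnel server --listen 0.0.0.0:8443 --target localhost:22 \
        --keystore server-keystore.p12 --cacert purpose-ca.pem \
        --allow-cn tunnel-client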
You mean a UI in which all the clickable elements are obvious, features are discoverable, it’s fairly obvious what you’re interacting with, and keyboard navigation works well? I didn’t know big tech companies could still do this either.
In my opinion, this looks far better and more usable than a lot of new UIs nowadays.
I’d say appearance != usability, and while this might look a bit dated, it probably behaves a lot more like a desktop application than most Electron apps out there.
To me it looks very much like mRemoteNG (https://mremoteng.org/). Is this just because the same WinForms libraries were used or is there some meat to this?
Nowadays I'm happy if it even bothers to register all keystrokes. Somehow this has become more difficult now that CPUs are many times faster and no longer queue all keystrokes to ensure each one eventually gets handled.
The issue with Electron is that devs try to mimic the browser's UI in a desktop environment. Our desktop OSes need a desktop UI, not the mobile-webpage Material/Flat UI design (no visual box lines, no separators, no multi-window support, the settings menu pushed into a sidebar, etc.). An Electron app should be treated as a desktop app with the benefits of desktop, not as a mobile app; that's the issue.
It's not. This is a regular .NET Windows Forms app and the docking/tabbing stuff is DockPanelSuite, I'd recognize it a mile away. This is confirmed by poking around in their .csproj.
This sounds like a deal breaker for some use cases: https://github.com/GoogleCloudPlatform/iap-desktop/wiki/Trou... "Because of the way IAP Desktop tunnels RDP connections, it always uses NTLM for authentication and can't use Kerberos." There may be environments that lose the security benefits of Kerberos over NTLMv2 (e.g., mutual authentication) because they've been forced into a new compliance mandate that dictates adoption of Zero Trust in all available contexts.
Looks like Microsoft's decision to go with increasingly elaborate challenge-response schemes instead of properly encrypting the whole connection (like SSL/SSH) will be haunting us for a while yet.
I don't understand why RDP/SMB/... with plaintext auth over SSL hasn't been a thing for at least a decade, does Microsoft just not care about transport security?
No, I meant the client private key that gcloud uses to authenticate itself (on your behalf) to Google's servers, not you to your servers. That wouldn't be an SSH key, probably TLS or hand-rolled crypto.
----
Also, now that you mention it, even if I encrypted the generated SSH key, wouldn't running a `gcloud ...` command again just ... re-generate the key, in unencrypted form?
Check out this guide I published today. It walks through the code to do the secure tunneling part in ~20 lines of Rust, using Ockam, a library for creating end-to-end encrypted secure channels.
Here's an open source, general-purpose version for access to any cloud or resource, in fact. This one is specifically about SSH, but it can support any protocol or app with the vanilla SDKs and tunnelers - https://ziti.dev/blog/zitifying-ssh/
I'm sorry if this sounds completely ridiculous to some people, but what do people use RDP/Windows server for in 2021? Given that ASP/Dotnet is portable to linux, what are people building that isn't better deployed to linux? It can't just be the legacy use-case, can it?
RDP is pretty easy to explain: if you need a GUI, RDP is infinitely smoother than VNC or anything else the Linux ecosystem has to offer. I even use it on Linux for things like livestreaming (OBS running in a minimal GUI like openbox). There are many similar workloads that are less "servers" and more "cloud workstations" that use GUI apps.
As for Windows in general, 90% of the Windows servers I see fall into one of the 2 categories: 1) they run some Windows-only software or 2) the sysadmins at the company know Windows, so that's what they use. If your company relies on something from 1), you're very likely also going to fall into 2).
Specific example: a company wanted to allow any employee to log in from any computer and be able to get work done. A VM terminal server was too heavy for their network and server budget, and they already had decently powerful PCs, so they went with AD and that roaming accounts thing. They had to hire a Windows server sysadmin to manage it.
When they decided they also needed a good NAS, web server, and backup system, they already had a Windows sysadmin on staff, and the licences+over-spec required to run it on Win were still cheaper than hiring another sysadmin.
> RDP is pretty easy to explain: if you need a GUI, RDP is infinitely smoother than VNC or anything else the Linux ecosystem has to offer.
The last time I was assessing Windows remote * for performance, VNC* implementations with a mirror driver provided far better performance than vanilla RDP.
I have the complete opposite experience, but I only used RealVNC (not sure what a mirror driver is). My experience is that the Windows RDP client and the RDP server on any platform are >>> VNC in every way.
We use remote scripted InDesign instances for creating catalogues, which need Windows to run, and RDP for debugging.
I'd love to hear about alternative solutions that don't require Windows and still spit out colour-accurate CMYK InDesign files; the printing shops won't accept anything else.
The most frustrating, and most fun moments I've had hacking on a computer has been scripting InDesign, so props for that!
Most printing shops I've had to deal with will also accept PDFs; is that the case for you? Because then there's a case to be made for converting your templates to HTML (which gives you access to a huge amount of tooling) and printing them to PDF through a headless browser instance (like pptr.dev). That has worked for me in the past, and it's accessible and good quality (though I did not need accurate CMYK, so YMMV).
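If you don't need Puppeteer's programmatic control, even bare headless Chrome can do the print step (the output path and URL here are placeholders):

    # one-shot HTML-to-PDF; for page size, margins, or waiting on JS,
    # script it with Puppeteer's page.pdf() instead
    google-chrome --headless --disable-gpu \
        --print-to-pdf=catalogue.pdf https://internal.example/catalogue.html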
Accurate CMYK is the crux here unfortunately; we've got a headless browser PDF setup already working for anything that's fine with "sRGB, probably?" in terms of colour accuracy, but there's a bunch of large companies with stricter requirements (and the budgets for it, thankfully).
(Though browser-based PDF isn't a panacea either. We've got one customer whose 250 page technical manual needs between 40 and 70 minutes to generate through paged.js and Chrome… They're fine with running it as an overnight job, but it's still painful to watch.)
Yeah, I get it, that sucks :/ I do know that PrinceXML has CMYK support, so maybe check it out. Otherwise there's always FrameMaker and its Server edition if you wanna go that enterprise-y.
The last time I tried something like this, I quickly quit and used LaTeX. For all of its warts, it’s an excellent tool and it runs very quickly compared to a browser.
Jeez. Reminded of scripting Quark with AppleScript. And Photoshop CMYK TIFF separations; I don't recall exactly, but the workflow was scripted. I frankly can't imagine what that looks like today.
It really depends on how far away from the data center you are. I used to live close to Amsterdam and that felt basically local, even with a not-amazing internet connection. Right now I live in Valencia (Spain), with a local and not very well peered ISP. There is lag, but it is just about acceptable for platformers and action/adventure games. I wouldn't recommend it for shooters.
The cost ended up being around 1€/hr when the instance was on, and 0.06€/day for the storage. I am sure this could be optimized (e.g. by using snapshots and spot instances), but I just couldn't be bothered to. For me it is worth it.
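(For a rough sense of scale, assuming ~20 hours of play a month: 20h × 1€/h + 30 days × 0.06€/day ≈ 21.8€/month.)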
I don't use a local VM or bootcamp because I just don't have a powerful enough computer to run AAA games from the last 5 years or so - even earlier if the game is badly optimized. Now with an M1 they're not even compatible.
Stadia, GeForce NOW, Xbox Game Streaming and PlayStation Now: I've tried them all, and GeForce NOW specifically is very good. But, they only allow me to play a subset of my large pre-existing game library, and/or make me buy games again, which I'm not interested in.
Stadia is especially bad at this: having to purchase on a game-by-game basis, not being compatible with any existing ecosystem (Steam, PS, and Xbox have cloud saves and friend lists), and then trusting Google to run my game forever on their servers after paying a one-time cost.
In a large-scale enterprise environment, say 100k+ seats with a mix of MS Office, scientific/engineering use cases, Windows, Linux & OSX, Windows is still the way to go for the forest, tying it in with JAMF and Linux integrations.
Less the legacy use-case than how do we integrate everything with everything.
.NET Core might be portable; however, it is a tiny subset of what .NET Framework is capable of.
Additionally lots of enterprise shops, despite what HN crowd thinks, are mostly on Microsoft stacks, so that .NET application is going to connect to other APIs and stuff not available on GNU/Linux.
Then, many companies use VMs as desktops for contractors: you are just not allowed to plug anything into their network, so you are using Visual Studio via RDP/Citrix.
As an example, in life sciences most of the laboratory hardware only has APIs available via COM/.NET Framework.
I run many GUI apps remotely. I don't waste time porting stuff that works best on Windows to Linux (I have two machines). Not all use cases are development.
No, part of managing a herd is having the right tools in place. Like monitoring, logging, and observability tools.
There is nothing I can learn from accessing a VM in production that I can't learn from my monitoring system.
In prod where I work, if someone logs into a production VM we mark it tainted and replace it with a fresh instance. This keeps things nice and consistent.
If you need an interactive session on a prod machine, you are missing tools.
> There is nothing I can learn from accessing a VM in production that I can't learn from my monitoring system.
Other than how to fix gaps and other problems with your monitoring? As you get experience, you'll learn this is maintained like a garden: you can heavily reduce your need for interactive sessions, but it never goes to zero.
I think you’re making the classic mistake of treating a guideline as more of a religious mandate. Yes, it’s good to have servers be easily replaced but that desire does not magically rewrite all existing software or retrain every IT worker.
Similarly, automation is great but you need to develop and maintain it - which almost always involves interactive work. The taint process you mentioned is a popular way to balance those needs long-term.
Finally, if you are thinking of “server” as only a production-hardened network service you’re missing out on a lot of other things enterprises use cloud services for, such as developer workstations or general virtual desktops. Many places heavily expanded that over the last year because you avoid the security concerns about having your data on easily lost/stolen laptops and can avoid turning your VPN into a massive bottleneck for the entire company.
> In prod where I work, if someone logs into a production VM we mark it tainted and replace it with a fresh instance. This keeps things nice and consistent.
That doesn't make sense from an ROI perspective at a great number of businesses. Like, "this would take a decade to pay off, and that's assuming it requires no maintenance" kind of bad ROI.
Lots of places, you script vm/server configs (even just with bash) and get CI running automated tests on important branches, and you've captured 99% of the benefit available from automation. Would the other stuff be nice? Yes, but five people saving 15 minutes per week means you can't reasonably spend the kind of time on it—for initial set-up and for ongoing maintenance—that you would if it were fifty people saving 15 minutes per week, let alone 500 (at that point you can have a couple people dedicated full-time to just that one piece of automation, and it's still saving you money).
Plugging all our cattle into heart-rate and blood-pressure monitors and doing frequent blood draws from every cow "just in case" is wasteful and unnecessary. There is a balance between sensible, general, always-available monitoring and special-case debugging of a problem.
My rule for that is: does it happen more than once a year, or take more than 6h? Automate and tool it. Less? SSH or other special-case tools are fine.
Reading through this on the surface, it appears as though there is a mix of trust relationships that pre-exist and credential issuances that occur on the fly. It also appears there is no privilege tiering (aka an enterprise access model) applied to the example. Did I see this wrong?
I'd be interested in seeing what credentials in toto are there, and which ones are ephemeral, and susceptibility to lateral traversal.
For Linux systems at least, IAP doesn’t deal with privilege tiering. Instead, OS Login handles mapping a user’s Google account to a local account. There is also a program that queries a user’s SSH key from OS Login, and passes it to sshd when asked.
OS Login defines two IAM roles, one for “Can I log in?” and one for “Can I sudo?”. Those are implemented on the system via PAM, so you can add whatever additional restrictions you’d like.
Fetching of user information via OS Login is implemented via an NSS module. POSIX attributes can be customized via the Google Directory API. And I believe Google Groups can be mapped to POSIX supplemental groups, but I'm not certain.
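For reference, the gcloud side of that looks roughly like this (the project and user below are placeholders):

    # turn on OS Login project-wide
    gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE
    # the "Can I log in?" role
    gcloud projects add-iam-policy-binding my-project \
        --member=user:alice@example.com --role=roles/compute.osLogin
    # the "Can I sudo?" role
    gcloud projects add-iam-policy-binding my-project \
        --member=user:alice@example.com --role=roles/compute.osAdminLogin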
[0]: https://fly.io/blog/ssh-and-user-mode-ip-wireguard/
[1]: https://vadosware.io/post/stuffing-both-ssh-and-https-on-por...