SSH over HTTPS (trofi.github.io)
426 points by jandeboevrie 8 months ago | 129 comments



From the article:

> Ubiquitous presence of HTTPS allows you to pass your data through very restrictive middle boxes!

This is, in fact, why all — or nearly all — proprietary VPN protocols (so-called "SSL VPNs") implement a mode that initiates a tunnel via HTTPS, at least as a fallback if not as the primary mode of operation: precisely in order to have a mode of operation that works with almost any connection to the global Internet.

I'm one of the main developers of https://gitlab.com/openconnect/openconnect, which implements many such protocols, and wrote https://github.com/dlenski/what-vpn, which sniffs or identifies even more flavors of TLS-based VPN servers.


It is tragic that we can't seem to use ports for their intended purposes because of crude attempts to limit the utility of a connection.


Exactly this. Continued ossification of the stack. We practically gave up on introducing new protocols on top of IP decades ago, because parts of the internet drop everything that's not UDP or TCP, and because of NAT. Now we're at the next step, where we design protocols not on top of UDP/TCP but on top of HTTPS. You'd think that thanks to TLS this will be the final step, but I guess we should never underestimate the idiocy of mankind.


Thankfully, with Encrypted Client Hello (formerly ESNI) we should be pretty future-proofed, with middleboxes losing ever more information on where an HTTPS connection is going. The only thing left is the IP address, which is only a guess and is mostly useless if it's going to a CDN - effectively making all sites unblockable if they use akamai/CF/etc.


>...if they use akamai/CF/etc

Well there you go. We went down to 1 port, then down to one protocol, now we'll soon be down to 1 company (akamai/CF/etc).


With how many use big CDN providers, I imagine most network admins won't even bother trying to whack-a-mole ban IPs to try to block content - especially since most objectionable content, or at least content specifically made to evade filters, will be behind the CDNs anyways.


How far away do you think we are in terms of having ECH support in OpenSSL, all the popular web browsers and proxies, etc...?


On the other hand, HTTPS provides private, reliable, bidirectional, multiplexed communication... exactly the same as TCP. As long as the initial handshake is HTTPS on port 443, you can tunnel TCP through any HTTPS proxy. They can't stop it, and if you use two layers of encryption, they can't tell what you're doing.
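
A minimal sketch of that idea, assuming a CONNECT-capable proxy at proxy.example.com:3128 (hypothetical), BSD netcat on the client, and sshd reachable on 443:

    ssh -p 443 -o ProxyCommand='nc -X connect -x proxy.example.com:3128 %h %p' user@ssh.example.com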

AND the client can run in JavaScript (or WASM these days)

And nobody will let companies turn that off, because that would mean ad networks would lose out. Which means none of the free services on the internet would work over such a connection, which means companies would either have to pay to implement their own services or do without those services. Both are inconceivable to managers, so ...


I don’t think carriers being greedy monopolists is the idiocy of mankind; more like we are idiots for allowing regulatory capture to not only permit but encourage them in this sort of abusive rentseeking.


I think it's more expected than tragic. Old protocols leaked metadata all over the place because their designers didn't know any better. Every single packet used to be tagged with each app's number, making it convenient for every network in between to fuck with your data. Modern protocols reveal nothing until the TLS handshake is complete.


ALPN (if set), SNI (ESNI still a draft), IP (unavoidable)


With modern software like Chrome, Caddy, etc, ALPN should always be set to h3 and ESNI should always be enabled. Doesn't matter if it's a draft. I don't care about leaking my IP. Most people don't


Only if you use h3. There are other ALPN values.


Why do I care? All the clients I use send the same ALPN values and most of the servers I talk to just use h3


Ask yourself - I don't know why you are commenting here. H3-only setups are rare.


We can though. Look at games, lots of custom protocols and custom ports. They still work. Shitty corporate networks are not the norm.


I could have qualified "can't" with "reliably" or "consistently", I suppose.


Back in the day, XS4ALL, a Dutch internet provider, had exactly this feature. They provided ssh access via port 80. It saved me a couple of times while I was traveling and the only way to get internet access was via hotel WiFi, which blocked everything except port 80. If anybody from XS4ALL is reading this... Thanks!


XS4ALL was amazing and it’s a genuine shame that KPN corporate decided to dissolve the brand. But I guess, KPN wouldn’t have been comfortable with XS4ALL’s hacker ethos anyways…


Fellow xs4all user here, it was fantastic, the real spirit of the early internet.

Sort of a redo of the pirate radio ethos of the 60s.

https://en.wikipedia.org/wiki/Pirate_radio_in_Europe


Among the non-standard ports used for SSH, 443 is one of the most common:

https://www.shodan.io/search/facet?query=ssh&facet=port

https://www.shodan.io/search/facet.png?query=ssh&facet=port

Port 80 is a lot less common though.


I didn't realize they were a full on ISP! I recall using them back in the day as a newsgroup provider.


XS4ALL sort of lives on in the form of Freedom - https://freedom.nl/en


  had exactly this feature. They provided ssh access via port 80.
OP is describing something different:

- different port (443, not 80)

- different protocol used on that port (https, not ssh)


It seems the same to me: using a port that's open for a commonly used protocol, so http (80) in the 90s, https (443) now. Of course the protocol is different, that's the point!


It's not the same at all. OP's port 443 is not 'open' in the same sense that GGP's port 80 was 'open'.

In the old days, only the port number mattered. Today, DPI means the protocol matters as well.


The SSL negotiation part happens before any other communication. Once the encrypted connection is established, how do you analyze the protocol?

Edit: I tested that a while ago:

https://news.ycombinator.com/item?id=38753897

And to save roundtrips: I believe it must be possible to analyze encrypted traffic to find out which protocol is used. But I doubt that the hospital admins are so motivated or sophisticated.


> The SSL negotiation part happens before any other communication.

An SSH server and client do not use SSL/TLS to set up the connection. They use the SSH protocol.

As soon as you connect to an SSH server, the server sends an identification string. The identification string always starts with:

  SSH-
It's trivial to detect.
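
For example (host and version string hypothetical), any scanner can read the banner without speaking SSH at all:

      $ nc -w 2 ssh.example.com 443 | head -c 20
      SSH-2.0-OpenSSH_9.6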

In the old days, corporate firewall rules were based solely on port numbers. So you could connect to an outside SSH server running on port 80, even if port 22 was blocked. Nowadays, an SSH server running on any port (80, 443, or any other) can easily be detected and blocked.


OK, I believe you, but then, does the trick described in the article work?

I ask because if it works, the principle is the same: using a commonly used protocol to circumvent limitations. It used to be easier to do then, it's more involved now.

In other words: is it possible to tunnel anything through https?


> the principle is the same: using a commonly used protocol to circumvent limitations

No it's not. The earlier method used only a commonly used port, and did not require the use of a commonly used protocol.


The purpose of using the TLS layer is to prevent the DPI.


DPI has been around for a very long time.


Yes, but I'm specifically talking about a time when many corporate networks weren't yet using DPI.


Many of them still aren’t. Case in point - the firewall from the original post.


OP describes tunneling SSH within another protocol. In the absence of DPI, this wouldn't be required.


There is often also option 3: put ssh on port 80 and/or 443 on a different host and ProxyJump to the intended destination (and/or use SOCKS to that host to generally get a less filtered internet connection). I use SOCKS and also forward DNS over TLS over the ssh connection via port forwarding.

At least several years ago, when I first set up my SOCKS proxy and was using wifi quite a bit, I never found a firewall that did anything more than check the port, although I have heard they exist and could be more common now (and of course it doesn't matter how common they are if one is in your way).
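
A sketch of that setup in ~/.ssh/config (all hosts hypothetical):

    # SOCKS proxy on localhost:1080, sshd listening where HTTPS is expected
    Host jump
        HostName jump.example.com
        Port 443
        DynamicForward 1080

    Host inner
        HostName inner.example.com
        ProxyJump jump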


Back in my freshman year of high school, I was just starting to get into self-hosting. As it turns out, the school blocked websites, but did absolutely nothing about ports. So of course, I just SSH'd over to my server and carried on as normal.

Later, I was working on making an archive of Windows .iso files, and since I had some free time, I was downloading them on my laptop and then uploading them to my server with scp. As it turns out, using dozens of gigabytes, in both upload and download, on a port besides 80 and 443, is enough to finally get your traffic inspected, so around lunchtime IT finally blocked port 22. But you know what they didn't block? Every other port! So I just moved SSH to port 443 in my port forwarding and carried on as normal.

A long time later, sometime during sophomore year IIRC, the school's IT noticed me SSH-ing over port 443 and put an end to that. They set up some basic traffic analysis to block SSH on ports 80 and 443. But you know what they didn't block? Every other port!

Eventually they just ended up blocking my server at the IP level (the IP of my domain), but you know what they didn't block? Literally every other IP!

I could get around it by just ProxyJump-ing with a VPS, but being an early college high school student, after sophomore year I rarely go to the high school, so it's not really worth the effort. But next time I do go, I'll do it, just to prove I can.

If they finally block SSH on all ports, then I can just set up SSH over HTTPS on the VPS, of course. There's still more they can do, of course, but I'll come back after I graduate and see what I can do on their guest wifi.

Anyways, thanks, Birdville Independent School District IT team, it's been quite fun, though it really would be nice if you'd unblock my site so that I can provide the services the district won't (computers (VMs) actually useful for tech students).


It seems like the only way to correctly use the network is to not use it! :/


I enjoyed this story. Thank you for sharing.


A bit of a shameless plugin here.

At Adaptive [1], we are building data security infrastructure. In one of our products, we do SSH and various other protocols over HTTP3. It allows users to connect to databases, servers, and other resources over an outbound port. Similar to Ngrok and others, but it can be self-hosted. You can access it in a passwordless manner or with temporary credentials, with maker-checker protection.

[1] https://adaptive.live/


the original shameless plugin was when the wrong guy took over uBlock and sold it out to advertisers. Thankfully, the original author gorhill relaunched the trustworthy uBlock Origin plugin.


The term "shameless plug" refers to a situation where someone promotes or advertises something, often themselves or their work, in a way that is seen as unabashed or lacking in modesty.


I suspect they already know, check your original post ;-)


I get the humour now. It is funny and my bad. It is plug and plugin.


I find simply having openssh listening on port 443 alone bypasses most firewalls in practice.


Nice, for some reason I never thought about the CONNECT method like a reverse proxy instead of a forward proxy.

However, CONNECT wasn't good enough for me. I did ssh over websocket to bust through a corp proxy (it inspected https connections with a custom CA).

I modified socat to serve my ssh server over websocket through apache. I also used it on the client end with openssh's ProxyCommand. I keep meaning to upload that patch, but there are other options around (websocat, for example).
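
For the record, websocat can fill the same role without patching socat; a rough sketch (port and path hypothetical), with the server side sitting behind the TLS-terminating reverse proxy:

    # server: bridge a plain websocket on 127.0.0.1:8022 to sshd
    websocat --binary ws-l:127.0.0.1:8022 tcp:127.0.0.1:22

    # client; the proxy only ever sees a wss:// connection
    ssh -o ProxyCommand='websocat --binary wss://corp-acceptable.example.com/ssh' user@myserver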


Guess that is why many corporate proxies / firewalls block websocket by default...


Sounds like you accidentally reinvented huproxy


I used to use a tool that did exactly this nearly 20 years ago to poke a hole through corporate firewalls: corkscrew[0].

Nice standalone implementation and write up though.

0: https://github.com/bryanpkc/corkscrew
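
For anyone curious, corkscrew gets wired in via ProxyCommand, along these lines (proxy host/port hypothetical):

    # ~/.ssh/config
    Host outside
        HostName ssh.example.com
        Port 443
        ProxyCommand corkscrew proxy.corp.example.com 3128 %h %p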


AIUI corkscrew works for a specific use case:

1. you're behind an HTTP proxy, and

2. the HTTP proxy supports the CONNECT method

Around 20 years ago I did a short contract which had #1 but not #2. Thankfully, there's a tool for this, too. Of course it requires some setup on the server side:

https://github.com/larsbrinkhoff/httptunnel


In general, tunneling through HTTP2 turns out to be a great choice. There is an RPC protocol built on top of HTTP2: gRPC[1].

This is because HTTP2 is great at exploiting a TCP connection to transmit and receive multiple data structures concurrently - multiplexing.

There may be no reason to tunnel over HTTP3 itself, however, as its underlying QUIC layer already provides the multiplexing.

I expect that in the future most communications will be over encrypted HTTP2 and QUIC, simply because middlebox creators cannot resist discriminating. It may even be necessary to serve some random (perhaps AI-generated) HTTP2/HTTP3 content to mitigate active probing[2].

[1] <https://grpc.io>

[2] <https://blog.torproject.org/learning-more-about-gfws-active-...>


I never understood the point of layering an RPC protocol on top of HTTP; HTTP is already itself a request/response protocol and can be used for RPC out of the box.

Whether it's HTTP 1, 2 or 3, it doesn't really make a difference. The evolutions of that protocol are themselves somewhat dubious, and designed to exploit things you wouldn't need in an RPC setting -- they're really designed for the open Internet, not a local service.


Because http is the new tcp. Nowadays protocols are primarily developed by companies making most of their bucks with http so…


Thankfully non-web software companies still exist.


In the case of SSH, there is a single connection (in fact SSH implements its own multiplexing), so I don't see the advantage of HTTP/2.


HTTP/2 is still TCP - and thus still suffers from TCP head-of-line blocking.

HTTP/3 is over QUIC.


I prefer the opposite: http over ssh.

Only half joking. I would love to have SSH-equivalent identity management baked into a browser. I got all excited when first reading about HOBA, a proposal for public-key HTTP auth, only to find out that not only did a server implementation not exist (I could work around this), but no client (browser) implementation exists either (sort of: there is a JavaScript implementation, but that is not what I wanted).


    ssh -D "*:8080" host
This fires up a SOCKS proxy on port 8080. I use it all the time within Firefox. Legendary OG VPN.

It’s handy for nefarious use cases, but I also use it to access rmq dashboards on non-public networks in AWS.


"-D 8080" is easier to type, and listens only on localhost, which is likely what you wanted anyway.


> Legendary OG VPN

We had actual VPNs (that would also be blocked) long before OpenSSH included -D as an option.

I remember having to do multiples of -L in order to be able to successfully download a file over FTP through an SSH tunnel. Fun times. -D made life so much easier once that arrived.


> [...] FTP through an SSH tunnel. Fun times. -D made life so much easier once that arrived.

Hopefully also SFTP? Security was nice, but the real win was running like a normal application on a single port.


You can use this with socksify[1] to make any application SOCKS capable too. It was a godsend in days of old, for me.

[1] https://linux.die.net/man/1/socksify


Also tsocks


What about HTTPS client certificate?

https://techcommunity.microsoft.com/t5/iis-support-blog/clie...

I don't know enough about this to know if it meets your need but I've used them to authenticate to servers in college.


I've used client certificates for a while for my self-hosted stuff, and the UX is pretty terrible.

The moment you have a certificate loaded into your browser, every single tracker will see the availability of client-side certificates as a means to do fingerprinting. Either you configure your browser to expose your identity to every website that asks, or you get popups for every other website asking you to pick a certificate.

Web browsers could probably fix this, but client certificates are uncommon enough that I doubt they care anymore. Like HTTP basic auth (and its lack of password manager integration), it seems like this feature only remains for compatibility reasons.

Like usual, middleboxes also tend to fuck up client certificate based authentication because they can't effectively MitM those connections (they don't have the key material you're using, and while they can try to fake a website's TLS certificate in intranets, they can't fake your credentials to remote servers).

It's real unfortunate. They're still used, though; some Kubernetes networking tools automatically provision client certs to authenticate API clients within the cluster (as well as protect the traffic from snooping).


Could you provide some references please?

I have used client certificates since they are required in my line of business, and I had the impression that they are only presented when asked for by the website, and only if I explicitly allow it.

Other than the sites that I know require them, I have never been asked to choose a certificate when browsing random websites (windows 11).


Oh, they're not always presented when asked! That'd be one hell of a privacy risk. I think there's a setting for IE/Edge that'll automatically present them for intranet websites, but public websites will prompt you first (unless you dig into the registry and override this behaviour).

Maybe the situation changed, or ad blockers have become better in the mean time, but last time I used them in Firefox, I was bombarded by client certificate requests browsing around the web.

The problem I have with the UX is that the certificate selection screen is a modal dialog that any URL seems to be able to bring up, and I found this abused in the wild. This, combined with the countless requests, made me move away from using them.

Another issue I struggled with was that every now and then I'd pick the wrong certificate and I couldn't for the life of me figure out how to correct this without restarting my browser and losing all my work. Switching between accounts for cert based auth was plain impossible.

Lastly, there was the entirely unhelpful error state you can end up with when using client certs. Vague HTTPS errors that come down to "oh no, something went wrong, try reloading I guess", and weird side effects when certificates expired (from browsers sending old certificates to servers accepting expired certificates). The errors occur in the TLS layer, so you end up with authentication errors that seemingly end up being handled as connection issues.

If you say these issues have been resolved, I should probably give mTLS another go.


FWIW I don't use FF on the machine where I have the certificates installed; before I set up network ad blockers I had only client-side ones in Chrome, and I have used Edge mostly for the sites that needed them for the past several years.

The UI can be clunky, especially if you set up the certificates to be stored in the TPM, since then Windows sometimes pops up the dialog in a way that's easy to miss. Other than that I've no complaints. Good luck!


They seem to work ok in Windows and Linux, but modern macOS seems to screw with them. At least the ones generated by OpenSSL. :(


That's disappointing to hear, what does macOS do to screw with them?


For issuing client certificates you need to set up your own Certificate Authority (not super hard, for a minimal thing it's just some "openssl" commands).
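
Roughly this, as a sketch (names and validity periods arbitrary):

    # throwaway CA
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
        -keyout ca.key -out ca.crt -subj "/CN=Example Test CA"

    # client key + CSR, then sign the CSR with the CA
    openssl req -newkey rsa:4096 -nodes \
        -keyout client.key -out client.csr -subj "/CN=client"
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out client.crt -days 365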

However, macOS has its own framework for doing https calls, which most macOS applications use. It takes the place of OpenSSL/LibreSSL (etc) as used on Linux.

That framework does things differently, which turns out to make life difficult:

* It seems to require the certificate for your custom certificate authority to be in the macOS keychain.

So instead of having a custom CA that can be used by just your one application when doing https calls to your own remote server... you have to install your (root) CA certificate "system wide". From rough memory, that's a potential security problem, as it allows your custom CA to generate certs for any domain that macOS would now accept.

* It seems to also impose its own arbitrary standards on certificates.

It looks like anything with an expiration date of more than a year is automatically rejected.

Which for certificates embedded in applications that aren't released at least once per year (eg ours at sqlitebrowser.org) just outright kills the whole fucking thing regardless of anything else we could do.

There's no real workaround for the kind of idiocy that requires applications more than a year old not being allowed to work. :( :( :(

---

That's my rough memory of this stuff anyway, it's been quite a while since I last looked at it. Hopefully the above isn't too far off base. :)


Aren't passkeys what you are describing?


Passkeys are at the application layer.


If only ssh://google.com was a thing in browsers


In case you missed it last week, a nudge in a similar direction:

SSH3: SSHv2 using HTTP/3 and QUIC

https://news.ycombinator.com/item?id=38664729


https over https to avoid mitm?


No love for sslh?

“ Probes for HTTP, TLS/SSL (including SNI and ALPN), SSH, OpenVPN, tinc, XMPP, SOCKS5, are implemented, and any other protocol that can be tested using a regular expression, can be recognised. A typical use case is to allow serving several services on port 443 (e.g. to connect to SSH from inside a corporate firewall, which almost never block port 443) while still serving HTTPS on that port.”

https://www.rutschle.net/tech/sslh/README.html
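
A minimal invocation looks something like this (addresses hypothetical; older releases spell --tls as --ssl):

    sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:4443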


The author dedicated the first half of the article to the choice between sslh and full encapsulation. They chose full encapsulation because:

- it's one fewer service to set up

- the http server handles the connection, so the remote address is correct in the logs

- (presumably the hacker spirit favors novel solutions over existing ones)


So if I understand this correctly, apache + the proxy mod is doing the heavy lifting here, correct? It receives a request to connect on port 22 to the ssh server and, apparently, it is smart enough to just know that it needs to establish an ssh connection? I ask because I thought that CONNECT would result in TLS connections only.

Edit: I should just have looked up the wikipedia example https://en.wikipedia.org/wiki/HTTP_tunnel
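
If I read the article right, the Apache side amounts to little more than this (sketch; needs mod_proxy and mod_proxy_connect loaded):

    # allow "CONNECT localhost:22" after the TLS handshake;
    # Apache then just shuttles raw bytes, no SSH knowledge needed
    ProxyRequests On
    AllowCONNECT 22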


Long ago I used stunnel to establish encrypted connections to remotely access my sockets servers.

https://www.stunnel.org/

It didn't really matter what the protocol was; the client and server just see each other. IIRC it was also possible to connect a browser with https to a tunneled http server, or vice versa.
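
A client-side stunnel.conf for that looks roughly like this (remote host hypothetical):

    [ssh-over-tls]
    client = yes
    accept = 127.0.0.1:2222
    connect = ssh.example.com:443

Then ssh -p 2222 user@127.0.0.1 goes through the TLS tunnel.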


I'm fairly certain it just opens a TCP connection on the client's behalf and proxies the client's data over it. No knowledge of SSH required.


Yes I think you are correct. Thanks for clarifying.


Slightly related, but mostly for fun:

You can do valid HTTP & SSH on the same port: https://media.ccc.de/v/bornhack2023-56142-sexy-ssh-hacks#t=4... (Without detecting which client connects, it works just like the "valid PNG and ZIP polyglot" trick)


In fact the article mentions a tool for this (sslh), but rejects it because it hides the source IP from the HTTP backend (and other reasons).


Dumb question, but couldn't you just tell the OpenSSH server to use port 80 or port 443 or something and just connect like `ssh me@host -p 80`?


Yes, that's likely to work on many firewalls, but:

- it means you can't also serve HTTP on those ports (so you'd need a dedicated IP address for SSH), and

- as @charcircuit wrote, it won't resist deep packet inspection.

(But if DPI is a problem and you have a spare IP address, you could just use SSH over TLS without needing the HTTP CONNECT stuff and Apache.)
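
A sketch of that SSH-over-TLS variant with socat (host and cert hypothetical):

    # server: terminate TLS on 443, forward the raw stream to sshd
    socat OPENSSL-LISTEN:443,reuseaddr,fork,cert=server.pem,verify=0 TCP:127.0.0.1:22

    # client:
    ssh -o ProxyCommand='socat - OPENSSL:ssh.example.com:443,verify=0' user@ssh.example.com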


That just changes the port it uses. It wouldn't change the protocol.


Fair, I guess I was not sure if it was just blocking the port, or doing something to directly block the protocols as well.


The author values having a robust solution since he mentions the DPI resistance near the end of the article. If he was going to mask ssh as https, he might as well do it properly.


> The hospital has free Wi-fi access. The caveat is that hospital blocks most connection types. … But SSH (TCP port 22 or most other custom ports) is blocked completely

Last time I was going to be somewhere where SSH was blocked but HTTPS was very open, and I couldn't rely on phone network connectivity, I stood up an instance of shellinabox¹ in case I needed to do remote admin while there.

This has the disadvantage of not allowing direct SSH access, so I couldn't directly run scripts from local, use SFTP, or tunnel other stuff (like rsync) over SSH, but its big plus was being able to use it from any machine.

Security could be a big issue with the shellinabox method. To mitigate this, it ran in its own VM, and the only thing I allowed it to do was SSH to a specific host, authenticating with a huge password (typed manually or via USB auto-typer on other machines, via KeePass on my own); from that host I could SSH elsewhere. There was also some security-via-obscurity in the URL it was available on. Tunneling SSH would definitely have been more secure (allowing key-based auth from my machines, for a start), but I wanted the option to connect from machines (that were very locked down) other than my own. I took it back down as soon as the need had passed, as it didn't feel 100% safe, but it served a purpose.

--

[1] https://github.com/shellinabox/shellinabox


Why even bother with ssh/shellinabox? In similar circumstances I just did a straight and simple NAT, and that's it.


NATing what to what?

If simply connecting over a certain port was the answer, I could have set up SSH on port 443 and been done without even that. Though I'm happy to be told I missed something obvious, in case I have the same need again in future.

Also just thinking about the SSH connection forgets the "want it to work on any machine" extra requirement I had. Being able to run an SSH client directly _at all_ was a concern.


Personally I use https://github.com/jpillora/chisel as a reverse proxy through nginx, then connect through it using OpenVPN to bypass a similarly restrictive firewall. But this discussion is filled with other, similar hacks; I may have to try some of them.
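
For anyone trying this, a rough sketch of such a chisel setup (hosts and ports hypothetical):

    # on the VPS, fronted by nginx on 443
    chisel server --port 8080

    # inside the restricted network: local 1194 tunneled to the VPN server
    chisel client https://tunnel.example.com 1194:vpn.example.com:1194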


I learned about chisel in PEN-200 / preparing for the OSCP.

Then I learned about Ligolo-ng [1], which is a game-changer. I highly recommend checking it out. It is most applicable to a penetration test. It uses TLS, so I'm not sure it could be used to address the issue mentioned in the article.

[1] https://github.com/nicocha30/ligolo-ng


Reminds me of when I was in high school: I made a port scan script to find unblocked ports to route my server's SSH through. I was able to get it working through a port that's supposed to be used for printers. Still have SSH on the same port to this day.


I guess you need a valid SSL certificate to do this? So it may not work well if you SSH into an IP instead of a domain -- it is not always possible to get an SSL certificate for an IP address ( https://stackoverflow.com/questions/2043617/is-it-possible-t... ).

It is probably trivial for people here to get a domain and then point it to the IP, but still, this seems a minor limitation.


There is no need for a valid SSL cert; self-signed will do. The ProxyCommand just needs to not validate the cert.


You could probably use a self-signed cert, then configure socat either to trust that certificate (with the cafile option) or to disable verification (with the verify option)


Nice. My solution involved more code: https://github.com/ThomasHabets/huproxy


Didn't need to invent a new wheel here. IP tunneling over HTTPS already exists, so simply set that up and route literally any protocol over it.


You should try httpssh

https://pkg.go.dev/github.com/HimbeerserverDE/httpssh

> httpssh listens for HTTP(S) and SSH connections on the same port and forwards the traffic to the corresponding service.


The article mentions sslh, which works like httpssh. It is also possible to achieve this with haproxy, like this:

    frontend https
      bind *:443
      mode tcp
      tcp-request inspect-delay 2s
      use_backend https_loopback if { req.ssl_ver gt 0 }
      default_backend ssh
Another option is to use TLS SNI (Server Name Indication) like a virtualhost-style ssh server (https://www.haproxy.com/blog/route-ssh-connections-with-hapr...). You can use the 'openssl' command in your ProxyCommand which is pretty readily available.
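
For example (gateway and SNI name hypothetical):

    ssh -o ProxyCommand='openssl s_client -quiet -connect gateway.example.com:443 -servername ssh.example.com' user@ssh.example.com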


I'm in a similar boat: I will be spending a few days in hospital next week, though I am going with a different approach. I just added Tailscale on my devices, my firewall (pfSense), and a few other servers I have, in case I can't get the pfSense rules to direct the traffic correctly to anything else.


Funny timing, I just did HTTP over SSH the other day: https://gist.github.com/kissgyorgy/9e58881131aeea51ed0a2c8bb...


I use Cloudflare Tunnel to SSH into home from the outside over HTTPS. Advantages:

- Can open a shell from any device with a browser, no ssh required;

- Works even if home network is behind CGNAT.

Disadvantages:

- A middleman;

- Other people can open a shell if my Cloudflare auth is fully compromised (requires compromising a high security email inbox).
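
For the ssh-client path (as opposed to the browser shell), the documented pattern is a ProxyCommand through cloudflared (hostname hypothetical):

    # ~/.ssh/config
    Host home
        HostName ssh.home.example.com
        ProxyCommand cloudflared access ssh --hostname %h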


We used Cloudflare as well. I did not think compromised email would affect this, as Cloudflare requires 2FA (and the only way around that is backup codes).


Cloudflare admin account login and Cloudflare Zero Trust app (including SSH access through Cloudflare tunnel) logins are different. IIRC the only login method I could configure for the web shell on my personal account is one-time PIN via email.


Why block outbound SSH connections? Inbound I understand. And given how easy it is to tunnel over HTTPS, this seems to be targeted at accidental or unsophisticated clients - neither of which seems to be a likely descriptor of an SSH connection.


This is most likely just a security "best practice" that goes unquestioned. It is pretty much on the same level as blocking ICMP.

I remember how in the past I lost a couple of hours trying to understand why I couldn't reach a remote instance via RDP, only to finally figure out it was the port blocking on the office firewall.

I spent quite a while trying to convince the local sysadmin that this doesn't make any sense. His only explanation was "security", without any actual explanation of what he was securing us from.


Russian and Chinese ISPs are blocked on our network in the name of "security", as if real attackers would not use AWS or a western VPS provider.

It is a pain when you need to download material from your hardware suppliers' web sites.


To be fair to those ISPs, most SSH connection attempts in my logs seem to come from consumer IP addresses. Many of them in China, but also a whole bunch in South America. I think it's because of hacked IoT crapware and other malware adding unsuspecting people to a botnet. I doubt anyone is renting hundreds of internet connections just to brute force admin/admin all day.

I can't imagine those South Americans being able to access anything without clearing twenty Cloudflare CAPTCHAs (I don't think Cloudflare operates in China), but then again, I think they'll just blame Cloudflare for sabotaging their perfectly safe 100% legit web traffic.

This type of blocking wouldn't be necessary if these companies actually did something with the abuse reports they receive, but I guess blocking SSH and telnet is an easy way to stop the flood of reports coming in if you don't care about your customers.


Regarding “security”, consider that at one point in history there were no known security exploits for <whatever piece of software>, and so simply asking “why not” may have only elicited shrugs.

After a few decades of having your software’s ass handed to it though, is it so unreasonable to stop asking “why not” and instead start asking “why the fuck should we?”


They just blanket block every port that isn't 80 or 443. I see this everywhere.


But why?


security


Usually everything except HTTP/HTTPS is blocked, and HTTPS is MITMed and traffic-analyzed by some network security product.


This is a really nice idea! Excellent encapsulation with minimal setup. You could slap a SOCKS proxy (via -D) on top of that for even more usability (although, oddly, I find that these days all I ever need is DNS, HTTPS and SSH...)


That's what Protonet used/uses while making the boxes available via $randomTLD, but also keeping them accessible for support or "special services".

Always nice to see these kinds of "misuses" ;)


FWIW I used to use Anyconnect (OpenConnect for the FOSS version) for this exact reason.

Anyconnect VPN looks like HTTPS traffic and is very difficult to block, even with DPI.

Worth looking into if you need this commonly. :)


I have always kept an SSTP VPN server handy after experiencing this exact scenario of hospital WiFi that only allows HTTPS traffic, but this is an interesting alternative.


I used Parallels and a browser to remote desktop to my home Windows machine from behind a very restrictive firewall. Not sure if it works for Linux.


The title perfectly captures the state of the art. How most people think of the relationship between the two protocols. I am glad I know the truth.


This is so useful in India where several cafe hotspots allow only 80 and 443. Currently I use gotty, but I'll give SSH over HTTPS a try.


Maybe a stupid question: what happens to access to the other, usual https:// pages on the server? How do I get there if the whole HTTPS traffic goes to ssh? Or... just use plain http for those?

That is: can I route only some the.server/go-to-ssh/ endpoint to ssh, while all the other stuff stays accessible as is?


See also AWS SSM sessions. You can tunnel an SSH session over an SSM session using the ProxyCommand interface.
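
The pattern from the AWS docs, for reference:

    # ~/.ssh/config
    Host i-* mi-*
        ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"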


I had the same need to tinker from work, which blocked many ports. I used WireGuard on port 21 (ftp). Worked well.


I just keep a server (3 euro/month) online that listens to SSH on port 443. From there I can connect to anywhere.

I assume that a firewall exists that blocks ssh over 443 while allowing HTTPS, but have not encountered that yet.


The frequency of posts dealing with "new" solutions like this bothers me. Are none of these guys older than 20, or is it just AI bots? Staggeringly amateurish.


I went through an exploratory stage of this 16 years ago. It may be that more and more networks are opting for vendors who do not, in fact, offer 'internet' connections, but rather relayed proxies of various sorts, breaking things. After my bout it kinda faded, as I realized SSH-ing over almost any network was really a shoo-in. Then recently, post-COVID, I ran into similar blocks myself, so it may be on the rise with Fortinet et al.



