Testing if a port can be reached, using built-in tools other than ol' telnet (carehart.org)
122 points by rmason on Oct 7, 2023 | 83 comments



Ha! This was probably the first serious problem I ever tackled with an open source contribution!

The year was 2002, the 2.4 Linux kernel had just been released and I was making money on the side building monitoring software for a few thousand (mostly Solaris) hosts owned by a large German car manufacturer. Everything was built in parallel ksh code, “deployed” to Solaris 8 on Sun E10Ks, and mostly kicked off by cron. Keeping total script runtime down to avoid process buildup and delay was critical. The biggest offender: long timeouts for host/port combinations that would sporadically not be available.

Eventually, I grabbed W. Richard Stevens’ UNIX network programming book and created tcping [0]. FreeBSD, NetBSD, and a series of Linux distros picked it up at the time, and it was a steady decline from there… good times!

[0]: https://github.com/mkirchner/tcping

edit: grammar


I worked for Rich when he was writing UNP. Seeing comments like this 30+ years later reminds me how fortunate I was to spend time with him early in my career.


The syntax is different and I don't remember who compiles the binaries, but as a primarily Windows sysadmin, tcping is literally the first thing I put in my %PATH% on a new jumpbox; even ahead of dig, believe it or not.

(After that it's Russinovich's Sysinternals suite.)


Awesome, I think I had the same book in the 1990s. Things were so simple back then, when the default approach to anything was to start writing some C code with socket calls.


RIP WRS. Such a fantastic book.


When I was reading his books I used the utility he wrote, sock.

https://github.com/keyou/sock


How is this different from just using netcat?

Netcat even has port scanning capabilities.
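For example, with the OpenBSD netcat that ships on many distros, a quick scan of a small range looks something like this (host and port range are placeholders):

    # -v: verbose, -z: just connect, don't send any data
    nc -vz example.com 20-25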


Not sure I'd like to send anything to the tested port just to tell if it is open. I'd rather stick with telnet or whatever is available.

Also worth mentioning that bash has network capabilities too; you can check a port like this:

  : < /dev/tcp/google.com/80 && echo OK || echo ERROR


that is so cool, my god! Bash is this awesome underrated thing (relative to how I perceive it being used). It's so cool that there's basically a virtual filesystem that maps the internet to your disk...haha.

This even works on my Mac.

But one issue I encountered was timeout. I just tried:

  : < /dev/tcp/google.com/8080 && echo OK || echo ERROR
and it hung. So I asked ChatGPT and it suggested

  timeout 5 bash -c ': < /dev/tcp/google.com/8080' && echo OK || echo ERROR


There isn't a virtual filesystem; bash just pretends there is. You can't use the /dev/tcp path in your own program as it doesn't exist.


Unless...I program in bash! ^^ haha :)

Yeah, I didn't know that. I definitely had a suspicion it was as you say, and "basically" was my fudge word, because I didn't know. I hadn't thought much about it, but I did not know for sure. Thanks for telling me.

I've thought about this idea: mapping the internet to the Unix filesystem, a la proc. I know TabFS, but I think the mapping could be better.

I think it's an interesting problem to consider what a good mapping would be. But it's all tradeoffs I guess. I haven't thought that much about it. But it did seem interesting.

Maybe not interesting enough to really commit to, because it seemed kinda like a big project. But definitely interesting to consider. What do you think? What kind of mapping would you do?


Your concern about not wanting to send bytes is totally valid.

As it happens (if memory serves), telnet can also send some bytes on connection, attempting to negotiate terminal settings with the remote “telnet server”. That said, the /dev/tcp trick is indeed great for bash!


This is what I was trying to remember, I've seen this before but never understood it. What does the colon do?


: is a special built-in that always succeeds and always returns an exit status of 0. You typically use it when you need a command for syntax reasons, but don't actually need/want to run a command.


Example?


it makes the smiley face look very sad : < while giving an exit status of zero (in this case only if bash can resolve the input redirection by opening that tcp port)


It's equivalent to 'true': it returns a successful exit status.
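For instance (an illustrative sketch, not from the thread), two common uses of ':':

    # No-op body where a command is syntactically required
    while :; do date; sleep 1; done

    # Expand ${VAR:=default} purely for its side effect of setting a default
    : "${EDITOR:=vi}"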


> Not sure if I'd like to send anything into tested port just to tell if it is open.

can you elaborate? From a stealth perspective, probing for a port will always reveal to the other side that they are being probed


It's not about stealth, it's about a bad habit of sending bytes without thinking about how they will be interpreted.

Examples:

- A syslog-ng server will interpret them as valid log messages

- An HP JetDirect on port 9100 will print the HTTP request: https://twitter.com/AviKivity/status/1405147699557638145

- What if the service is protected by fail2ban counting protocol errors?

After all, sending anything was not my intention, so I won't send anything.


Could crash the server if it's badly written


This is very useful, will save me some hassle. Thanks!


I’ve never used ‘curl host:port’ but use ‘curl -v telnet://host:port’ all the time on Linux boxes that don’t happen to have telnet installed. Perhaps he omits the telnet:// on the curl because I believe the Windows curl doesn’t support it.
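For a check that fails fast instead of hanging when the port is unreachable, the same idea works with a connect timeout (host and port are placeholders; both flags are standard curl options):

    # --connect-timeout limits how long curl waits for the TCP handshake;
    # on success you still break out manually (Ctrl+C), as with plain telnet://
    curl -v --connect-timeout 3 telnet://example.com:25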


Most developers will have Git and Git Bash installed nowadays, which ship standard curl (along with all the basic useful bash utilities: sed, find, grep, tr, sort, uniq, etc.). We’ve given up on writing anything Windows-specific at work, as the native scripting format and options are pretty limited. The most we will do is call setx or similar if we need to set up a user environment variable in our scripts. It has saved so much of our sanity and time, as bash scripts are so much easier to write and maintain.


Kind of unrelated but here's a little bit of bash I use for checking this locally:

  # Check if the port is available
  is_port_free() {
    if [[ "$1" =~ ^[0-9]+$ ]] && [ "$1" -ge 1024 ] && [ "$1" -le 65535 ]; then
      echo "Valid port number." >&2
    else
      echo "Invalid port number." >&2
      return 1
    fi

    # netstat -lnt lists listening TCP sockets; field 4 is "local-address:port"
    if netstat -lnt | awk -v port="$1" '$6 == "LISTEN" && $4 ~ ":"port"$" {exit 1}'; then
      # Return a truthy value if the port is available
      return 0
    else
      # Return a falsey value if the port is in use
      return 1
    fi
  }


netstat seems to be disappearing from some linux distros - pretty sure it's not on my current Ubuntu by default, so I have to use 'ss' instead (which does seem to be quicker).


Cool, thanks for the tip! Yeah, I noticed I had to install netstat sometimes. Maybe I'll update my scripts to use ss instead. Do you have the conversion? I guess I could ask ChatGPT, but...while you're here? :) heh
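For what it's worth, a minimal sketch of just the lookup part using ss from iproute2 (assuming its default column layout, where the fourth field is local-address:port):

  # ss -l: listening sockets, -n: numeric, -t: TCP only.
  # The header row falls out because its first field is "State", not "LISTEN".
  # Returns 0 (success) when nothing is listening on the given port.
  is_port_free() {
    ss -lnt | awk -v port="$1" '$1 == "LISTEN" && $4 ~ ":"port"$" {found=1} END {exit found}'
  }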


I use curl inside a lot of Linux containers for this, because curl is more frequently installed, and you cannot easily install new things unless you're root, which is not possible in many cases.


> Another beneficial difference of this over the old telnet approach is that with that, a successful connection would show a blank screen, awaiting commands. You'd have to know telnet keystrokes and commands to break out of that

Not when it says "Escape Character is 'CTRL+]'" every time you run it.

Or does that depend on how you run it with the windows version?


I've tried many times but never succeeded in breaking out of a telnet session using that key combo. Perhaps my Swedish keyboard layout is to blame, but that combo is simply not working. I've had similar issues with the Espressif idf.py tool too...


I worked in a very locked-down environment without the possibility of installing other tools, and I found that the openssl command was always available:

  $ openssl s_client -connect <IP>:<PORT>
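One caveat: s_client does a full TLS handshake, so it hangs waiting for input on success and is only really meaningful for TLS ports. Feeding it an empty stdin makes it return right away (a sketch; the host is a placeholder):

  # Exit status is 0 only if both the TCP connect and the TLS handshake succeed
  echo | openssl s_client -connect example.com:443 >/dev/null 2>&1 && echo OK || echo ERROR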


I use something like this with Python, since Python in its simple form is available on most distros/containers by default and should be robust enough:

  python3 -c "import socket, sys; host, port = sys.argv[1], 80; s = socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.settimeout(10); result = s.connect_ex((host, port)); s.close(); sys.exit(result != 0)" unknownlxlaxkck.com 2>/dev/null && echo connected || echo fail


The traceroute command on modern Linux and macOS supports TCP, which I've found to be quite useful since you can see where in the path a connection is failing.
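A sketch of what that looks like (flag names differ between implementations, so check your traceroute's man page; the host is a placeholder):

    # Linux traceroute: -T sends TCP SYN probes, -p sets the destination port
    # (raw sockets usually need root)
    sudo traceroute -T -p 443 example.com

    # The BSD/macOS traceroute selects the protocol with -P instead
    sudo traceroute -P tcp -p 443 example.com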


I’m surprised there is no mention of netcat (nc).


Yes

    nc -vz host port
is my go-to.


And the -u option lets you send and receive UDP!
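A sketch of the UDP flavor (host and port are placeholders); note that UDP "succeeded" results are inherently shaky, since silence can mean an open port or just a dropped packet:

    # -u: UDP, -z: don't send a payload, -w2: give up after 2 seconds
    nc -vzu -w2 example.com 123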


Netcat got replaced by socat, right?


I use socat for serial-line things (like simulating a serial port for unit testing). It didn't occur to me that it can be a netcat replacement.


There is, near the bottom.


Oops, so there is!


hping3 is a much under-rated utility for all such needs ;)


`nc -z x.x.x.x nnnn` works well enough for me.


If netcat is installed


I sometimes wonder why we still use port numbers. Wouldn't it make much more sense to use strings to name ports?


The wire format for UDP and TCP specifies 16-bit integer values. As another poster mentioned, you can use OS-level services to map a string to well-known ports, but encoding ports as strings at the wire level would've introduced a lot of serialization complications and performance concerns for parsing packets.

For ephemeral ports, it isn't clear what value there would be for a string identifier vs. the fixed-width integer.


/etc/services assigns a name to well-known port numbers.

It's ancient, but still present on every Linux and macOS system.
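For example, looking a name up locally (the grep is the portable option; getent isn't present on macOS, so treat it as a Linux-side alternative):

    $ grep -w '^https' /etc/services
    $ getent services https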


Yeah, but it's still port numbers under the hood.

The big downside of this is that you can get clashes. For example, VNC uses 5901 and up on my system. What if I have some other service that uses ports in the same range?

A more sane approach would be to call those ports e.g. vnc/work, vnc/meeting, etc. so there would be no conflicts and you would know what each port is used for. And it would work even if you don't have write-access to /etc/services.


The reason this hasn't been done is that it would require changes to the headers of TCP and UDP, which would be a massive undertaking (for similar reasons to why IPv4 -> IPv6 is such a pain).

Would both the source and destination ports be strings? Normally, for a client->server packet, the source port is meaningless, so what string would you use? Or would your new protocol support both string ports and integer ports?

Having ports be strings in the TCP header would make the header considerably longer - supposing your protocol had one fixed length 32 ASCII char string port, you'd be looking at something in the region of a 2x increase in header length.


Couldn't the port string be mapped to a temporary port number when the TCP connection is initiated? It'd still be necessary for UDP though.

It'd still be a massive undertaking though.


Yes, and there are already mechanisms for doing just that - look up DNS SRV records: https://en.m.wikipedia.org/wiki/SRV_record
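For example, an SRV lookup with dig (record names follow the _service._proto.domain convention; the domain here is just a placeholder):

    $ dig +short SRV _xmpp-server._tcp.example.org

Answers come back as "priority weight port target", so the port number lives in the DNS record rather than in a hardcoded list.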


> for similar reasons to why IPv4 -> IPv6 is such a pain

Much less painful though because only the endpoints would need to know.


> A more sane approach would be to call those ports e.g. vnc/work, vnc/meeting, etc. so there would be no conflicts and you would know what each port is used for.

More sane, but absolutely slow. Every gateway, NAT, firewall, router and switch between two devices would need 15x more RAM, and even with that RAM would need to parse strings, deal with unicode, etc. All of those are slow operations.

Using 16-bit numbers makes identification and routing of packets very quick.


It would also be faster if we referred to files by their inode numbers. But we have filenames, and I don't want to speak for everybody but I think people like them.


> It would also be faster if we referred to files by their inode numbers. But we have filenames, and I don't want to speak for everybody but I think people like them.

Actually, we do refer to files by their inode number, not by their name. The system uses the name to look up the inode number, and then uses the inode number.

Maybe I misunderstood you initially[1], but you weren't proposing to keep the numbers and use a lookup service for the name. AIUI, you proposed to replace the number with a name, no?

In which case, absolutely no one is proposing to replace inode numbers with names, so inode numbers/filenames as an example does not validate or lend support to your proposal for replacing port numbers with names.

[1] I'm on a personal quest to stop misunderstanding people in a way that reduces the strength of their argument. It's not going as well as I thought it would, and I still do it sometimes.


Well, frankly I don't care how the ports are implemented, by numbers would be fine. As long as it means that from the user's point of view they are names. A config file like /etc/services is too simple because I create ports all the time, even from scripts, and I don't want to always become root to change the config; and other systems that want to connect to ports on my machine can't read /etc/services on my machine.


If you want to do a lookup translation and only use names on your local machine, it's only an hour of work or so to create a program that modifies the services file, and then make that program setuid root.

However it only works on your machine.

If you want it to work across your local network, DNS SRV records on your local network would work.

The proposal to use names and not numbers breaks down when you want the rest of the world to follow suit, because it's not technically possible within IP, only on top of IP.


> The proposal to use names and not numbers breaks down when you want the rest of the world to follow suit, because it's not technically possible within IP, only on top of IP.

This is exactly my point.


> > The proposal to use names and not numbers breaks down when you want the rest of the world to follow suit, because it's not technically possible within IP, only on top of IP.

> This is exactly my point.

In which case, refer to my original response to your proposal - it's not practical to slow down every networking device in the world by a large factor.

It's like asking "why aren't we commuting at 1/3 the speed of light?": it's neither feasible nor practical.

Sure, it's possible, in that the physics involved make it possible, but not practical, because the limiting factor is not the physics involved.


Surely we can build some protocol on top of IP. I see people mention SRV records. These seem useful, but they're not very user-friendly at this point, so people don't use them.


DNS has SRV records. That is all you need, that is your solution, it exists. That they are not widely adopted in the manner you propose may be unfortunate, but completely upending protocols doesn't seem like a better or more realistically adopted solution either.


I don’t see much value. Each packet has a source port as well as a destination port. The source port is usually random and is the port the server responds to. Proxies, NATs, and firewalls all handle port numbers very efficiently, and the TCP and UDP headers themselves only use 16 bits + 16 bits on the wire. These are aligned to a 32-bit boundary, which helps efficient hardware and software parsing. There’s no variable length to worry about and waste more bytes on. There’s no string encoding like UTF-8.



I wish SRV records were more widely adopted, but they are still just a (string -> integer) mapping process above the 16-bit integer values on the wire. Different layers.


And? That's primarily what DNS is for: mapping strings to numbers.


I was just trying to clarify some muddled discussion about different layers of the networking stack. The original comment about port names instead of port numbers didn't seem to be made with an understanding of the different layers.



I built a tool for this purpose, inspired by a comment on lobste.rs[0].

You can check a port like this; it saves a few round trips on a good day.

    $ dstp google.com --port 80

    Ping: 27.44ms
    DNS: resolving 216.58.212.14
    SystemDNS: resolving 2a00:1450:4017:804::200e, 142.250.187.174
    TLS: certificate is valid for 64 more days
    HTTPS: got 200 OK


[0]: https://github.com/ycd/dstp#motivation


netcat, (v)erbose, (z)ero payload with 1 second timeout:

    $ nc -vzw1 host port


IIRC there's also the `ftp` command shipped with every Windows install.


Windows can install a full Linux distro via WSL. You can have nmap in two minutes!

I like it much better than a MacBook with its incompatible ARM CPU and outdated commands from some ancient BSD!


GNU tools are generally one command away on macOS. So is nmap, and it takes a lot less than 2 minutes to have it running directly on the host. What's ARM incompatible with? Certainly not Linux distros, certainly not GNU tools, and certainly not networking. Not to mention that for most things that don't run native, Rosetta 2 makes them work at near native performance.

You can also run Linux on a VM if you want. With the native macOS virtualisation stack in UTM it takes about 2 seconds to have a full Linux VM up and running.


It's been a year or two but last time I tried networking tools like nmap in WSL I would get all kinds of errors.


I was getting errors from ping. An update fixed that; my guess is WSL2 is much better.


Big fan of using Orbstack for this. Full Linux VM with transparent access to Mac disk. I love it.


you can install gnu utils on a mac if you’d rather use them.


Yes, ever since the M1 chip it's just annoying to work on the M1. Day to day it doesn't matter much, but it wastes days of time the few times I ran into an issue. I wish companies would stop defaulting to Macs for development.


I would not work at a company that only offered Macs.


Can someone enumerate these problems? I’ve been on an M1 for almost 2yrs and I’m apparently missing out on what makes it a bad experience.


Some x86 binaries do not run on M1, which is a big deal in ERP. My colleagues spent like two weeks trying to emulate some older server.

It has little RAM, small disk... so you depend on cloud for build. No Nvidia GPU for CUDA... Some USB externals do not work...


Depends on which model you get and how you configure it. You can easily get up to 96GB of RAM and 8TB of SSD, if that's what you really want. It's damn bloody expensive to do that, but the possibility is there.

So, you can't legitimately say that the memory is always low and the disk is always small. That's just a configuration thing. If you don't configure it right, then I don't hold out much sympathy for you.


What?


Heh, I asked for a Windows laptop. I am the only one in like 30 people.

My laptop has 64GB RAM and 2TB ssd. And unlike Mac, it drives 4x4k displays via USB4 hub. Even with all those extras, it costs less than Macbook, and I have some hardware budget still available:)

And it came with an unlocked BIOS and no spy software... Windows is basically unsupported by our IT...


Depends on what Mac you get, but 64GB of RAM and 2TB of SSD with support for four 4k displays is pretty easy with the current MacBook Pro. It might be more expensive than your garden variety Windows laptop, or maybe not if you find one of the great deals that are frequently offered by B&H Photo or Adorama.

No spy software will be installed, and the latest version of macOS is making it harder and harder for people to create malware that can easily infect Macs.


`brew install nmap`
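And once it's there, a single-port reachability check is one line (host is a placeholder):

    # -Pn skips host discovery, -p picks the port
    nmap -Pn -p 443 example.com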




