Ha! This was probably the first serious problem I ever tackled with an open source contribution!
The year was 2002, the 2.4 Linux kernel had just been released and I was making money on the side building monitoring software for a few thousand (mostly Solaris) hosts owned by a large German car manufacturer. Everything was built in parallel ksh code, “deployed” to Solaris 8 on Sun E10Ks, and mostly kicked off by cron. Keeping total script runtime down to avoid process buildup and delay was critical. The biggest offender: long timeouts for host/port combinations that would sporadically not be available.
Eventually, I grabbed W. Richard Stevens’ UNIX Network Programming book and created tcping [0]. FreeBSD, NetBSD, and a series of Linux distros picked it up at the time, and it was a steady decline from there… good times!
I worked for Rich when he was writing UNP. Seeing comments like this 30+ years later reminds me how fortunate I was to spend time with him early in my career.
The syntax is different and I don't remember who compiles the binaries, but as a primarily Windows sysadmin, tcping is literally the first thing I put in my %PATH% on a new jumpbox; even ahead of dig, believe it or not.
(After that it's Russinovich's Sysinternals suite.)
Awesome, I think I had the same book in the 1990s. Things were so simple back then, when the default approach to anything was to start writing some C code with socket calls.
that is so cool, my god! Bash is this awesome underrated thing (relative to how I perceive it being used). It's so cool that there's basically a virtual filesystem that maps the internet to your disk...haha.
This even works on my Mac.
But one issue I encountered was timeout. I just tried:
: < /dev/tcp/google.com/8080 && echo OK || echo ERROR
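One way around the hang is to wrap the redirection in coreutils `timeout`. A minimal sketch (the host, port, and 3-second limit are just placeholders):

```shell
# Run the /dev/tcp check in a child bash under `timeout` so a filtered
# port fails fast instead of waiting for the kernel's connect timeout.
tcp_check() {
  local host=$1 port=$2
  if timeout 3 bash -c ": < /dev/tcp/$host/$port" 2>/dev/null; then
    echo OK
  else
    echo ERROR
  fi
}

tcp_check google.com 80   # prints OK if reachable within 3 seconds
```

Note that `/dev/tcp` is a bash feature, not a real device, which is why the check has to run inside `bash -c`.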
Yeah, I didn't know that. I suspected it was as you say, and "basically" was my fudge word; I hadn't thought much about it and didn't know for sure. Thanks for telling me.
I've thought about this idea: mapping the internet to the Unix filesystem, a la proc. I know TabFS, but I think the mapping could be better.
I think it's an interesting problem to consider what a good mapping would be, though it's all tradeoffs. I haven't thought that much about it; maybe it's not interesting enough to really commit to, since it seems like a big project. But it's definitely interesting to consider. What do you think? What kind of mapping would you do?
Your concern about not wanting to send bytes is totally valid.
As it happens (if memory serves), telnet can also send some bytes on connection, attempting to negotiate terminal settings with the remote “telnet server”. That said, the /dev/tcp trick is indeed great for bash!
: is a special built-in that always succeeds and always returns an exit status of 0. You typically use it when you need a command for syntax reasons, but don't actually need/want to run a command.
it makes the smiley face look very sad : <
while giving an exit status of zero (in this case, only if bash can resolve the input redirection by opening that TCP port)
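A couple of quick illustrations of the `:` built-in (the variable name PORT is just an example):

```shell
# `:` does nothing and always exits 0.
: ; echo $?                 # prints 0

# Common placeholder use: a branch that must syntactically contain a command.
if true; then :; fi

# Expand-and-discard idiom: trigger a default assignment
# without actually running anything.
unset PORT
: "${PORT:=8080}"
echo "$PORT"                # prints 8080
```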
I’ve never used ‘curl host:port’ but use ‘curl -v telnet://host:port’ all the time on Linux boxes that don’t happen to have telnet installed. Perhaps he omits the telnet:// on the curl command because I believe the Windows curl doesn’t support it.
Most developers will have Git and Git Bash installed nowadays, which include standard curl along with all the basic useful bash utilities (sed, find, grep, tr, sort, uniq, etc.). We’ve given up on writing anything Windows-specific at work as the format and options are pretty limited. The most we will do is call setx or similar if we need to set up a user environment variable in our scripts. It has saved so much of our sanity and time, as bash scripts are so much easier to write and maintain.
Kind of unrelated but here's a little bit of bash I use for checking this locally:
# Check if the port is available
is_port_free() {
    # Validate: numeric, unprivileged port range only
    if [[ "$1" =~ ^[0-9]+$ ]] && [ "$1" -ge 1024 ] && [ "$1" -le 65535 ]; then
        echo "Valid port number." >&2
    else
        echo "Invalid port number." >&2
        return 1
    fi
    # awk exits 1 as soon as a listener on the port is found, so the
    # pipeline succeeds only when the port is free. Passing the port
    # with -v avoids fragile quote-splicing into the awk program.
    if netstat -lnt | awk -v port="$1" '$6 == "LISTEN" && $4 ~ ":"port"$" {exit 1}'; then
        return 0   # port is available
    else
        return 1   # port is in use
    fi
}
netstat seems to be disappearing from some Linux distros - pretty sure it's not on my current Ubuntu by default, so I have to use 'ss' instead (which does seem to be quicker).
Cool, thanks for the tip! Yeah, I noticed I had to install netstat sometimes. Maybe I'll update my scripts to use ss instead. Do you have the conversion? I guess I could ask ChatGPT, but...while you're here? :) heh
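A sketch of the conversion, assuming the `ss` from iproute2 on Linux (the port 8080 is just a placeholder):

```shell
# ss equivalent of `netstat -lnt`: listening TCP sockets, numeric output.
ss -lnt

# ss has its own filter language, so no awk is needed to match a port:
ss -lnt "sport = :8080"

# Drop-in style replacement for the netstat check:
# succeed when nothing is listening on the given port.
is_port_free() {
  ! ss -lnt "sport = :$1" | grep -q LISTEN
}
```

The main gotcha is that `ss` column layout differs from netstat's, so any awk that matched netstat's `$4`/`$6` fields needs rewriting rather than a straight swap.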
I use curl inside a lot of Linux containers for this matter because curl is more frequently installed, and you cannot easily install new things unless you're root, which is not possible in many cases.
> Another beneficial difference of this over the old telnet approach is that with that, a successful connection would just show a blank screen, awaiting commands. You'd have to know telnet keystrokes and commands to break out of that
Not when it says "Escape Character is 'CTRL+]'" every time you run it.
Or does that depend on how you run it with the windows version?
I've tried many times but never succeeded in breaking out of a telnet session using that key combo. Perhaps my Swedish keyboard layout is to blame, but that combo simply doesn't work. I've had similar issues with the Espressif idf.py tool too...
I worked in a very locked-down environment with no possibility of installing other tools, and I found that the openssl command was always available.
$ openssl s_client -connect <IP>:<PORT>
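One refinement, with example.com and the port as placeholders: piping from `echo` closes stdin, so `s_client` exits after the handshake instead of sitting there waiting for input.

```shell
# TLS reachability check: exits immediately after the handshake
# because stdin is closed by the echo pipe.
echo | openssl s_client -connect example.com:443 >/dev/null 2>&1 \
  && echo OK || echo ERROR
```

Keep in mind `s_client` expects TLS on the far end; against a plain TCP port the handshake fails even if the port is open, so this is a TLS check rather than a general port check.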
The traceroute command on modern Linux and macOS supports TCP, which I've found to be quite useful since you can see where in the path a connection is failing.
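For reference, the flags differ between implementations (example.com and port 443 are placeholders; both forms generally need root for raw sockets):

```shell
# TCP traceroute to port 443: shows the hop where the connection dies.
sudo traceroute -T -p 443 example.com       # Linux traceroute
sudo traceroute -P TCP -p 443 example.com   # macOS / BSD traceroute
```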
The wire format for UDP and TCP specifies 16-bit integer values. As another poster mentioned, you can use OS-level services to map a string to well-known ports, but encoding ports as strings at the wire level would've introduced a lot of serialization complications and performance concerns for parsing packets.
For ephemeral ports, it isn't clear what value there would be for a string identifier vs. the fixed-width integer.
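That userspace string-to-port mapping is easy to poke at: on a glibc system, `getent services` consults /etc/services (the service names here are standard IANA ones; the exact output may include aliases):

```shell
# Resolve a well-known service name to its port/protocol pair.
getent services ssh     # e.g. prints: ssh 22/tcp
getent services http    # e.g. prints: http 80/tcp

# The underlying database is plain text:
grep -E '^ssh[[:space:]]' /etc/services | head -n 1
```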
The big downside of this is that you can get clashes. For example, VNC uses 5901 and up on my system. What if I have some other service that uses ports in the same range?
A more sane approach would be to call those ports e.g. vnc/work, vnc/meeting, etc. so there would be no conflicts and you would know what each port is used for. And it would work even if you don't have write-access to /etc/services.
The reason this hasn't been done is that it would require changes to the headers of TCP and UDP, which would be a massive undertaking (for similar reasons to why IPv4 -> IPv6 is such a pain).
Would both the source and destination ports be strings? Normally, for a client->server packet, the source port is meaningless, so what string would you use? Or would your new protocol support both string ports and integer ports?
Having ports be strings in the TCP header would make the header considerably longer - supposing your protocol had one fixed-length 32-character ASCII string port, you'd be looking at something in the region of a 2x increase in header length.
> A more sane approach would be to call those ports e.g. vnc/work, vnc/meeting, etc. so there would be no conflicts and you would know what each port is used for.
More sane, but absolutely slow. Every gateway, NAT, firewall, router and switch between two devices would need 15x more RAM, and even with that RAM would need to parse strings, deal with Unicode, etc. All of those are slow operations.
Using 16-bit numbers makes identification and routing of packets very quick.
It would also be faster if we referred to files by their inode numbers. But we have filenames, and I don't want to speak for everybody but I think people like them.
> It would also be faster if we referred to files by their inode numbers. But we have filenames, and I don't want to speak for everybody but I think people like them.
Actually, we do refer to files by their inode number, not by their name. The system uses the name to look up the inode number, and then uses the inode number.
Maybe I misunderstood you initially[1], but you weren't proposing to keep the numbers and use a lookup service for the name. AIUI, you proposed to replace the number with a name, no?
In which case, absolutely no one is proposing to replace inode numbers with names, so inode numbers/filenames as an example does not validate or lend support to your proposal for replacing port numbers with names.
[1] I'm on a personal quest to stop misunderstanding people in a way that reduces the strength of their argument. It's not going as well as I thought it would, and I still do it sometimes.
Well, frankly I don't care how the ports are implemented; numbers under the hood would be fine, as long as from the user's point of view they are names. A config file like /etc/services is too simple because I create ports all the time, even from scripts, and I don't want to become root every time to change the config; and other systems that want to connect to ports on my machine can't read /etc/services on my machine.
If you want to do a lookup translation and only use names on your local machine, it's only an hour of work or so to create a program that modifies the services file, and then make that program setuid root.
However, it only works on your machine.
If you want it to work across your local network, DNS SRV records would work.
The proposal to use names and not numbers breaks down when you want the rest of the world to follow suit, because it's not technically possible within IP, only on top of IP.
> The proposal to use names and not numbers breaks down when you want the rest of the world to follow suit, because it's not technically possible within IP, only on top of IP.
This is exactly my point.
> > The proposal to use names and not numbers breaks down when you want the rest of the world to follow suit, because it's not technically possible within IP, only on top of IP.
> This is exactly my point.
In which case, refer to my original response to your proposal - it's not practical to slow down every networking device in the world by a large factor.
It's like asking "why aren't we commuting at 1/3 the speed of light?": it's neither feasible nor practical.
Sure, it's possible, in that the physics involved make it possible, but not practical, because the limiting factor is not the physics involved.
Surely we can build some protocol on top of IP. I see people mention SRV records. These seem useful, but not very user-friendly at this point, so people don't use them.
DNS has SRV records. That is all you need, that is your solution, it exists. That they are not widely adopted in the manner you propose may be unfortunate, but completely upending protocols doesn't seem like a better or more realistically adopted solution either.
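For anyone who hasn't seen one, a sketch of what an SRV record looks like (all values illustrative):

```shell
# Zone-file shape of an SRV record:
#
#   _service._proto.name.    TTL   class  SRV  priority  weight  port  target
#   _sip._tcp.example.com.   3600  IN     SRV  10        60      5060  sipserver.example.com.
#
# Clients resolve the name and read the port from the record, e.g.:
#   dig +short SRV _sip._tcp.example.com
```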
I don’t see much value. Each packet has a source port as well as a destination port. The source port is usually random and is the port the server responds to. Proxies, NATs, and firewalls all handle port numbers very efficiently, and the TCP and UDP headers themselves only use 16 bits + 16 bits on the wire. These are aligned to a 32-bit boundary, which helps efficient hardware and software parsing. There’s no variable length to worry about and waste more bytes on. There are no string encodings like UTF-8.
I wish SRV records were more widely adopted, but they are still just a (string -> integer) mapping layered above the 16-bit integer values on the wire. Different layers.
I was just trying to clarify some muddled discussion about different layers of the networking stack. The original comment about port names instead of port numbers didn't seem to be made with an understanding of the different layers.
GNU tools are generally one command away on macOS. So is nmap, and it takes a lot less than 2 minutes to have it running directly on the host. What's ARM incompatible with? Certainly not Linux distros, certainly not GNU tools, and certainly not networking. Not to mention that for most things that don't run natively, Rosetta 2 makes them work at near-native performance.
You can also run Linux on a VM if you want. With the native macOS virtualisation stack in UTM it takes about 2 seconds to have a full Linux VM up and running.
Yes, ever since the M1 chip it’s just been annoying to work on the M1. Day to day it doesn’t matter much, but it wastes days of time the few times I run into an issue. I wish companies would stop defaulting to Macs for development.
Depends on which model you get and how you configure it. You can easily get up to 96GB of RAM and 8TB of SSD, if that's what you really want. It's damn bloody expensive to do that, but the possibility is there.
So, you can't legitimately say that the memory is always low and the disk is always small. That's just a configuration thing. If you don't configure it right, then I don't hold out much sympathy for you.
Heh, I asked for a Windows laptop. I'm the only one out of like 30 people.
My laptop has 64GB RAM and a 2TB SSD. And unlike a Mac, it drives 4x 4K displays via a USB4 hub. Even with all those extras, it costs less than a MacBook, and I still have some hardware budget available :)
And it came with an unlocked BIOS and no spy software... Windows is basically unsupported by our IT...
Depends on what Mac you get, but 64GB of RAM and 2TB of SSD with support for four 4K displays is pretty easy with the current MacBook Pro. It might be more expensive than your garden variety Windows laptop, or maybe not if you find one of the great deals that are frequently offered by B&H Photo or Adorama.
No spy software will be installed, and the latest version of macOS is making it harder and harder for people to create malware that can easily infect Macs.
[0]: https://github.com/mkirchner/tcping