What's interesting about this is just how bad it is at tying this to the modern world. For instance, just after discussing how the "AT" Hayes command prefix has useful properties for automatic synchronisation of line speed:
"That property is still useful, and thus in 2017 the AT convention has survived in some interesting places. AT commands have been found to perform control functions on 3G and 4G cellular modems used in smartphones."
But modern modems aren't connected over serial. There's no concept of line speed. This property is entirely irrelevant to modern hardware. A much more plausible explanation is simply that keeping the existing interface made it easier to adapt existing codebases to new contexts.
"IoT devices still speak RS-232"
This is actually dangerously untrue! RS-232 used positive voltages for 0 and negative voltages for 1, between 3 and 15 volts each. Attaching RS-232 to a modern IoT device's serial interface is likely to kill the device. What is common between old-school serial and the serial ports on modern devices is stuff that's out of the scope of RS-232 (eg, the 8N1 framing isn't defined by RS-232), and using the RS-232 terminology to describe it is about as accurate as calling it RS-422.
"If you know what UTF-8 is (and you should) every ASCII file is correct UTF-8 as well."
I mean kind of? ASCII is a character set, it doesn't define the on-disk representation. An ASCII file saved with each character packed into 7 bits isn't going to be valid UTF-8 without some manipulation. This is just an odd claim to make given the earlier discussion of varied word lengths and transfer formats that weren't 8-bit clean.
"But in 2005 Linus Torvalds invented git, which would fairly rapidly obsolesce all previous version-control systems"
(cries in Perforce)
Git was a huge improvement over CVS and SVN. Claiming that it rendered everything that came before it obsolescent just suggests massive ignorance of chunks of the software industry.
And yes this is all ridiculously pedantic, but given the entire tone of the article is smugly pedantic it doesn't seem unfair to criticise it on that basis.
>> "IoT devices still speak RS-232"
> This is actually dangerously untrue!
They very much do. Maybe you are thinking of modern low-voltage devices? Where, yeah, you do not use that as much. But Modbus and RS-485/RS-232 are very much alive in the industrial world, where IoT is quite alive.
> But modern modems aren't connected over serial.
They are. Just not necessarily what you consider RS232. Usually when I did this a few years ago it was USB lines. Which is a serial bus.
The AT commands are still used in pretty much all modems. Which you have in your pocket. That is how the cell modem sets up the call to the phone network. The venerable ATDT is still used all the time to set up the call. The AT command set was for setting up the call and controlling aspects of it (like the max speed and protocols you allow). The auto negotiation is one layer out, at the network/modem level. Older modems had a speaker that let you hear it - that distinct tweeting with a buzz and a fade out. You just no longer hear it. AT did not do that at all, other than kick off the process and set up the registers so the modem knew what to do at the negotiation step, which is defined in the spec. Still all there.
Electronics guy here. There are devices that speak actual RS-232 with e.g. -3 V and +15 V as logic levels. And there are devices that also say "RS-232" on their port, but expect logic levels of say 0 and 3.3V (or 5V).
When you use one with the other any number of things can happen, including the destruction of the target microcontroller if the circuit is unprotected (most MCUs dislike sufficiently negative voltages on their ports).
Now if we talk about servers or expensive media gear their RS-232 connectors are very likely top notch (and still very much present as you rightly noticed). Cheap devices are more questionable here.
Yes, the AT command set is used in modern modems. But they didn't choose to use it because of the line speed synchronisation properties.
(Edit: and while the AT command set is supported, modern systems tend to use custom command sets in preference. For instance, Qualcomm modems support QMI which provides a binary interface to the modem and modern phones will tend to use that in preference)
> "If you know what UTF-8 is (and you should) every ASCII file is correct UTF-8 as well."
>
> I mean kind of? ASCII is a character set, it doesn't define the on-disk representation.
Excuse me for being pedantic as well, but UTF-8 also doesn't define the on-disk representation and is just a character encoding of the Unicode character set. But both define a bit-level representation, and at that level the statement is true.
> An ASCII file saved with each character packed into 7 bits isn't going to be valid UTF-8 without some manipulation. This is just an odd claim to make given the earlier discussion of varied word lengths and transfer formats that weren't 8-bit clean.
How these character sets are transported over other protocols is outside of the scope of their definitions.
ASCII defines a 7-bit character set. It's modern convention to encode that with each character as an 8-bit value with the top bit clear, but there's no inherent reason for ASCII to be represented that way, and a single counterexample disproves the assertion that all ASCII files are valid UTF-8.
No, that's not a modern convention. You could argue that any higher-order bit beyond the 7th bit is undefined by the specification, but the order of the bits is clearly defined.
Anything with a defined 8th bit would be an extension of ASCII.
But it was obvious back then that if you used 8 bits per character, the 8th bit would have to be unset unless some further processing was happening.
Even more so in the context of UTF-8. 8-bit bytes had won by then, and in that environment ASCII is a strict subset of UTF-8.
Elsewhere in the thread is an example of a system setting the top bit when storing ASCII in 8-bit files. But storing it as an 8-bit encoding is also a convention rather than a requirement, and any file with an alternative encoding would not be valid UTF-8.
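To make that concrete, here's a quick Python sketch (the text and the packing scheme are just illustrative): the conventional top-bit-clear encoding decodes as UTF-8, but a 7-bit-packed file or a bit-7-set file of the same characters does not.

    text = "Hello, world!"
    plain = text.encode("ascii")            # one byte per character, top bit clear
    assert plain.decode("utf-8") == text    # this is the sense in which ASCII is a subset of UTF-8

    # Pack the 7-bit code points end to end, as some old transfer formats did.
    bits = "".join(f"{ord(c):07b}" for c in text)
    bits += "0" * (-len(bits) % 8)          # pad to whole bytes
    packed = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    # Apple II style: same characters, but with bit 7 set.
    high = bytes(b | 0x80 for b in plain)

    for blob in (packed, high):
        try:
            blob.decode("utf-8")
        except UnicodeDecodeError:
            print("a perfectly good ASCII file, but not valid UTF-8")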
Something I noticed in that document (unrelated to the topic of the RFC) was an alternative way of listing hex values as decimal numbers separated by "/". I really like that (I always struggle with converting A to F in my head). I guess it didn't catch on.
>They developed hardware optimized to run FORTRAN, including machine instructions that directly implemented FORTRAN's distinctive 3-way branch operation [=Arithmetic IF]
Hahaha oh wow. That's amazing. Fun fact:
Arithmetic IF makes handling exit codes significantly less annoying¹, and that's because Arithmetic IF is how we came to get signed integers as exit codes in the first place.
¹at least in theory, in practice half the people implement the codes incorrectly because they don't realize this, and so…
Apple II as well. When displaying text, bit 7 determined if the text was flashing (0) or normal (1). (There was also inverse with bit 7 and bit 6 clear). Since bit 7 had to be set to display normally, some text editors saved their ASCII text files with bit 7 set.
It’s funny how you describe it as “ASCII with the top bit set”. As in, by default ASCII doesn’t have the top bit set. You can have malformed Unicode too. There are plenty of invalid ranges defined in the UTF-8 spec, does that mean that UTF-8 is just a convention and those malformed pieces are counterexamples to the convention?
> But modern modems aren't connected over serial. There's no concept of line speed. This property is entirely irrelevant to modern hardware.
Says who? I'm just commenting here, and on a second screen I'm debugging a microcontroller connected over a serial link to an LTE Cat M1 modem, which is a pretty modern one. It uses a 115200 8N1 line by default, but you can change that speed; it even has speed autodetection. After the modem signals on a separate line that it's ready, you can send it "AT" at your preferred speed until it responds (typically after the third AT).
My apologies, you're right (and I say this as someone who has REed devices with an LTE modem connected over serial, so I really should have known better). But I would certainly still assert that the choice of the AT prefix was for compatibility rather than for line speed detection.
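For anyone curious, the handshake described a couple of comments up is simple to sketch with pyserial; the device path, baud rate, and retry timing below are made-up placeholders.

    import serial, time

    port = serial.Serial("/dev/ttyUSB2", baudrate=115200, timeout=0.3)
    for attempt in range(10):
        port.write(b"AT\r")              # repeated "AT" lets the modem measure our bit timing
        if b"OK" in port.read(64):
            print("modem answered after", attempt + 1, "tries")
            break
        time.sleep(0.2)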
> "But in 2005 Linus Torvalds invented git, which would fairly rapidly obsolesce all previous version-control systems"
> (cries in Perforce)
> Git was a huge improvement over CVS and SVN. Claiming that it rendered everything that came before it obsolescent just suggests massive ignorance of chunks of the software industry.
But this article is for common knowledge. It explicitly says that it does not deal with all parts of the software industry, just the parts that any hacker would know.
So yes, Git is now effectively the only version control system used for distributed collaboration on source code.
Yep. And also because we work with a lot of binary data which can't be merged in any sane way. Being able to lock a binary file you're working on is a necessity.
Fossil is used by SQLite (who created it) and until recently Mercurial was used by Facebook but that's about it.
Aside from Facebook and Google who have their own custom VCS's the only real games in town are Git and Perforce. Which is a bit of a shame because Git is really bad at some things. Especially large file support (hence Perforce continuing to exist), submodules, and partial clones.
> I mean kind of? ASCII is a character set, it doesn't define the on-disk representation. An ASCII file saved with each character packed into 7 bits isn't going to be valid UTF-8 without some manipulation. This is just an odd claim to make given the earlier discussion of varied word lengths and transfer formats that weren't 8-bit clean.
This seems incredibly pedantic to me. I think in context we can assume it is meant, ascii with 8-bit bytes.
> ASCII is a character set, it doesn't define the on-disk representation
ASCII is an encoding, and sometimes people refer to the ASCII charset as "ASCII" too. So it does define on-disk representation. A file with the leading 0s removed is no longer ascii-encoded. Similarly, a gzipped ASCII file is also no longer an ASCII file
RFC 20 [0] defines an 8-bit "ASCII Format for Network Interchange". The older documents are careful to call it "network ASCII" or "ASCII-8" (or the related "NVT-ASCII" for TELNET), but some newer documents (e.g., [1], [2], or [3]) abbreviate the name to "ASCII" in the context of network interchange. (Though I don't mean to refute that "ASCII" or "7-bit ASCII" or especially "US-ASCII" can refer to the 7-bit codeset, only to note that the unqualified name has been overloaded to refer to 8-bit ASCII as well.)
ASCII is standardized in (latest version) ANSI X3.4-1986, not by an internet RFC. That standard did not define an 8-bit encoding of ASCII, only a 7-bit encoding. That the IETF later called an 8-bit encoding ASCII just causes confusion. IMO they should have called it ISCII, for IETF Standard Code for Information Interchange.
The 8-bit "ASCII Format for Network Interchange" (generally called "network ASCII" or "net-ASCII" or similar in older RFCs) is defined in the very first paragraph of RFC 20:
> For concreteness, we suggest the use of standard 7-bit ASCII embedded in an 8 bit byte whose high order bit is always 0.
Meanwhile, ANSI X3.4-1986 defines the "7-Bit American National Standard Code for Information Interchange (7-Bit ASCII)". Even the older standard calls it the "USA Standard Code for Information Interchange", i.e., "US-ASCII".
None of these standards define "ASCII". It's up to us to interpret "ASCII" as "7-bit ASCII" or "US-ASCII" or "network ASCII" or any other form of ASCII according to context; there is no monopoly on the unqualified term.
I'd argue that the need to provide a specification for ASCII as an 8-bit format is pretty strong evidence of the existence of ASCII in other formats :)
Edit: to clarify, there's a spec for ASCII-as-8-bit-with-top-bit-clear, but it came some time after the ASCII spec and if someone needed to define that later it strongly suggests some people were doing it differently
Well, it's by ESR so the smug pedantry is taken as read. It may be stuff every hacker used to know (and I'm old enough to have dealt with everything there other than the 36-bit stuff) but so what? There's a ton of stuff going on these days I'm too old to care about and don't know but younger hackers probably have forgotten they ever had to learn it.
Modems are still connected via serial lines - our embedded project has a modem connected to the MCU via a UART. We ran out of MCU pins for HW flow control but it's essentially the same as how we did things in the 80s with external modems.
I assumed AT commands were still in use either because they're so easy to use or they're easy to test. I can't say whether the autobaud feature is still useful but the modem we use can do it.
I think (some) IoT devices can still use RS-485 which seems similar to RS-232. We're in ag-tech so the sensors we talk to use SDI-12, a shared bus serial protocol with 12v power and 5v signalling. Lots of weird stuff out there!
Yeah. I trust younger hackers — the smart ones - to invest time in what is useful and/or interesting to them. This stuff isn’t terribly useful (in comparison to all the other things out there to learn). Whether it’s interesting depends on your taste. I will add that older folks - myself included - have their idea of what’s interesting colored by a lot of nostalgia.
Be suspicious of anything with the tone of “kids nowadays…”. (Isn’t there an xkcd comic about that?) The smart young hackers will do fine, as they always have. I see teenagers doing wonderful things with x86 assembly on hacker news on a regular basis.
I love history; I trained as a historian. I just don’t think it’s always all that useful in our field, especially if it’s old technical trivia. (Social and economic history may be a different matter.) Sometimes it’s tangentially relevant. Mostly young engineers would be better off investing in learning timeless concepts, such as relational databases and SQL, networking, low level computer architecture, distributed systems theory, and so on. And they will (again, if they’re smart), because those things are useful and deeply interesting and will remain so thirty years from now. Get the foundations down and you will draw on that knowledge for your entire career.
> ASCII is a character set, it doesn't define the on-disk representation.
ASCII is both a character set and an encoding. The encoding specifies 8 bits per character with the high bit clear. Every file encoded this way is also valid UTF-8. That's what the article's statement meant.
56K really was the maximum for telephone modems. This is because the telephone network, internally, transmitted 8 bit samples at 8K samples/sec - 64Kbps - except the occasional low bit got stolen for other signaling, so the net data rate was a bit less (and the result was inaudible to a telephone customer). Instead of trying to synchronize the modem to the stolen bits, it was simpler to just use the 7 that were reliable - thus 56Kbps. https://en.wikipedia.org/wiki/Robbed-bit_signaling
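The arithmetic, spelled out (same numbers as above):

    samples_per_second = 8000      # a DS0 voice channel: 8-bit samples at 8 kHz
    usable_bits = 7                # the low bit was occasionally robbed for signalling
    print(samples_per_second * usable_bits)   # 56000 bits/s -> "56K"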
9600bps as the defacto terminal speed was older than 9600bps modems (those were the cat's pajamas in the mid-late 80s or so). It was simply "fast enough" for a terminal user and "slow enough" for the terminals of the day. I had one (a Volker-Craig 4404 if I remember correctly) that could not keep up at 9600bps and had to be used at something less, maybe 4800bps.
The reason 9600bps was The Speed for terminals is that with 10-bit framing (start/byte/stop) you could send an entire 80×24 screen in two seconds, while not overloading the (fractional-MIPS) host with interrupts or DMA when doing so for many users.
Typically a minicomputer terminal multiplexer batched transmission when possible, so it only had to issue an “all done” interrupt to the CPU when it was done sending a batch of characters it pulled from memory via DMA. Similarly, in character mode it only had to issue a “something’s waiting” interrupt to the CPU when a receive buffer was full or a sequencing character (like carriage return) was received.
This is also why so many mainframes and minicomputers used block mode terminals: Applications use a forms API and all editing/filling happens on the terminal: The computer transmits a full form to the terminal, the user fills in the form including correcting mistakes and possibly even sees some validation (eg numeric versus alphanumeric fields), the field contents are sent to the computer upon an explicit “done” action.
Even for “fully” interactive use many such systems preferred to use a line mode, where the current line contents are only sent upon specific actions, and line editing takes place on the terminal. UNIX style “raw” mode was too high-overhead for the number of users that systems needed to service, even if the terminals cost a bit more.
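The two-second full-screen figure above checks out; a quick back-of-the-envelope version in Python:

    baud = 9600
    bits_per_char = 10                        # 1 start + 8 data + 1 stop
    chars_per_sec = baud // bits_per_char     # 960
    screen_chars = 80 * 24                    # 1920
    print(screen_chars / chars_per_sec)       # 2.0 seconds per full refresh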
Even with a 56k modem until broadband became widely available many of us were stuck with a 28.8 connection, rural phone lines could rarely handle 56 reliably.
I heard Shoemaker or Levy once talk about the modem on Voyager...running at a blistering 10 bps. I keep our old 2400 bps modem around as a curio. The memories are interesting and fun for me, but I get that they may not be so much for some of the other engineers I work with.
My first modem was 1200 baud! This was back in 1988, I think.
I remember when I upgraded to 9600 in the early 90's. It was an incredible upgrade. I also ran my first SLIP connection on that machine (an Amiga), probably around 1993 or 94.
The 4800 bps terminal was directly attached to a computer, no modem. If both were set to 9600, the terminal would miss characters. But most dumb terminals were fast enough to keep up at 9600.
This use of AT commands in 4g modems was really something that made me laugh out loud when I started dabbling with those devices. The muscle memory still worked. The lack of dialing and modem sound was a bit of let-down, though.
It is astounding that the 3g/4g modem industry did not come up with a better way to talk to their devices. Even ISDN had a widely implemented proper API.
LOL. Just to add on to the comment, it doesn't even have to be 3g/4g. If you're doing Wifi on an ESP8266, you're probably also using AT commands as well. Although it's really nothing like the Hayes command set.
In any case, it fills me with equal parts nostalgia, awe, and fear. :)
After using an AT-compatible command set on a WiFi IoT module, AT commands in Iridium don't surprise me; it almost counts as legacy now. Iridium started development in 1993 (30 years ago).
Most 3G/4G modems these days are USB-connected and have QMI (for Qualcomm chipsets) and/or MBIM, both of which are easier to use from software than AT.
They still have a virtual comm port that talks AT though and that may still be required for a few diagnostic things.
Setting up a data connection over QMI / MBIM however is much easier and more efficient than messing around with AT + PPP
The one that made me laugh (or cry, I don't remember) was when I issued an AT command to do an HTTPS download of a binary file over a cell modem. I get wanting to keep using a command set that was familiar, but they stretched it beyond all recognition.
I understand that there were good reasons why it was done. But out-of-band control is so much nicer to work with. One channel for control, one for data.
No parsing of text. No escaping of '+++' to keep a '+++ATH' in the data stream from closing the connection (I wonder how the download time of a file containing only '+' differs from a file with other characters).
And with increasing mobile network speeds, you really want something like DMA instead of reading individual bytes from a serial interface.
Modern modems have "buffer access mode" for this. When you need to receive data, the modem notifies you with a message '+QIURC: "recv",<connection>,<len>', then you send 'AT+QIRD=<conn>,<len>' and you know the next <len> bytes are data, without any parsing.
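A rough sketch of that read loop, assuming a pyserial connection and the Quectel-style URC and command quoted above (the port name is hypothetical and the exact response framing varies by module and firmware):

    import re, serial

    port = serial.Serial("/dev/ttyUSB2", 115200, timeout=1)
    URC = re.compile(rb'\+QIURC: "recv",(\d+),(\d+)')

    def read_chunk():
        m = URC.search(port.readline())        # wait for the "data arrived" notification
        if not m:
            return None
        conn, length = int(m.group(1)), int(m.group(2))
        port.write(b"AT+QIRD=%d,%d\r" % (conn, length))
        port.readline()                        # skip the "+QIRD: <len>" header line
        return port.read(length)               # the next <len> bytes are raw payload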
> Thus, even if the file being transmitted [...] includes occurrences of the escape command string of bits, it is extremely unlikely that any random occurrence of the escape command would occur unintentionally in the environment of the entire escape sequence, that is, the escape command string surrounded by a second of no data on either side.
I don't know what the modern commands look like, but I hope that at least they abolished the in-band signalling so you don't have to do <wait 1 second> +++ <wait 1 second> .
For the newest Quectel BG95 LTE Cat M1 you can either use buffer access mode (AT+QIRD for receiving data) or still use that in-band signalling (direct push mode) with +++. You could also use an external pin for exiting from that direct mode, but +++ is still available and usable when you don't have any more free pins on your MCU.
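And for comparison, the classic in-band escape with its guard times looks roughly like this (a sketch; the one-second silences on either side are the whole point):

    import time

    def escape_to_command_mode(port):
        time.sleep(1.0)        # at least one second of silence before...
        port.write(b"+++")     # ...the escape string itself, sent with no CR...
        time.sleep(1.0)        # ...and another second of silence after it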
> Google Groups is where you can find what has been preserved of the historical USENET archives
I recommend archive.org instead. There are many great collections there with USENET groups saved as mbox files that you can download. Install something like mboxgrep, import into a mail reader, or just read the files as text.
Mid 1980s, trying to connect from one London University computer (at Imperial, a CDC Cyber?) to another (at Queen Mary?), to try some symbolic algebra package that was only available on a Unix minicomputer: spent a good 2 hours just finding the right terminal settings and configuration settings for all the intervening software. Left me feeling I would rather code what I could on an 8-bit 2Mhz 64K machine.
Yep - those were the days when there was a pretty good chance that people doing Uk computer science degrees had a working knowledge of Z80 or 6502 machine code from hacking Elite/Jet Set Willy/Manic Miner on their ZX Spectrums/BBC Micros...
Software has a "built on top of" problem. When you're doing construction, you can either re-use an existing building and all its existing components, or upgrade a few components inside the building, or you can knock it down and build from scratch. Software doesn't do this. Instead it takes the lead pipes, CAT-3 wires, horsehair insulation, and single-glazed windows, and installs them in a new skyscraper. We keep reusing and building on top of shitty old tech from decades ago because it's easier than inventing something better (PVC, CAT-6, blown insulation, triple-glazing), or we're avoiding making some old standard obsolete because some old buildings still use it and the landlords don't wanna pay for upgrades. So we get a working skyscraper, but for some reason we still need to train new engineers on horsehair insulation.
People try to build all new, better software tech all the time! But doing so breaks backwards compatibility with existing stuff, which greatly hinders adoption. So it mostly remains relegated to experimental research projects, and if you're lucky, the best parts of it get incorporated into existing projects gradually over time.
Probably where the "new tech needs to be several times as good as the old stuff if it wants to replace it" rule comes from - to overcome the momentum and resistance to change.
I love how most of the comments here seem to be from people who don't think there is anything valuable to learn from the previous 60+ years of computer evolution.
What you're missing is that the reason WHY lots of this "shitty old tech" is still around is because it has stood the test of time. Nobody keeps all of this around in modern technology out of pure nostalgia. This stuff is still useful because it largely solves a problem elegantly, and/or provides a useful abstraction with more modern systems. Or because there's simply no escaping the need for compatibility with existing systems.
Yes, this "shitty old tech" has warts. ALL of your code will have them in a few decades too.
I'd argue a counterpoint to this, and it's that replacing things like software is expensive and time consuming. Take for example a CNC machine. The whole hog is a pricey piece, and you can't swap it out for some competitor without a lot of work, usually (in this case) in the way of all the program files that have to be retranslated, operators retrained, etc. AND THEN you add to that the software itself. Some of these machines have a "more modern" (read: at least from 1990) computer. Which, while great in some aspects, still cannot have its software upgraded, due to compatibility. And so, these machines continue to put thousands of operating hours on the clock. The thing is, the software is usually pretty bad. Operators have just gotten used to the bad, and work around it. And it's only gotten worse as time goes on. Some ancient Windows OS, needing to connect to a newer version as the rest of the business modernized. And worse, that ancient OS still having some network connectivity in some cases.
It's not that the tech is great, it's that it's become so ingrained into the business that replacing it would be harder than starting a new business with the new tech. And so it continues to operate well beyond its lifetime, and that's how you end up with COBOL and Fortran being actively used in industries in 2023.
And... let's talk about the modern standards *cough* USB. But wait, did I mean USB 2.0, or maybe 3.1, but maybe I meant 3.2+SuperSpeed, but wait there's also USB 4.1! Old standards haven't been entirely great, and that's why we have iterations (my USB example was really a rant more than an example), but we have unfortunately built a lot of bad standards that we still very heavily rely on and won't easily get away from. The overhead of TCP for example, or how about IPv4?
I blame business though; we have much higher velocities in modern development than we did 50 years ago. Having years to plan a thing vs. the two weeks it seems to be today. Given a good time frame, software and standards can be more concretely designed, leading to less shitty new tech, but I'm a dreamer and it will never happen except in small-scale areas.
As someone born in 1997 and working in the software industry, one thing I’ve learned is that the discipline of software engineering is sometimes mostly about understanding the history of a system. Love reading stuff like this because sometimes it really makes some concept click!
It's funny, I work in embedded systems and much of this is still relevant to me. RS-232 itself (as in the specification for the physical layer) isn't used as much, but the underlying UART comms protocol is still alive and well. A Beaglebone dev board, for example, can be booted by uploading the bootloader over XModem.
And a lot of smaller microcontrollers still use a UART port and serial protocol to reflash the device. It's the lowest common denominator for hardware interface and the simplest thing that works.
Hah, to me flashing an MCU over UART is a luxury. When I first started working with MCUs I needed to use a chip programmer that cost several hundred $ and flashed using a high-voltage (relatively speaking) parallel interface. Then there were serial programmers that used a proprietary protocol to talk to the chip. And then finally we had bootloaders and self-flashing MCUs that made this whole process sane and affordable!
And of course to our ancestors, we're spoiled kids with our electronically erasable ROM ;)
I remember being blown away as a modern teen seeing an old EPROM being erased and learning how to do it. It felt more hacker-like than today's "just push a button".
Actually, there was an even older style of tty interface derived from telegraph circuits and called "current loop" that the ASR-33 originally used; in the 1970s dual-mode ASR-33s that could also speak RS-232 began to ship, and RS-232 eventually replaced current loop entirely.
Current loop is used where the length of the cable could be huge. If you put in 20mA at one end, you know you will get it out at the other irrespective of length, which is not true of voltage.
> FF (Form Feed) = Ctrl-L […] Many VDTs interpreted this as a "clear screen" instruction. Software terminal emulators sometimes still do.
This was commonly used on Usenet as a spoiler tag, because the rest of the message wouldn’t be displayed until the user pressed PgDn (or equivalent) past it. It was visible as “^L” on the screen, so you knew there was more coming up, and the rest of the screen was cleared, until the ^L scrolled out of the screen.
I'm mentoring a couple guys on my team, both born solidly after the year 2000 by their own anecdotes, building them up as Linux infrastructure engineers. I covet the video cameras we use on our remote meetings, because I can tell when to stop talking about the history by their body language drooping.
I've had a few a-hah moments, using connections to the same history ESR scribbled above, but largely I am finding that regaling the younger people with details from before they were born is not interesting to them, because they were not here for it and have no connection to it.
Old geezer suggestion: Try to tell them stories about people and the situations they were in - with the old hardware and tech as (at most) props and costumes. If they're that young, and not too weak in the social skills, they should pick up pretty soon that you're mostly teaching 'em about corporate culture, handling managers, gracefully dealing with sudden problems, etc.
> but largely I am finding that regaling the younger people with details from before they were born is not interesting because they were not here for it and have no connection to it.
I don't understand why this attitude is so pervasive nowadays. in my opinion, curiosity about one's craft's history is a crucial part of the hacker spirit. I can't imagine, for example, never having used a landline, yet not finding phreaking to be incredibly interesting to learn more about, even though it's about as far from practical knowledge today as it could be.
did we mess up somewhere and fail to convey this aspect of the hacker spirit to the youth? or are things just changing too rapidly?
I think things are more competitive and fast-paced nowadays than when we entered this space and so history and all of the details that come along with it aren't necessarily their first priority. I don't blame them.
For example,
When I was first getting into computers I felt like I had all the time in the world to perfect writing batch files, and I could pore over a single magazine at a leisurely pace for weeks until the next one came out.
Nowadays every time you use your face to unlock your magic pane of glass there's a language, a new framework, a new more impressive AI you can build your business on or some creepy guy trying to scan everyone's eyeballs, it's just different.
I think there's also an aspect of computer programming having become more of a fashion than a craft, in many cases. game development used to be very difficult and people who worked in that field pushed the limits of what hardware and software could do. nowadays, free-to-download general-purpose engines make getting into "game development" extremely easy. web development became its own whole thing by means of frameworks and so forth, where now to get into it, you're more learning how specific frameworks and such work, rather than underlying engineering principles.
how many people who program computers professionally for big tech companies are "hackers", these days, compared to one decade ago? two? three? surely the ratio has only decreased over time, as more and more people get into the field for purely pragmatic reasons (income, clout, etc.) than taking part in/engaging with "the hacker spirit". for better or worse, gates have not been kept, and now things are completely different than they once were.
30+ years ago I was programming for companies doing business software - ERP etc, and there were very few 'hackers' in the ranks. These guys and gals must have enjoyed programming at some point but I guess the software was so boring it just leached the joy out of it.
Certainly by the mid-late 80s there were far more day job programmers around than hackers.
In one company I managed to find a few fellow hackers, but the next one I worked at - nothing. It was awful.
Luckily after that I managed to get more interesting work for a while, but then I spent too long in the wrong part of Oracle and again, I just couldn't find fun in the coding we were doing and no-one else could either.
I think true hackers or enthusiasts have always been in the minority, at least since the late 70s I'd guess.
And in my opinion, the most crucial part of the hacker spirit is ESR's own edict: "No problem should have to be solved twice."
So I would convey the exact opposite aspect of the hacker spirit to the youth. Don't feel the need to read anything about history that isn't directly related to what you want to hack on. Black box liberally and dangerously and deal with the consequences as they appear. If reading history is what you want to do with your spare time, great, but let's be clear about what the actual heart of this avocation is.
I mean, if you go for OSCP you end up in a virtual environment which many have gone through before you (some with success). It is up to you if and how you approach such a puzzle. And if your itch is writing X in Rust or Y in Python, why not? It is your choice... but you don't have to.
> did we mess up somewhere and fail to convey this aspect of the hacker spirit to the youth? or are things just changing too rapidly?
I think that young people today don't perceive themselves as having any free time, and so they treat new things as they're taught in school: optimize the solution, get your grade/get paid, and go back to the precious little free time you have available to enjoy your life.
The disappearing free time is partially true, as the reach of the school environment grows and homework/extracurricular burdens increase. It's also partially imagined, because most young people burn 4-9 hours per day on their phones. It's not their fault, since phones are designed to be addicting for young people specifically. However, the end result is that the class of people who should be the most restless, virile, excited, and driven in society--kids age 12-20--is the most over-scheduled and exhausted.
If you took a modern young person and stuck them on a reservation where the level of technology and pace of life matched the '80s and '90s, they'd start behaving like young people again.
In addition to how many distractions there are now - YouTube and streaming services, and much better games than in the 80s - the barrier to entry is so much higher.
I had a C64 and only the docs that came with it, until I eventually found a couple of books on assembly, one not even for the C64. But even with that scarcity of info I was just desperate to know how it worked and how to make it do things.
But look at how complex a computer is today! Even though there is endless info for free, it's just too difficult to get something fun going.
Early PCs were still understandable, but todays GPUs etc are just a different world, and expectations are sky high when you see the current gen games. Even an 'indy pixel game' is no laughing matter to create.
it is very strange. i think the truth is that most people are just there for the money. they chose a college major not because they found it interesting, but because they were told that it pays a lot. i was born in 1996 and have been working as a software engineer for a little over 4 years. i got a degree in CompSci because i was always interested in it (first programming experience in 6th grade, before then i was familiarizing myself with other things i could do with the computer, like record and edit audio/video, make frame-by-frame animations).
i find the history very interesting and almost vital to actually understanding some things. i have a colleague in his 60s who likes to ramble on and on about history of computers and programming and the like, and i am grateful for it.
I learned about punch cards, when I discovered my high school in Scotland, in the 1980's, had card readers that were lying on a shelf unused for years.
It was the Physics teacher who ran a computer club I'd joined in the school.
We got to use those old HP "calculators", which were big bulky machines with a display consisting of a single line of (I think) 80 LED characters; they ran a form of BASIC and had a built-in cassette tape reader (and writer) to load or store programs.
The card reader attached to these clunky big things. I asked the physics teacher why they weren't used - I remember him saying it was because they weren't working. My 13 year old mind kind of fixated on them; the cards were in the same format as punched cards, except you marked them by drawing a single diagonal line across one of the rectangles which represented an ASCII character; each card had an array of such characters and you wrote a complete BASIC program by correctly marking each card. It took many such cards to write a program.
Long story short, I got the card readers working again. The physics teacher just nodded his head - he was one of the old-skool kinda guys with a beard, who showed they were impressed, by displaying such stoic reactions. It was at that point that I learned about punched cards; basically the same principle but using punched holes rather than marking the individual rectangles.
As I was born in 1969, I feel I had the privilege of growing up during the various eras the article speaks of; by the time I attended high school I was growing up with Sinclair ZX81s (my first home computer), VIC-20s, C= 64s, Amiga 500s (my first Amiga), and Amiga 1200s. My first PC was some 386 thingy I bought from a pal. This article reminds me of a lot of common knowledge I'd completely forgotten over the ensuing decades - such nostalgia!
There used to be a fairly standard history that started with Jacquard loom punched cards, moved along to Hollerith's census processor, then through the early electronic versions of same, perhaps via a flashback to the telegraph. Then the first valve-based stored program computers after WWII ("incredibly unreliable"), and the transistor revolution which made reliable computing possible, via punched cards and DECtape. Then command-line timesharing, graphic systems, workstations, GUIs, and the rest, through to the 90s.
It felt like a very logical and connected progression. Anyone in the business would be familiar with it, if only in overview.
The UK had a preview of commodity computing with its cheap 8-bit micro scene in the early 80s. In the US computers remained far too expensive to be widely used.
That seemed to change in the late 90s. Computing came to mean desktop and eventually handheld devices connected to the Internet. There were - still are - supercomputers, but they're a tiny niche.
It became about commodity hardware appliances for commodity users with commodity software and tooling. The software also became appliance-ised.
In a sense computing stopped having a history, in the same way other consumer goods like fridges, cars, and microwaves don't have a history. (Of course they do, but no one thinks about it while using them or buying them.)
It was a huge and under-noted cultural shift.
The next shift - AI - is already happening, and I'll guess we'll see the same lack of interest in "manual" computing history from the 1990s to the 2020s when AI-assisted computing becomes mundane and pervasive.
Where is that history published? I never encountered the full thing, though I picked it up in bits and pieces. Is there a serious, well-written history someplace?
> (Of course they do, but no one thinks about it while using them or buying them.)
Wrong crowd! :) Every time I see a microwave, I think of the chocolate-melting-in-pocket story. Cars have endless associations. Fridges - I wonder how efficient ice boxes were, how long they stayed cold, etc. (obviously the convenience of having the 'cold' delivered over powerlines is hard to beat).
All well before my time, but still fascinating to read and learn about. Even when I was in my teens, reading in Steven Levy's "Hackers" about the Tech Model Railroad Club at MIT and the genesis of modern hacking culture was addictive to me. Being uninterested in history is a fault of the individual, not necessarily an age concern.
I was born in 1991 and my elementary school growing up had Apple ][s and in middle school (2002–2005) we learned to use Hypercard in computer class—so my baseline level of knowledge about computers from before I was born, before self-directed learning came into play, is more than kids these days, at least.
Apple used to be heavily invested in getting its computers into public schools in the US, kind of like how Google is with Chromebooks now. we didn't get candy-colored iMacs until I was in like 4th grade.
That seems like claiming that studying history (from before one was born) shouldn't be allowed. I learned some very nice music from my parents, and a great series (The Prisoner).
I suppose they are at work to work, and not to listen to you babble on about what you find interesting. Every minute you talk about your 9600 baud modem is either a minute later they need to work, or a minute less work they get done that day. Nobody needs to know any of this to run a k8s cluster, or whatever.
The ASCII table on Wikipedia (from the military standard) is even better at showing the internal structure: https://en.wikipedia.org/wiki/ASCII
Maybe one could make the perfect ASCII table by grouping the bits as (2, 5) rather than (3, 4) - then you can see the "shift key clears bit 6, control key clears bits 6 and 7" principle for the letter keys.
I am a fan of the 4-column style, which is what you are describing: separating into 2 bits & 5. I think it's a lot more intuitive for seeing the modifiers in action.
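A few quick checks of that structure (bits numbered 1-7 as in the old standard, so "bit 6" is the 0x20 column and "bit 7" is the 0x40 column):

    assert chr(ord('a') & ~0x20) == 'A'        # shift: clear bit 6
    assert chr(ord('l') & ~0x60) == '\x0c'     # control: clear bits 6 and 7 -> Ctrl-L is FF
    assert chr(ord('[') & ~0x60) == '\x1b'     # Ctrl-[ is ESC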
> don’t know the bit structure of ASCII and the meaning of the odder control characters in it.
If it was up to me, remove the more useless old codes that take up precious 1-byte UTF-8 codes and replace them by common characters. Like "Record Separator": if it's that useful to have a record separator character, why aren't we using this one instead of e.g. commas for comma separated values?
I find that the degree symbol (°) is a glaring omission from ASCII.
> Like "Record Separator": if it's that useful to have a record separator character, why aren't we using this one instead of e.g. commas for comma separated values?
I've done ETLs to/from systems that do use these control characters. It's a joy compared to CSV. I have nothing to escape and no complex parsing logic. Embedded CR/LF-- no problem. Fields containing commas-- no problem.
We should be using these control codes for their purpose but nobody knows about them anymore.
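A tiny sketch of what that looks like in practice (the separator values are the standard ASCII ones; the records are made up):

    RS, US = "\x1e", "\x1f"        # ASCII Record Separator and Unit Separator

    rows = [["Alice", "123 Main St, Apt 4\nUnit B", "42"],
            ["Bob", 'says "hi", with commas', "7"]]

    blob = RS.join(US.join(fields) for fields in rows)

    # no quoting, no escaping: commas, quotes and newlines pass straight through
    parsed = [record.split(US) for record in blob.split(RS)]
    assert parsed == rows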
I love using FS and RS in my shell scripts, especially when I'm processing text data exported from a database. As long as the data doesn't include binary data (such as images), I can be pretty certain that the data doesn't include FS and RS characters, since they don't appear on a keyboard -- therefore I can preserve things like line breaks in text fields, and don't have to worry about whether someone inserted a " | " character in the contents of the data.
Of course a pre-pass is to strip out FS / RS just to make sure, in case they got in accidentally, and to also know the purpose of the data to ensure that they shouldn't be in the text. But so far that has made my scripts a lot more reliable. The other alternative is to do the light-weight processing using a heavier scripting language that can deal with structured data natively, but setting FS and RS is oftentimes a bit more expedient for me.
It pains me greatly that Hive still can't ingest FS/RS-separated (or \001/\002-separated) data nor does it correctly handle CSV because someone hardcoded \n as the record separator so deep they can't make it configurable.
The section on hardware context reminds me of what is probably a little-known fact these days -- that Oregon Trail (amongst many other games[0]) was originally designed as a teletype game.
> Eventually ftp was mostly subsumed by web browsers speaking the FTP protocol themselves. This is why you may occasionally still see URLs with an "ftp:" service prefix; this informs the browser that it should expect to speak to an FTP server rather than an HTTP/HTTPS server.
In the earlier days of the web, it was not totally uncommon to see people hosting their home pages on an FTP server rather than an HTTP server. Netscape and IE both spoke FTP just fine, and for some people, access to an FTP account was more convenient than paying for separate web hosting. Or settling for GeoCities. If the page had a lot of images, it was quite a bit slower than HTTP though.
Firefox supported FTP until a few months ago. Not sure which version, but it is fresh in my memory: I had some such URLs in my history that couldn't be displayed anymore.
how to convert between the above 3, and between them and decimal
bit twiddling
the difference between ASCII and binary files (even pre-unicode)
the fact that a source code program compiled on one processor architecture will not run on another without recompilation on the target, apart from cross-compilation - at least in most cases this was true, though there might be differences nowadays, for apple products, java and other bytecode, etc.
endianness
...
I have actually talked to various people who did not know all of the above topics, although not necessarily all in the same person.
This one is a bit confusing. In DOS/Windows land, files are just files, always binary. The distinction between "ASCII" and "Binary" only exists at the time you call "fopen" (a C function), then failing to open in binary mode causes it to mangle your data by inserting CR before every LF.
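Python's text mode does an analogous translation and is an easy way to see the difference (Python folds CRLF on read everywhere, whereas C's fopen only translates in text mode on Windows):

    with open("demo.txt", "wb") as f:
        f.write(b"line one\r\nline two\r\n")

    print(open("demo.txt", "rb").read())   # b'line one\r\nline two\r\n' -- bytes untouched
    print(open("demo.txt", "r").read())    # 'line one\nline two\n'      -- CRLF folded to \n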
Not all operating systems have file systems that take the Unixoid view that "a file is just a hunk of bytes on disk with a name" (a convention DOS, Windows, and Mac also use). Some file systems do distinguish between text and binary files; others have fixed record sizes, or provide fixed-record-size files in addition to random-access hunks of bytes. Mainframe and DEC mini operating systems come to mind.
I meant text files vs. binary files, as in, you can open text files in a text editor and see their contents, but if you open a binary file in a text editor, you get what appears to be garbage, but that is not an error, because of all the non-printable characters they often contain.
But at one level you are right, of course, files are just files. It is the higher layers of software and humans that give different interpretations to them, such as text vs binary.
I still remember my Google buddy talking about being aghast that his intern (a CS major who obviously passed with flying colors and went to Stanford or Yale, I don't remember which) didn't know what an IP address was.
I've conducted interviews of professed Unix devs or sysadmins, some years ago, who didn't know what the setuid bit was, or confused it with the sticky bit.
Heh. These days, a computer science degree is essentially equivalent to a coding degree. But depending on the decade and curriculum, it was traditionally a branch of mathematics and one could theoretically go all the way to PhD while never actually touching a practical computer.
There used to be a saying: Computer science is no more about computers than astronomy is about telescopes.
And that is why in this article, ESR has titled it, Things Every _Hacker_ Once Knew. Hackers usually have some sort of passion for the technology itself and want to understand ALL of the practical details and history of a thing. Hackers are _very much_ about computers.
Something I always thought was so neat about ASCII was that upper to lowercase was just a flip of the sixth bit away. Now I'm curious what that process is for Unicode, or more specifically UTF-8. Is it all table lookups?
Outside of English, upper and lower case isn't that simple.
Consider German. Lower case letter "ß" was converted to two upper case letters, "SS" or "SZ", until fairly recently, when a new character "ẞ" was officially added. Much software still uppercases "ß" to "SS". For example:
$ python3 -c 'print("große".upper())'
GROSSE
Consider Turkish. Upper case letter "I" converts to lower case "i" in English, but in Turkish the same letter converts to lower case "ı" (no dot). Lower case letter "i" converts to upper case "I" in English, but in Turkish the same letter converts to upper case "İ" (with a dot).
So you have to know which language you're using. It's not enough to know the character.
Yes, but even in the era of ANSI code pages, it was that way for many "accented" characters and non Latin alphabets. And you still needed a little more in ASCII than blindly flipping bits or you'd turn your numerals into control characters.
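To make the contrast concrete with plain Python string methods (which are not locale-aware):

    print(chr(ord('a') ^ 0x20))     # 'A'        -- the old one-bit ASCII trick
    print("große".upper())          # 'GROSSE'   -- ß expands to two letters
    print("istanbul".upper())       # 'ISTANBUL' -- wrong for Turkish, which wants 'İSTANBUL'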
I worked with data acquisition systems around 1998. RS-232 was still king (though USB was quickly displacing it), using DB-9 connectors... I always assumed DB-25 had always been for printers' parallel connectors :D it would have been really confusing for me to get RS-232 on one of those.
I think the author doesn't mention there were other protocols as well, like RS-485 which had a much greater range in terms of distance (and I think it also used DB-9 though if you connected it to the wrong protocol, one of the sides would definitely go up in smoke :D ).
We had WYSE dumb-terminals with orange plasma displays in my high school, with an empty slot to upgrade them to smart terminals. The keyboards were very reliable. They saw a lot of use for "WordPerfect 1.0 for UNIX".
The most unfortunate thing was that if you didn't stagger logins, the poor little minicomputer they talked to would slow to a crawl. Try getting 40 people to sign in at once, and the mini would become unresponsive for all or most of a 45-minute class period. But in groups of five, you could get ~50 people signed in in a few minutes.
Actually before the IBM PC the parallel printers used 36-pin (Centronics) connectors and the serial RS-232 interfaces used 25-pin DB-25 connectors.
IBM decided to use cheaper and smaller connectors than in the standards, so they replaced DB-25 with DB-9 (in PC/AT) and the 36-pin printer connector with a DB-25 with inverted gender, to avoid confusion with RS-232.
Due to the importance of the IBM PC, these smaller connectors have become de facto standards.
Dell had a number of personal computers around the '95-ish era that had power supplies with an ATX-compatible connector. But they weren't electrically compatible, so if you swapped in a regular PC power supply (or vice versa) you'd let the smoke out of a motherboard.
> For most hackers that transition took place within a few years of 1992 - perhaps somewhat earlier if you had access to then-expensive workstation hardware.
My library used these teletype machines into the late 90s, maybe even the early 2000s. Don't remember exactly when the transition was, but I remember using them in middle school or early highschool.
I actually remember thinking how clunky the web interface was when they upgraded, how you had to make sure the focus was in the text input box. It didn't "just work" like the older system.
>In fact, until well into the 1980s microcomputers ran slowly enough (and had poor enough RF shielding) that this was common knowledge: you could put an AM radio next to one and get a clue when it was doing something unusual, because either fundamentals or major subharmonics of the clock frequencies were in the 20Hz to 20KHz range of human audibility.
Oooh boy is that wrong. Even the IMSAI ran at 1 MHz or better.
He's kinda wrong, but not when you take into account RF modulation. The first "useful" application for the Altair was having it play music by making it run various delay loops at different speeds that generated RF frequencies that could be picked up as tones on a nearby AM radio.
You could even faintly hear the CPU or the video chip working on old bitty boxes like the Commodore 64, VIC-20, or TI-99/4A because it would leak through to the audio portion of the RF signal to the television. VICE even has an option to emulate this for a more authentic feel of using an old Commodore machine.
In embedded systems, serial UART protocols (which may or may not use the RS-232 physical voltage levels - often not) are very much still alive.
When you bring up an embedded Linux device, be it based on an old or a brand new SoC the first thing to get running is the UART connection, first to the bootloader and then to the kernel.
Good old RS-232 - my first programming job was working for a company called Digiboard that made RS-232 concentrators for companies that were running hundreds of terminals over that protocol. You’d be astonished at how many terminals some of our clients had networked to a single machine over a serial port.
Speaking of this, maybe someday the line between volatile "memory" and non-volatile "storage" (currently HDD and flash) will cease to exist. Then we no longer have to ask if a file is loaded into memory or it's at rest in storage.
I don't understand the bit about the AM radio. Surely he must have meant something in the range of 0.5-1.5 MHz, which is the typical band for AM. I don't think a radio broadcast receiver could pick up an EM signal in the sub-20 kHz range.
If it had an audio amplifier, it could. I remember when I could hear some radio station from simple computer speakers. The catch: the transmitter was on the other side of the street, and if I remember correctly it was an FM station - how a simple audio amplifier could pick this up, I have no idea, but strange things can happen with harmonics. It was barely audible and very low quality though.
My PC speakers do that. I'm pretty sure that's the cable between the sound card and the amplifier acting as an antenna and picking up the broadcast as interference. Then it gets treated by the speaker system as if it was the actual audio signal. Here's the thing, though: either AM or FM, radio broadcasts are definitely not in the human hearing frequency range.
EDIT: Thinking about it, maybe what he meant is not that the carrier signal is in the sub-20 kHz range, but rather that the switching happens at around that frequency. For example, gates opening and closing, modulating the amplitude at audible frequencies.
I gave this as an example of what COULD be picked up. Audio-frequency EM (from "processors" running at measly hundreds of kHz, with actual operations at tens of kHz) picked up by crappy audio amplifier in some old radio - that could happen very easily.
By the mid-1990s, this was in the FM broadcast range, as a sysadmin I knew at the time demonstrated with an open-cased tower he was working on. (Perhaps perpetually.)
backspace is still used for formatting bold or accents in intermediate stages of man pages.
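Concretely, that nroff-era overstrike convention is "c BS c" for bold and "_ BS c" for underline; a minimal sketch of generating it (tools like col -b strip it, and pagers like less render it):

    def bold(s):      return "".join(c + "\b" + c for c in s)
    def underline(s): return "".join("_\b" + c for c in s)

    print(repr(bold("NAME")))        # 'N\x08NA\x08AM\x08ME\x08E'
    print(repr(underline("name")))   # '_\x08n_\x08a_\x08m_\x08e'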
Note that SI, SO, ESC are ultimately defined by ISO 2022, which despite its common mindshare is about more than codepages. Properly supporting them was needed until "assume UTF-8" finally became common, which is very recent. Nethack is already mentioned in TFA ...
i actually liked working with rs232 back in the early 80s. give me my trusty breakout box, a soldering iron, my own hand-crafted terminal emulator and i was happy as larry. it was really hard to do anything destructively wrong - at worst the thing just wouldn't work. certainly a lot easier to deal with than things are today, should something go wrong.
I would still use RS-232 if I needed to connect something with it, since I still think it is good (although many of the messy things are unfortunate, but it also has many good features). If I designed a new computer, it would probably have RS-232.
I still use Usenet (and some other people still use it too), with NNTP. In fact, I started using Usenet only a few years ago. The Usenet provider named "Eternal September" is the one I currently use, and it was named after the September 1993 event mentioned there. I don't use Google Groups, and all the spam I have seen on Usenet these days comes from Google; I hate it, and some other people say similarly that they would reject messages from Google.
I also used FidoNet, but that was longer ago, and it was with internet. (It might still be in use; I am unsure.)
Of course I know of FTP (I don't use it and I think that HTTP is much better), Gopher (I still sometimes use), etc.
Mostly ANSI terminal emulators are used today. xterm also supports Tektronix emulation, although as far as I know nobody uses it these days (I have only used it once).
I know of the games before GUIs, and even still play some of those older games these days (and have written some).
Of course, I know what ASCII and UTF-8 are, although I mostly don't use UTF-8; I mostly use purely ASCII (and when needing non-ASCII, prefer character sets other than Unicode). Some of my (modern) designs do use the ASCII controls for meanings similar to their original uses. Actually, it is not only myself; even a mode in the SQLite command shell, and some other formats I have seen, are able to use the ASCII separator controls for those purposes. It is rare, but there are a few people who have made use of them.
Also, you will sometimes see notations such as "^H" and "^W" where strikethrough is unavailable. CTRL+G for the bell is still honored by the terminal emulators I use, at least (and the NNTP client that I wrote uses it to ring the bell once the download of a set of articles is complete). Sometimes when we play GURPS over the computer, we even use the bell to notify the other side that we are ready.
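The "^X" caret notation is itself a small convention worth spelling out: it names the control character you get by flipping bit 6 of the letter, so ^H is 0x08 (backspace), ^W is 0x17, and ^G is 0x07 (BEL, which most terminal emulators will still ring or flash on). A quick sketch:

    def caret(letter: str) -> str:
        # ^X denotes the control character whose code is the letter's code XOR 0x40
        return chr(ord(letter.upper()) ^ 0x40)

    assert caret("H") == "\b"      # ^H, backspace
    assert caret("W") == "\x17"    # ^W
    assert caret("G") == "\a"      # ^G, BEL
    print("download complete" + caret("G"))   # ring the bell, as an NNTP client might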
And, of course, CRLF is still used when working with DOS programs (fortunately, vi can be switched between CRLF and LF-only modes, which is useful when editing files for DOS programs).
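(In vim the per-file setting is the 'fileformat' option - ":set ff=dos" or ":set ff=unix" - if I remember correctly. The conversion itself is trivial outside the editor too; a sketch:)

    # Normalize line endings when exchanging files with DOS programs.
    def to_unix(data: bytes) -> bytes:
        return data.replace(b"\r\n", b"\n")

    def to_dos(data: bytes) -> bytes:
        return to_unix(data).replace(b"\n", b"\r\n")

    assert to_unix(b"A\r\nB\r\n") == b"A\nB\n"
    assert to_dos(b"A\nB\n") == b"A\r\nB\r\n"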
I have not used 36-bit computers or punched cards, although I have used emulations of punched cards. I also dislike the syntax for octal literals in C (starting with "0o" instead of just "0" would be better, or starting with "8#" like in PostScript would be another way), although I do sometimes use octal (rarely; I use hexadecimal much more often, and so does everyone else).
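For comparison, the notations in question, sketched in Python (which already uses the "0o" prefix proposed above); C's 017 and PostScript's 8#17 denote the same value:

    assert 0o17 == 15            # explicit octal prefix
    assert int("17", 8) == 15    # parsing the digits of the PostScript-style 8#17 form
    assert 0o755 == 493          # the classic Unix permission bits
    print(oct(493))              # -> '0o755'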
I had to connect to a Cisco router with a serial dongle (Windows->USB->serial->Cisco) not that long ago. After successfully configuring it, I could change the configuration in exactly the same way over the network (Windows->encryption->TCP->Ethernet->decryption->Cisco), but underneath, it's all teletypes talking to each other[0].
My $work has an R&D lab with hundreds of the enterprise/carrier network appliances we sell, and ALL of the automation for the whole thing happens over RS-232 serial.
(Something that can easily be true, even while he has done some small number of valuable things. Though I do claim this essay is not one of those things)
I remember once that I was at a Boston Perlmongers meetup, probably early January 2005, and commenting that someone needed to tell ESR that he wasn't God's gift to women.
Everyone laughed. I went on to share a cringy story of ESR hitting on my then-wife. Which inspired a woman present to speak up and share a story of ESR hitting on her. Which led to the next, and then the next, story, until every last woman in the room had shared her own ESR story.
I was astounded. I'm hardly the biggest fan of political correctness. But to the extent that political correctness is a backlash, ESR represents what it is a backlash against.
The thing is that his advice offered there isn't even particularly bad. But he'd just taken the importance of being confident when hitting on women to a ridiculous extreme. With no self-awareness.
My ex found an unattractive cripple introducing himself. She didn't know who he was. His pass was essentially, "I'm worth $30 million. You may touch me. I might fuck you later." Her response boiled down to, "Eww."
She was shocked to later find out that his first line was actually true at the time. (This was a few days after https://lwn.net/1999/1216/a/esr-rich.html.) But, that notwithstanding, her response remained, "Eww."
No one actually believes this in practice. Everyone has a threshold of gross, spiteful or violent behavior in others beyond which everything they do becomes tainted. If ESR didn’t consistently use his platform to promote hatred, maybe he’d still be on the near side of that threshold for me and most other people.
I do. My threshold for judging content is: is the content valuable to me. This is a function of the content, not the personal history of the author or their associates.
A more relevant example would be: a scientist creates an algorithm and a paper that explains it. This scientist was also a serial killer. The algo and paper retain their value regardless of how many murders the author commits.
Their past deeds might be an interesting bit of trivia, but has no bearing on the value of their work.
I believe you, even though some people do not. Some people can make up many things, which can be good and bad regardless of what else they may have done. The article can be judged by itself rather than according to whatever else the author did.
(I am one who does not believe that "everyone has a threshold of gross, spiteful or violent behavior in others beyond which everything they do becomes tainted"; or, at least, it does not apply to myself.)
I read the article and I wouldn't describe it like this. It has a lot of commentary from the author, and a lot of pushing of their own ideals about what the open source community is supposed to be.
Like their own controversial ideals that the open source movement should be "not racist" and "not misogynistic"? Or do you disagree, and think the open source movement should be as racist and misogynistic as Eric S Raymond truly is?
Your "observation" that "I read the article and I wouldn't describe it like this" is a statement about your opinion, which is wrong. What is the basis for your factually incorrect opinion about him? Do you have any idea who Eric S Raymond is? Have you ever researched his long record of racist and misogynistic opinions and public statements, or read his blog? Have you ever met him in person, and talked with him at any length, face to face? Have you actually known him personally for more than 40 years, like I have? Or are you just reflexively being the devil's advocate for somebody you have no knowledge about, arguing in support of them even though you don't actually believe your own claimed "observations"?
Stop trying to disclaim responsibility for what you're actually doing, which is attempting to carry the water for Eric S Raymond, by denying that the article says what it says. If you are arguing in good faith, then your observations ARE your opinions, unless you're trolling. You didn't write any disclaimer that you didn't actually believe what you were observing, and the assumption and rule for this site is that you should be arguing in good faith, not just trolling. Posting "observations" that aren't your opinions is the definition of trolling and insincerely "pushing your own ideals".
So what exactly ARE your ideals of what the open source movement should be, if not being "not racist" and "not misogynistic"? Or are you too coy and afraid to admit your own opinion in public? The vile person you're defending certainly isn't. Why do you agree with ESR in your belief that it's wrong to push the ideals that it should be "not racist" and "not misogynistic", and why do you disagree with those ideals like ESR so strongly and vocally does?
How long have you been involved with the free open source software movement, to have such strong opinions about its goals and ideals? Longer than I have? I sent RMS the Copyleft (L) sticker in 1984, when I knew both RMS and ESR, after discussing the Gnu project with RMS personally, face to face, and borrowing a 68000 manual from him to work on some free software.
And also after often listening to ESR, who called himself "Eric the Flute", drone on and on endlessly about his "Teenage Mutant Ninja Turtle Net News Reader" at science fiction conventions, trying to dominate the conversation and interrupt people with a topic that nobody else was interested in listening to him brag endlessly about. And for your information, ESR never freed or shared the source code of TMNTNNR or collaborated with anyone on it: it was his own failed proprietary closed source "cathedral" project, that he was notorious for insufferably and arrogantly bragging about during the 80's, but never releasing, and finally giving up on because he didn't have the skills to finish and deliver it, and nobody wanted to collaborate with him or listen to him talk about it any more.
Has ESR ever personally hit on your wife and made her say "eww", like this guy documents?
I've certainly witnessed women being disgusted by his unwanted, aggressive, and gross attention, at many science fiction conventions during the 80's. He also would usually stink to high heaven with this certain stale decaying organic twang, as if the previous day he had wet sloppy sweaty sex, and then purposefully didn't take a shower or change his clothes afterwards, just so everyone could smell it and know that he got laid. Maybe that's just his way of attracting women and impressing men. What would you think if he hit on your wife, and she took him up on his offer?
How and why do you believe Eric S Raymond's racist and misogynistic ideas that you're trying to carry the water for are good for the Open Source movement? Or do you know him personally, better and longer than I do, and well enough to confidently state that he's not racist or misogynistic, despite all the evidence?
If you're just pretending to deny Eric S Raymond's manifest and well documented racism and misogyny, but don't really believe what you're saying, then Popehat's Law of Goats perfectly applies to you and your trolling:
He who fucks goats, either as part of a performance or to troll those he deems to have overly delicate sensibilities, is simply a goatfucker.
"He claimed he was just pretending to be racist to trigger the social justice warriors, but even if he is telling the truth, Popehat's Law of Goats still applies."
Why do we even have those weird characters in ASCII still? We need a newer version where the characters people actually use are encoded in the fewest bits.
"That property is still useful, and thus in 2017 the AT convention has survived in some interesting places. AT commands have been found to perform control functions on 3G and 4G cellular modems used in smartphones."
But modern modems aren't connected over serial. There's no concept of line speed. This property is entirely irrelevant to modern hardware. A much more plausible explanation is simply that extending the interface made it easier to extend existing codebases to new contexts.
"IoT devices still speak RS-232"
This is actually dangerously untrue! RS-232 used positive voltages for 0 and negative voltages for 1, between 3 and 15 volts each. Attaching RS-232 to a modern IoT device's serial interface is likely to kill the device. What is common between old-school serial and the serial ports on modern devices is stuff that's out of the scope of RS-232 (eg, the 8N1 framing isn't defined by RS-232), and using the RS-232 terminology to describe it is about as accurate as calling it RS-422.
"If you know what UTF-8 is (and you should) every ASCII file is correct UTF-8 as well."
I mean kind of? ASCII is a character set, it doesn't define the on-disk representation. An ASCII file saved with each character packed into 7 bits isn't going to be valid UTF-8 without some manipulation. This is just an odd claim to make given the earlier discussion of varied word lengths and transfer formats that weren't 8-bit clean.
"But in 2005 Linus Torvalds invented git, which would fairly rapidly obsolesce all previous version-control systems"
(cries in Perforce)
Git was a huge improvement over CVS and SVN. Claiming that it rendered everything that came before it obsolescent just suggests massive ignorance of chunks of the software industry.
And yes this is all ridiculously pedantic, but given the entire tone of the article is smugly pedantic it doesn't seem unfair to criticise it on that basis.