Boy, I remember the days of thick coax, pre-commercial Ethernet and Ethernet workalikes.
For the MIT AI Lab, LCS and EECS departments, we strung the fat coax all over the campus for CHAOSnet (a home-grown MIT equivalent used to connect the Lisp Machines and DEC-10/-20 machines with vampire taps), and any kink/dent would cause massive signal loss. So we'd get out our trusty TDR (time-domain reflectometer) and find where the kink was on the cable from the test point, then climb through ceilings and crawl spaces to cut and splice it out, or just replace whole cable segments.
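For anyone who hasn't used one: a TDR sends a pulse down the cable and times the reflection off the impedance discontinuity at the fault. The distance arithmetic is simple; here's a minimal sketch, assuming a typical velocity factor of ~0.66 for solid-dielectric coax (the actual figure for the CHAOSnet cable is my assumption, not a measured value):

```python
# Sketch of TDR fault-location arithmetic: the pulse travels out to the
# fault and back, so the one-way distance is half the round trip.
C = 299_792_458  # speed of light in vacuum, m/s

def fault_distance_m(round_trip_ns, velocity_factor=0.66):
    """Distance from test point to fault, given reflection delay in ns.

    velocity_factor is the signal speed in the cable as a fraction of c;
    ~0.66 is typical for solid-polyethylene coax (assumed here).
    """
    round_trip_s = round_trip_ns * 1e-9
    return (velocity_factor * C * round_trip_s) / 2

# A reflection seen 500 ns after the pulse puts the kink ~49 m down the run.
print(round(fault_distance_m(500), 1))
```

Then you pace off that distance through the ceilings and crawl spaces, which is the hard part.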
Indeed! I was one of what I called "a cast of tens" in that effort, responsible for a MIT Math Department applied math computer lab, a long run to the south corner of Building 2. We didn't have significant problems with our cable, but I did hear of others, and there was the fun time "a backhoe tried to connect to the CHAOSnet" at Vassar Street, between the main campus and the AI Lab and LCS.
I still miss symbolically named ports (e.g. "TELNET") vs. IP's numbered ones (then again IP had to run on the ARPAnet's old routers and was explicitly a WAN and a very well designed one, CHAOSnet was a LAN which was something of a quick hack, like NFS).
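The numbered ports survive, of course, with the symbolic names relegated to a lookup table rather than the wire protocol. On a Unix-ish system the mapping lives in /etc/services and is exposed via the standard socket interface; a quick illustration (assumes a conventional /etc/services is present):

```python
import socket

# IP addresses services by well-known number; the symbolic name is
# resolved locally before anything hits the wire.
telnet_port = socket.getservbyname("telnet", "tcp")
print(telnet_port)  # the well-known TELNET port, 23

# And the reverse lookup, number back to name:
print(socket.getservbyport(23, "tcp"))
```

Whereas a CHAOSnet connection request actually carried the contact name itself, so the name was the protocol-level identifier rather than a local convenience.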
Yes, I remember some backhoe incidents, perhaps the same ones. ;-)
Funny, IP at the time seemed very clunky and verbose to those of us used to the concision of WANs like the pre-IP ARPAnet host-to-host protocols, and I remember the massive hair involved in the first DEC-20/TENEX implementations of TCP/IP. It took a while before people really understood TCP/IP enough to implement it well. (Some of the faster, smaller implementations first came out of Dave Clark's group at LCS, he being one of the designers of IP.)
Clunky and verbose by comparison indeed, but at least it was to a purpose. I can remember the day/night? the ARPAnet turned off the old "NCP" protocol, and from MIT-Multics we were able to e.g. ping a machine in Norway.
What's really impressive is how it's stood the test of time, albeit with critical implementation improvements like Van Jacobson's flow control (http://en.wikipedia.org/wiki/Van_Jacobson). But we did have to scramble for quite a while.
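The core of Jacobson's fix was making senders back off multiplicatively on loss while probing for bandwidth additively, so the network converges instead of collapsing. A toy sketch of that additive-increase/multiplicative-decrease idea (illustrative only; the real TCP state machine with slow start, ssthresh, and timeouts is considerably hairier):

```python
def simulate_aimd(rounds, loss_rounds, cwnd=1.0):
    """Toy AIMD congestion window: +1 segment per RTT, halve on loss.

    loss_rounds: set of round indices where a packet loss is observed.
    Returns the window size after each round.
    """
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease: back off hard
        else:
            cwnd += 1.0                # additive increase: probe gently
        history.append(cwnd)
    return history

# Window climbs, gets cut in half at the loss in round 5, then climbs again.
print(simulate_aimd(10, loss_rounds={5}))
```

That sawtooth is the signature you still see in TCP throughput graphs today.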
Did coax Ethernet ever work reliably? I remember spending entire days fiddling with cables and terminators trying to get multiplayer Quake to work over a network; we almost always had to resort to (very slow) serial ports instead.
Yeah, coax Ethernet did work reliably, or at least well enough to get the job done, but none of us missed it when twisted pair became available; it was a major pain when a physical disruption would take out a whole cable segment. Note the original coax for commercial Ethernet was quite a bit narrower and more flexible than that used for the CHAOSnet. From very old memory, something close to maybe 3/4ths of an inch vs. 0.375 of an inch for what came to be named 10BASE5: http://en.wikipedia.org/wiki/10BASE5 The original research Ethernet also used narrower coax.
MIT did use it for some years before commercial Ethernet eventually displaced CHAOSnet (I was gone by then), but, yeah, it was a constant struggle to keep it working reliably. People would work in the cable spaces and ding or crimp the coax outer shield, and we'd have to go find it.
"Ethernet was developed in the context of the internet with its seven levels of the ISO reference model," he said. "So we had a job to do at level one and two, and we didn't burden Ethernet with all the other stuff that we knew would be above us. We didn't do security, we didn't do encryption, we didn't even do acknowledgements."
Very interesting story—I wish it had been more detailed in places, but I love this middle section, about how simplicity sometimes beats complexity.
It's also amazing how prevalent Ethernet still is, even when wireless is a competitor. The other day I left this comment: http://news.ycombinator.com/item?id=5052448 on HN, because in some circumstances running a cheap Ethernet cable from a router to a desk, couch, or other work station can still be a real win, especially given how inexpensive even very long Ethernet cables are from Monoprice.com.
They last forever, aren't subject to the level of interference wireless is, and, in many conditions, have faster data transfer speeds. Ethernet is still great.
Unfortunately these days it's either painfully slow or painfully expensive. Individual disks have been able to outpace GigE for years, even ignoring SSDs and modest RAID configurations, and 10GigE isn't exactly sprinting full pelt into the general market, with cards starting at around £300 and the less said about the switches the better.
It's very lame how I can pump out >400MB/s locally on a cheap NAS with some old spare HDs, but can never get more than about 80MB/s out of it over SMB. Thanks for helping make my backups take about 4x longer than they should, Ethernet.
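The arithmetic behind that complaint checks out. Using the figures from the comment (the backup size below is a made-up example, and GigE line rate ignores protocol overhead):

```python
# Back-of-envelope on local vs. over-the-wire NAS throughput.
local_mb_s = 400          # MB/s the NAS sustains locally
gige_wire_mb_s = 1000 / 8 # GigE line rate ~125 MB/s, before any overhead
smb_mb_s = 80             # MB/s actually observed over SMB

backup_mb = 500 * 1000    # hypothetical 500 GB backup

local_minutes = backup_mb / local_mb_s / 60
smb_minutes = backup_mb / smb_mb_s / 60
print(round(local_minutes), "min locally vs.", round(smb_minutes), "min over SMB")
print("slowdown factor:", local_mb_s / smb_mb_s)
```

A 5x throughput gap is 4x extra time, so "about 4x longer than they should" is exactly right, and even a perfect GigE link would still cap the NAS at under a third of its local speed.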
I'm half tempted to give a second hand InfiniBand setup a try, seeing as 10Gbit cards can be had from about £15.
Fun times!