Yes, the draft was mentioned in the article as well. However, the version 1.0 paper was dated 30 September 1980, although it didn't appear in the ACM until the following year.
I wonder why the broadcast address was changed from being all zeroes to all ones in Ethernet II (is that the right term for the Ethernet we all use today?)
In general, Ethernet doesn't do collision detection anymore: practically every wired Ethernet link runs full duplex from a machine to a switch, with the switch having buffers to absorb contention. Worst case, the second packet is discarded; there's no need to back off for a random interval.
Does wireless ethernet have CSMA/CD, or is it some better time based protocol?
> The original implementation used coaxial cable, as it was widely available for TV sets, to be able to act as the physical layer of the network – although with coaxial/10Base2 networks, the end of the co-axial cable run required the use of a terminator to avoid signal reflection.
Reminds me of the early 90s when I convinced our company to try LANtastic instead of the bulky IBM Token Ring network we had at the time. Tiny coax cables with BNC connectors and terminators, and no central server or MAU. Each workstation was its own "server" and we could finally use our IBM AT for actual work instead of having its RAM stuffed full with IBM network management software! Hell of a lot cheaper too!
Well, the random delay thing is not really relevant in modern switched networks where each switch port is a separate collision domain.
The whole core concept of Ethernet, a shared "ether" through which messages are sent, has pretty much died, because switched networks provide better performance and Moore's law made them economical.
Check out CDMA. There's a blog owned by one of the original engineers at Qualcomm that goes into the history and theory of CDMA. Unfortunately I can't find it.
Ethernet in the consumer space has been stuck at 1 Gbps for decades (1000BASE-T dates back to 1999.) I'm waiting to see a basic 8-port >= 2.5 Gbps switch for under $100.
If I had a time machine, rather than killing Hitler, I'd go back and convince them to make the ethernet header 16 bytes rather than 14 in order to keep things nicely aligned.
A 14-byte header causes lots of alignment issues. I first encountered this on the DEC Alpha, before the byte/word extensions. When using network drivers not designed for the Alpha, the kernel would take an alignment fault when accessing 4-byte IP addresses that were aligned on a 2-byte boundary.
While you're back there, please have them implement router-based IP fragmentation as simple truncation (plus a flag), rather than turning one packet into two, or dropping it and sending an out-of-band response that often gets lost.
Cool. It’s neat that some ideas from the original Ethernet paper are still used today. I wonder how it’ll be 100 years from now: will we still use MAC addresses?
* https://www.cl.cam.ac.uk/teaching/1920/CompNet/files/p395-me...
* https://dl.acm.org/doi/10.1145/360248.360253
* https://en.wikipedia.org/wiki/Ethernet#History