As a sysadmin in the 90's it was considered malpractice to not have a copy of tomsrtbt[1] available at all times. The idea was, in a pinch, if you brick your internet gateway you'd have a chance of recovery by booting from tomsrtbt. It had just enough tools crammed onto the disk to fix configurations, fsck a disk, and rewrite the bootloader.
Only one of five mirrors works, but then I discovered that the last version is from 2002, so I guess it's surprising that Ibiblio still holds out.
I vaguely remember the name, but at some point recovery tools switched to mini-CDs for me, which aren't too large either at ~170 MB or so. I've got to find the name of whatever recovery-CD build I had, since it's got Memtest86+ on it, which can still come in handy…
Hiren's? It's got a pretty ancient version of Memtest86+ on it these days FWIW. Then again for some hardware I touch the newer memtests and Darik's don't boot.
There was a cottage industry of interesting floppy distributions back in the day. I remember Hal91 (mostly for the ascii art and the creator's email at an online.no domain - funny for a young student who hadn't internalised the meaning of ccTLDs) and muLinux (which used superformatted floppies to cram an amazing amount of stuff into those few megabytes).
I used to use tomsrtbt as a daily driver in an old 486 that I removed all the moving parts from. A totally silent machine, after reading the OS into memory from the floppy.
I built a linux on a floppy about 20 years ago, with a number of ethernet drivers built in (the common ones of the day, some 3com, tulip and a few others), X11 (tinyx with vesa fb, so close to universal support then, just not accelerated) with vnc and rdesktop. i.e. a thin client on a floppy.
It was a demonstration of how one could compile dynamically (for the smallest binaries), but then "reassemble" libc so it includes only the symbols the actual applications need. I demonstrated this with uClibc, which didn't need the help as much, and with glibc (which, if memory serves, was by itself close to the size of the floppy). If all the binaries had been statically linked they would have been too big (because of symbols duplicated between them), but by essentially stripping libc of unneeded symbols I was able to create an embedded system that worked perfectly.
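With today's gcc/binutils you can get a similar effect from section garbage collection (just a rough sketch, not necessarily the "reassembled libc" approach described above, and it shrinks far more with uClibc/musl than with glibc):

    /* tiny.c -- sketch: let the linker discard every libc function the
     * program never calls.  Hypothetical build line:
     *   gcc -Os -static -ffunction-sections -fdata-sections \
     *       -Wl,--gc-sections -o tiny tiny.c && strip tiny
     */
    #include <unistd.h>

    int main(void)
    {
        /* only write() and its dependencies survive the link */
        write(STDOUT_FILENO, "hello\n", 6);
        return 0;
    }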
It might be a bit harder today, for the same reason it's hard to actually statically link glibc today: glibc can do dynamic loading at runtime even when it's statically linked into the binary.
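For instance (a sketch to illustrate the caveat, not code from the project above), a simple name lookup pulls in NSS, which glibc loads with dlopen() even from a "static" binary:

    /* resolve.c -- built with `gcc -static resolve.c', glibc warns at link
     * time that getaddrinfo() still needs the shared glibc/NSS modules of
     * the build machine at runtime, so the lookup can fail on a bare system. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <stdio.h>

    int main(void)
    {
        struct addrinfo *res = NULL;
        int rc = getaddrinfo("example.com", NULL, NULL, &res);
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;
        }
        freeaddrinfo(res);
        puts("lookup ok");
        return 0;
    }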
> if all the binaries were statically linked, they would have been too big (because of duplication of symbols between them)
Apropos of this, another solution I've seen (more of a dirty trick, but also greater space savings) is to build libc as a fixed chunk of page-aligned memory, and then de-duplicate the underlying disk sectors between different executable files. Works pretty well, assuming a read-only or cooperative filesystem.
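For the curious, something like this (a crude sketch, not the original tool) shows how much two binaries built that way could actually share at the block level:

    /* pagedup.c -- count 4 KiB-aligned blocks that two files have in
     * common, i.e. what a block-deduplicating filesystem could share. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE 4096

    static long load(const char *path, unsigned char **buf)
    {
        FILE *f = fopen(path, "rb");
        if (!f) { perror(path); exit(1); }
        fseek(f, 0, SEEK_END);
        long n = ftell(f);
        rewind(f);
        *buf = calloc(n / PAGE + 1, PAGE);               /* zero-pad the last block */
        if (!*buf || fread(*buf, 1, (size_t)n, f) != (size_t)n) { perror(path); exit(1); }
        fclose(f);
        return (n + PAGE - 1) / PAGE;                    /* number of blocks */
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) { fprintf(stderr, "usage: pagedup A B\n"); return 2; }
        unsigned char *a, *b;
        long na = load(argv[1], &a), nb = load(argv[2], &b), shared = 0;
        for (long i = 0; i < na; i++)                    /* naive O(n*m); fine for floppy-sized files */
            for (long j = 0; j < nb; j++)
                if (!memcmp(a + i * PAGE, b + j * PAGE, PAGE)) { shared++; break; }
        printf("%ld of %ld blocks in %s also appear in %s\n", shared, na, argv[1], argv[2]);
        return 0;
    }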
I did a similar thing at uni. There was an arts lab with a bunch of DOS-only PCs that were never used; my boot disk had a minimal Linux and X to start a remote session to the beefy Alpha servers.
That's awesome. I recently ran [ELKS](https://github.com/jbruchon/elks) (Embedded Linux Kernel Subset) on a Toshiba T1200 laptop from 1987 with an 8086 processor. Booting into a unix-like system from a floppy with recently developed software is just surreal.
Kudos to the creator! Fitting a bootable Linux on a single floppy was already doable some 20 years ago; my firewall back then was in fact floppyfw (0) running on a 486 board. But in the meantime the kernel has grown a lot, despite being much more modular than the old ones, and userland utilities and libraries have also grown in size, so the accomplishment is not trivial. Thumbs up!
This reminded me of DSL [0]. I used to use it as my router 15 years ago.
The box was loud while booting (because of the floppy), but once booted was completely silent as everything ran from RAM. The Pentium CPU had just a heatsink with no fan, and no fan on the power supply.
Nice work! I miss the sounds computers used to make back in the day! BTW what issues were you having with writing the image data to the floppy? Did dd not work for you?
I should see the floppy drive as /dev/fd0. Just now, as I was double-checking everything, I found out that external (USB) drives show up as /dev/sd* instead; sdb on my computer.
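Once the device node is known, dd with the right of= does the job; for reference, the same thing in C (just a sketch; /dev/fd0 vs. /dev/sdX depends on your setup, and writing to the wrong node is destructive):

    /* wrfloppy.c -- a tiny dd stand-in: copy an image to the floppy device
     * one 512-byte sector at a time, then flush it to the media. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) { fprintf(stderr, "usage: wrfloppy image /dev/fd0\n"); return 2; }
        int in  = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        char buf[512];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            if (write(out, buf, (size_t)n) != n) { perror("write"); return 1; }

        fsync(out);    /* make sure the data actually reaches the disk */
        close(in);
        close(out);
        return 0;
    }

(Equivalent to "dd if=floppy.img of=/dev/fd0 bs=512".)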
I'd love to give this a try as a replacement for FreeDOS. I've used FreeDOS for a couple of silly embedded projects (the clock in my living room, for one), and the biggest limiting factors are:
1) Network support - it's glitchy at best; ODI drivers + Watcom TCP never seem to work for very long. If I toss out coreutils, that gives me close to 500k for a network stack.
2) Boot time. I suspect this will be _worse_ on Linux than on FreeDOS, but on FreeDOS bootstrapping bare essentials like APM and an ANSI terminal driver adds overhead that's "just there" on Linux, so they may be neck and neck for all I know.
I seem to remember a distro for turning an old PC into a router called "Coyote Linux". It fit on a 1.44mb 3.5" floppy. There was another too that I'm not remembering currently. I find it amazing what can be done with simple computers and advanced software.
I used that as a kid, in place of Microsoft's internet connection sharing software that my dad had set up. I vaguely recall it worked better for multiplayer games.
Any layer 3 router is basically just a computer and software. Some of the more mechanical tasks can be accelerated in hardware, but that's only important with really high-throughput setups. It is pretty impressive how much data you can shovel through a modest CPU, though. Part of that is the fact that CPUs process many bits in parallel. You could imagine a 20-year-old CPU running at 100MHz shovelling 50 million words per second between memory locations, which at 32 bits is 1.6Gbps.
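Spelling that estimate out (rough, illustrative numbers, assuming about two cycles per 32-bit word moved):

    /* copy_bw.c -- back-of-envelope copy bandwidth for a ~100 MHz CPU */
    #include <stdio.h>

    int main(void)
    {
        double clock_hz        = 100e6;  /* 100 MHz */
        double cycles_per_word = 2.0;    /* load + store */
        double bits_per_word   = 32.0;

        double words_per_sec = clock_hz / cycles_per_word;    /* 50 million */
        double gbps = words_per_sec * bits_per_word / 1e9;    /* ~1.6 Gbps  */
        printf("%.0f Mwords/s -> %.1f Gbit/s raw copy bandwidth\n",
               words_per_sec / 1e6, gbps);
        return 0;
    }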
I wouldn't overstate the software routing performance too much. That 100MHz CPU would probably struggle to route even 100Mbps, especially with conntrack and/or small packet sizes.
Also, in hardware routers packets are routed on dedicated custom silicon without them hitting the relatively slow general purpose CPU.
I was describing raw data shovelling throughput. Of course there's going to be per-packet overhead. 100Mbps internet connections were quite rare 20 years ago, so the point still stands that for a layman / average home LAN, general purpose CPUs can shovel an impressive amount of network traffic.
Sure, you'll likely need purpose-made hardware above a certain amount of throughput. I suspect that practical threshold these days is between 1Gbps and 10Gbps, although it's a much grayer line than it was 20 years ago. The network interfaces are likely the bottleneck rather than the CPU's ability to shovel, and latency will always be higher than with dedicated hardware.
There will of course always be a need for hardware to go faster than what a CPU can do. Tbps is becoming a unit folk use regularly.
Another fun fact is that Quake 1 fits on a floppy if you throw away the single-player maps and most of the character models, leaving like six of the models and a handful of multiplayer maps (weren't there also only six of those in total?). Which was enough for my school, where Q1 thus kept replicating onto computers every time after being deleted by the sysadmins.
This one doesn't include GNU, though GNU tools are required to build it. I like the idea that knowing GNU/Linux at this level gives you some makeshift tools to salvage or recover from situations that would otherwise be lost.
Need a server or router and all you have is an old computer with some NICs? All you have is a pendrive and you need to recover some files from an unbootable machine? Don't have MS Word but can use gzip and grep and want to extract text or images from a docx? Got an old computer with no hard disk but a working floppy drive and need Linux running on it? Need to send an SOS AM signal and all you have is an old CRT? A modern Linux distro with a few dev packages and system tools is like a digital Swiss Army knife in these situations.
Some of these scenarios are stretching reality, but some are not. These skills are the modern IT equivalent of MacGyverisms.
I imaged Windows 95 and 98 PCs using "udpcast" from bootable Linux floppies back in the late 90s. I can't remember the kernel version now, but I remember that it was really nice when "make bzImage" was introduced and the kernel could be stored bzip2-compressed. As I recall I got a few more KB for files in my root filesystem after that feature was introduced.
It's amazing how much comes back just from researching for this comment: fond memories of manipulating "root flags" with "rdev", for example. Knowing exactly what was going on, from the BIOS handing off the boot sector through the kernel starting up and mounting the root filesystem, was a real treat.
Dealing with the unreliability of floppy disks and floppy drives, however, I will never feel nostalgia for.
How well did bzip2 run on Win9x-era computers? In my experience, while the compression ratio was significantly better than gzip's, both compression and decompression were significantly more CPU-intensive.
It was much slower than gzip. Decompression isn't so bad, but compression is very slow (especially with "-9"). The compression ratio was better, though, and every sector I could get back helped. My application was for unattended operation and I rarely needed to recompile the kernel, so the time penalty wasn't something I noticed.
"Due to new features being introduced and the general size increase of the Linux kernel, devices now need at least 8 MB of flash and 64 MB of RAM to run a default build of OpenWrt."
I run OpenWRT on a device with 4MB of flash storage, and I update it regularly. The guide I posted from OpenWRT goes into specifics about targeting platforms with limited storage.
Can someone define the term "embedded" in this context? I had always assumed that "embedded" meant that the OS was written onto non-removable read-only storage. But I'm realizing that I don't actually know what the definition is.
Embedded means any low-resource, cost-efficient computer system designed with a specific class of tasks in mind, as opposed to general-purpose computing devices like mobile phones (which aren't embedded), PCs, or servers. On the other hand, mobile phones actually contain one or more embedded systems as components; see the PinePhone GSM modem documentation, for example.
This or its tape equivalent has been pretty much how UNIX installation has been done since time immemorial:
1. Your bootstrap ROM boots the system from a file on floppy, tape, or file server.
2. That file is either a second-stage bootstrap with more features that repeats this process, or an installation kernel that has support for a minimum/guaranteed system configuration.
3. You boot that kernel, and tell it where your installation root is (floppy, tape, file server), and it uses that to run the installer.
4. The installer walks you through partitioning and copies a miniroot to your swap partition, and reboots to that.
5. The booted miniroot completes the installation, whether prompting or using information stored to the miniroot from the previous step, and reboots.
6. At this point you may have more config to do since you’re running the “real” system. Some UNIX systems had fancy automatic configuration that would detect what devices you had beyond the minimum and rebuild your kernel with the appropriate drivers and tuning parameters, create the /dev nodes, and so on, and then reboot one more time to a “complete” system.
7. Done! Now is the time for the system administrator to back this all up, so all they need to do in the future is use the installer to partition and restore the contents of the partitions from a backup instead of distribution media.
I downloaded LOAF and tried it on a 386 circa '98, not really knowing what Linux was. Sadly it didn't boot for some reason because I could have started my enlightenment about four years earlier. Instead I spent the late years of my adolescence struggling to make "terminal" apps with Microsoft compilers. GCC would have been really nice.
Ah yes, tomsrtbt. I completely forgot about it. It even looks like it's still going. I used to have it as a grub boot target, and it has saved my cheese more than once...
Difficulty: What can you fit onto a 1.2MB 5.25" disk?
There's a retrobattlestations contest to "boot from a 5.25" floppy disk" coming up this week, so I've been down in the basement grabbing disks, and coming up all DOS..
Barely. If you want to just run some compiled C app, you don't need as many tools. You can even skip the BusyBox part and, instead of a shell, run only your application.
But I don't think that is what you want. For a sane set of tools it was a little over 1MB. Mine is bigger, as I still need more tools for fixing/experimenting on the live system (I'm using shell scripts for my application). In the end I will be removing more and more tools.
I'm disappointed there are no links to binaries/disk images to download. Building it all is overkill for the average reader to just try it out and see if it's at all useful.
I got similar problems when I mismatched 32-bit and 64-bit code. First, try running it in QEMU as a 64-bit system.
I ended up building the whole system on an old 32-bit laptop, rather than dealing with the problems of installing the missing 32-bit libs on my 64-bit system.
I tried it, but the QEMU boot failed because /sbin/init exists but it couldn't be executed (error -8).
I think there is a problem with the menu selections. Is there a command line way to select the right options? Or can someone share a working config file for both the kernel and busybox?
Assuming you are hashbanging /bin/bash at the top of your /sbin/init: ensure that shell has no missing dependencies and is compiled for the matching arch. It can be fiddly to supply shared-object dependencies for the shell, so it is easier to use a statically linked shell while bootstrapping.
Idea: create a hello-world program in C, configure your make script to statically link it, make sure it is compiled for the same arch as the distro you are booting (amd64 vs x86), place it as /sbin/init, ensure the execute bit is set, create a new image, boot, and see if you get output.
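Something like this should do as the test program (a minimal sketch; note that PID 1 must never exit, or the kernel panics):

    /* init.c -- statically linked "hello world" meant to run as /sbin/init.
     * Hypothetical build line (same arch as the target!):
     *   gcc -static -Os -o init init.c && strip init
     */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("hello from /sbin/init\n");
        fflush(stdout);
        for (;;)
            pause();    /* keep PID 1 alive so the kernel doesn't panic */
        return 0;
    }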
3X the RAM (and 10X the clock rate) of the original VAX-11/780 that 4BSD (a full-featured UNIX including TCP/IP etc.) was designed on and for, the sort of machine which might have served an entire CS department (or other organization) in the early 1980s.
In 1999 or 2000 outside of Julie’s Supper Club on Folsom, some guys gave me a CD-ROM cut into the shape of a business card. It was a rescue CD, I think the company was LinuxCare or something
[1] http://www.toms.net/rb/