Pretty interesting behaviour, but I don't think you can claim this is a fault in the controller. The contract was invalidated the moment he wrote invalid data to a register; after that, anything may happen.
Today, most hardware has some protection against abuse, but in the days of the BBC Micro it was common for hardware and OS to completely trust the software. There were plenty of stories of monitors damaged by invalid timing parameters.
One of my own stories, and I'm a bit hazy so some details are probably wrong: we found an old computer in a university dumpster one night and decided to mess with the floppy drive. More beer than common sense around. There was some way to tell the hardware which track to read/write, with sane values from 0 to 79. But it was a byte, so we could go to 255, and we decided to go up, up, up!
Well, the drive did exactly as commanded. Tracks 80-85 worked quite well, except the floppy wasn't guaranteed to be magnetically coated out there. But once we pushed it too far, the read head went literally over the edge: it jumped off the axle, dropped onto the spinning disk below, got a serious yank, and the tiny wires snapped off.
All of this with a single x86 'out' instruction, I think in DOS 3.x DEBUG. The OS didn't stop you from doing something stupid, and there was no hardware protection or anything.
At least the drive on which I accidentally tried the same thing had a mechanical stop. Although banging the head against it 170 times or so wasn't exactly beneficial for its alignment.
Even today, one is better off not writing undocumented values to registers. True story: I had to investigate a software bug report concerning a modern, fairly popular microcontroller. (I won't name which.) Sometimes data in RAM changed without our code writing to it. It turned out that our startup code accidentally wrote to a 'reserved' bit in a register, activating some kind of internal RAM test mode. This was confirmed by the µC's manufacturer.
This was how an Apple II made sure it was on track zero - by banging the head against the stop. That's why it makes that typical noise when booting up.
I really wish it were common for manufacturers to fully document features like that. Even if they said "DON'T USE THIS, IT WON'T DO WHAT YOU WANT, WE'RE NOT RESPONSIBLE FOR ANYTHING YOU DO!", it might open interesting opportunities. Even for stuff like the Z180 test pin, or the 68010 internal processor state that's pushed on address and bus exceptions, it would be interesting to know what those things do.
> The contract was being invalidated when he wrote invalid data to a register
This is similar to the "undocumented instructions" on many CPUs of the time. The Z-80 was famous for reacting to invalid opcodes in somewhat useful ways. The 6502 also reacted in weird ways to invalid opcodes, but I don't remember any useful behavior. When the 65C02 came out (//c and enhanced //e), all invalid opcodes mapped to NOPs.
x86 had a few undocumented opcodes. AAD and AAM have a second byte that's documented as always being 10, but it turns out to be an argument for the base. There was also SALC.
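Here's a rough sketch in Python of the AAM/AAD register semantics with an arbitrary second byte instead of the documented 10 (paraphrased from the usual descriptions, not a cycle-accurate model, and ignoring the flags the real instructions set):

```python
# AAM imm: split AL into "digits" in the given base (AH = quotient, AL = remainder).
# AAD imm: recombine AH:AL back into a single binary value in AL, clearing AH.

def aam(al, base=10):
    """AAM imm -> AH = AL // imm, AL = AL % imm."""
    return al // base, al % base          # returns (AH, AL)

def aad(ah, al, base=10):
    """AAD imm -> AL = (AH * imm + AL) & 0xFF, AH = 0."""
    return 0, (ah * base + al) & 0xFF     # returns (AH, AL)

# With the documented base 10: 63 splits into decimal digits 6 and 3.
print(aam(63))        # (6, 3)
# With an undocumented base 16, it splits into nibbles instead:
print(aam(63, 16))    # (3, 15)
# AAD undoes the split:
print(aad(6, 3))      # (0, 63)
```

So "AAM 16" effectively gives you a free byte-to-nibbles split, which is why people bothered with the undocumented immediate at all.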
A CPU, when it encounters bytes in its instruction stream, has to do something. If there is no invalid-instruction interrupt, it has to do something anyway. Common behaviour is shadowing legal instructions, but completely new behaviour is always possible.
Actually doing something useful on invalid opcodes is cool, but a bad long-term idea - you'll end up having to support all the accidental behavior that ends up being used forever, or risk breaking backwards compatibility.
Before analog monitors got a built-in delay for changing the video mode, you could intentionally wreck the device with a few lines of QBasic, and by the time someone had power cycled the poor machine while troubleshooting, the evidence was long gone.
Analog monitors didn't get the delay for protection. It was a side effect of the auto-sync feature, where the monitor spends some time analyzing the input sync signals to automatically adjust its timings.
Fascinating read, although I take issue with the title's implication that it's a bug in the chip. Maybe a bug in the author's code (writing wrong values to the register), but definitely not a bug in the chip; more like a feature. If you write undocumented values into a register, undocumented stuff happens. A lot of ICs have undocumented test modes that are used by the manufacturer.
Forgive me for being even more pedantic, but the person who received the self-destructing messages was Dan Briggs, who was replaced by Jim Phelps in season 2. Ethan Hunt only got the job in the first movie of the series.
Good reason to get a Gotek floppy emulator and work with copies instead. With the open source FlashFloppy software, they work great, and are pretty cheap.