It feels weird that so much old software from the 80's or even 90's and newer has to be reverse-engineered in order to be understood. The process almost seems like something an archeologist would perform on an artifact thousands of years old. But this software likely has people living who actually built the stuff and know how it works. I understand not everyone can be bothered in life to commit to a project, but surely a phone call could be answered occasionally. Is it just that knowing who these people are is too difficult or that existing copyright issues simply make it unsafe?
Assuming that someone has saved the source, you'd have to track them down. If it was work for hire, like say at a fruit company, that person probably doesn't have the rights to it. So you have to go to the lawyers. The fruit company lawyers might not actually be able to verify that the fruit company owns it, but if they want to try, it will require some effort to find the documentation, which will be somewhere in dead tree-based archives from the 1980's if it exists at all.
Which is to say that if you can identify the rights holder and it's not an individual, you have to convince them to spend time and money to release rights that they may not even be able to easily confirm they hold.
Or you could grab the binary and disassemble it and figure out what it's doing yourself.
It’s not that absurd that people conserve their own effort, time and money while a different group of people spends its own time and money on understanding something it values differently than the people who originated the software did. You could impose obligations on people to release more than they do, but that would just change the calculus of what they choose to work on and release at all, because now other people are deciding how their time gets spent.
The absurdity is only a small one and does not imply any kind of suggestion about imposing obligations or anything like that.
Again, the details of how the situation came to be are not any mystery.
The end result, setting aside the chain of reasonable steps that led there, is in a sense silly, or perhaps remarkable in the literal sense of "bears remarking": one person is studying something like an archeologist, assembling guesses and deductions from incomplete clues, while another person is alive somewhere in the world who natively possesses the answers the first is guessing at.
No one said that it was a crime that demands action, and no one said anyone's individual actions, either the original developers or the "archeologists", are absurd.
We’re largely in agreement, but I think words like “absurd” are too strong for an Internet forum no matter how lightly they’re intended, because they don’t travel well. I also wanted to preemptively explore a possibility; that wasn’t to suggest it was something you said (you didn’t) or something I was advocating for (I’m not), just to flesh out the discussion here for future readers of this forum by offering up another perspective. I just don’t think there’s a good universal fix for this, whether it’s an essential absurdity or an inconvenience, but as you can see in the other responses to me, some people reading this will take what you or I said in a different and entirely predictable direction.
I just want people reading you, before they get to the “what can we do to resolve this situation?” phase, to think through the implications and knock-on effects of trying to address it at all, and the manner in which they do so.
>It feels weird that so much old software from the 80's or even 90's and newer has to be reverse-engineered in order to be understood. The process almost seems like something an archeologist would perform on an artifact thousands of years old.
This is a plot point in Vernor Vinge's A Deepness in the Sky. Thousands of years in the future, pretty much all software has already been written, so software archaeologists trawl through archives to find, debug, and run code (in as many emulated environments as needed).
Wonderful novel. Galactic standard time is still counted as the number of seconds since a certain epoch. (January 1st 1970 00:00 UTC as it happens, though both that calendar system, and the precise significance, if any, of the date, are lost to history by the time of the novel. The first moon landing, maybe? So few records from that time survive.)
Qeng Ho tradition holds that the epoch is the first moon landing, but Pham looked closer (he is a programmer-at-arms, it's his job to know these things) and saw that 1970 was the epoch.
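For the curious, that's just Unix time: seconds counted from an epoch of 1970-01-01 00:00 UTC. A minimal illustration in Python:

    import time
    from datetime import datetime, timezone

    # Unix time: seconds elapsed since the epoch, 1970-01-01 00:00:00 UTC.
    now = time.time()
    epoch = datetime.fromtimestamp(0, tz=timezone.utc)

    print(f"Seconds since the epoch: {now:.0f}")
    print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00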
People tend not to remember the details of things they worked on decades ago, let alone have access to source code or contemporary documentation. Even Jordan Mechner, enough of a design-note-taker and packrat to produce detailed retellings of his development process many years later, misplaced the original source to Karateka.
In my quest to reverse-engineer the Ensoniq Mirage software I reached out to one of the original authors, who replied to my email and said that while he remembered working on the Mirage fondly he couldn't remember any details and no longer had any of his notes or paperwork from the 1980s.
A few months ago I moderated a talk between four of the original Acorn team members who wrote RISC OS [explanation below] in the 1980s. They're all of retirement age now.
Three of them were full of anecdotes and stories and details about the company, the machines, the buildings, etc. The other could barely remember anything about it; his former colleagues were prompting him for details of what he wrote about 35Y ago, but he had little idea. Too much other work since, he said, and too long ago, and he'd forgotten.
I hang out in various retrocomputing fora for fun, and it amazes me the utter nonsense that some people come up with from vaguely-remembered things they knew 30Y ago. They don't know that they've forgotten, so they free-associate stuff and just make it up, and then many get resentful when their wildly wrong answers are corrected.
RISC OS, for youngsters and Americans (where Acorn didn't sell much) is the original native OS for the ARM CPU, written by the company that developed the CPU and the machine it would power. It is still around today, it's FOSS now, and it runs on some modern ARM hardware such as the Raspberry Pi.
It is a multitasking GUI OS with a rich desktop, networking, internet support, and so on. It ran in from 512kB to 4MB of RAM. It was the first ever GUI computer to anti-alias screen fonts as standard, and to move the whole window when it was dragged, rather than dragging a dotted outline and redrawing when the mouse button was released. The OS, its GUI, its core apps (text editor, image editor, console prompt, filer, etc.) fitted into and executed directly from ROM, that is, without being copied into RAM.
It was also the first GUI with a dedicated panel showing icons for running apps, plus a system control menu, a clock, and so on -- it inspired the Dock in NeXTstep (which came out 2Y later), and both of them inspired the taskbar in Windows 95 (which came out another 7Y after that).
And the whole OS was written by about 6-7 programmers, the vast majority of it in hand-coded assembly language.
I seem to remember that having a mouse with a context menu click, what is now usually the right button click, was another RiscOS innovation.
Acorn ARM machines always came with a three button mouse and I think the context menu was the middle button. I think the third button was used for cancelling things but I can't quite remember.
I had a PC with Windows 2 around the same time, and it always seemed weird that the right button on my MS mouse did almost nothing (I think you could use it for column select in Word, but that was about it). It was like the hardware and software teams were working separately, which was probably the case.
The Macs at the time had only one button and that at least seemed like a deliberate design choice. I wasn't a fan of it but I could see that someone had deliberately decided to make things simple with click, double click and long click on a single button.
However, this was not an Acorn innovation: several Unix vendors, including Sun, also used 3-button mice, with comparable functions.
I'd argue that Acorn's innovation was formalising the functions and providing a rich GUI to use them.
1. The left button is called Select. It does what a left-click does on most OSes: select an item, and so on. It terminates the action: e.g. click Select on a button in a dialog box, it closes the dialog.
2. The middle button is called Menu. It opens a context menu related to what you click on. These are the only menus in RISC OS. There is no menu bar, no hamburger menu, etc., only context menus. Each app's global menu is the context menu.
3. The right button is called Adjust. It changes what it's clicked on without terminating it. E.g. click Adjust on the OK or Save button in a dialog, it applies the changes without closing the dialog. So, no need for an Apply button.
Adjust-click a scroll bar and it scrolls in the opposite direction from clicking with Select, so no need for a wheel mouse.
Lots of OSes have Select and Menu mouse buttons now; nothing else has Adjust.
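To make the scheme concrete, here is a minimal sketch of how a toolkit might dispatch the three buttons. The names and the Dialog class are hypothetical stand-ins, not the RISC OS API; it only illustrates the Select/Menu/Adjust behaviour described above, in Python:

    from enum import Enum

    class Button(Enum):
        SELECT = 1   # left: act, and terminate the action
        MENU = 2     # middle: context menu for whatever is under the pointer
        ADJUST = 3   # right: act, without terminating the action

    class Dialog:
        """Hypothetical dialog with an OK button."""
        def __init__(self):
            self.open = True
            self.applied = False

        def click_ok(self, button: Button):
            if button is Button.SELECT:
                self.applied = True
                self.open = False      # apply the change and close the dialog
            elif button is Button.ADJUST:
                self.applied = True    # apply the change but leave the dialog open
            elif button is Button.MENU:
                print("open the context menu for the OK button")

    def scroll_step(direction: int, button: Button) -> int:
        # Adjust reverses the scroll direction, so one arrow serves both ways.
        return direction if button is Button.SELECT else -direction

    dlg = Dialog()
    dlg.click_ok(Button.ADJUST)
    print(dlg.open, dlg.applied)           # True True: applied without closing
    print(scroll_step(+1, Button.ADJUST))  # -1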
The good things: cleaner GUI (no visible menu bars or buttons, no Apply buttons, no file navigation in file dialogs because you drag and drop to save.) Richer control of that GUI with 1980s mice. (No scroll wheels or anything because they hadn't been invented yet.)
The bad things: a steeper learning curve. If it's your first GUI, it's harder to learn. Mouse controls are significantly more complex than GUIs with a 1- or 2-button mouse. But it's more powerful. If you already know a GUI, you have to unlearn stuff, which is harder still.
To use the classic computers/cars simile:
I do not have a car and don't drive much, but I would describe using a 3-button mouse in RISC OS as being like using a manual transmission. The Mac OS X/Windows model, with 2 buttons, is like using an automatic. It's easier to learn and easier to use, but you sacrifice a large degree of direct control. Scroll wheels are like flappy paddles on an automatic car's steering wheel: they try to restore a bit of control, but they only influence it a little; you don't actually get the full native control back.
I tried RISC OS on a single-core Raspberry Pi and I was quite impressed at how well it ran. Significantly faster on this board than the stock Linux desktop environment based on LXDE.
Downsides are the usual ones: setting a native monitor resolution is not as easy (or even possible) in the era of high-resolution screens, and the web browser cannot cope with the modern web.
I'm always impressed by those early brains that had no problem creating new things for the first time in assembler.
And as someone who used RISC OS 2, I should point out that RO2 was significantly smaller, faster, and far more responsive than RO3 -- and that's on an 8MHz ARM2.
RO is much smaller and quicker than any other modern GUI OS. I believe, from faint memory, that the first Raspberry Pi version the ROOL organisation demoed to Raspberry Pi founder Eben Upton didn't have drivers to access the SD card yet, so it booted into RAM and then couldn't load anything else.
He asked how big it was: they told him, it's about 6MB, compressed.
He said, "no, not the kernel, the whole OS."
They said "that is the whole OS!"
6MB for kernel, filesystems, GUI, desktop, terminal emulator, core apps (text and image editors, BASIC interpreter, etc.)
I've only ever seen 3 OSes that compare at all to RO in terms of snappy responsiveness.
2. The 1990s QNX Demo Disk showed that a desktop PC OS can rival RISC OS in compactness. It's the only one that could, though, and due to heavy compression it wasn't super fast.
3. BeOS was very close to as snappy and responsive, albeit ~2-3 orders of magnitude bigger. BeOS showed that PC hardware could compete: a PC in the hundreds-of-MHz class, with a 3-digit number of megabytes of RAM, could boot in seconds off a spinning rust HDD, and present a desktop with exceptional responsiveness.
Today, even a multi-core multi-gigahertz class machine running Windows or Linux or macOS feels like wading in molasses by comparison with RISC OS or BeOS.
Sadly, this includes Haiku and M1 Macs.
I've had no difficulty setting RISC OS on a Pi to whatever the native res of the connected screen was. Can't reproduce that one.
Modern browsers are coming; a WebKit-based browser is in testing now, as is a new TCP/IP stack with IPv6 and Wifi support.
> Significantly faster on this board than the stock Linux
You need to remember that Linux, like Unix before it, decouples the GUI from the OS (and from the programs running on it) through multiple layers of abstraction. It was also designed to be ported to different CPU architectures, not for speed.
Unix as it was designed doesn't have any direct support for a GUI at all. Or networking, come to that.
They are not so much decoupled, as multiple separate trailers being pulled along behind on a tow-bar.
If you're at a Bash prompt on a machine called "foo", logged in as user "bar", how do you look at the contents of a file called "readme" in folder "/usr/share/doc/" on a disk labelled "quux" on a machine called "baz"? Assume you have all necessary permissions to do this.
(Answer: you don't.)
If your terminal is on monitor #2 on machine F, how do you open a terminal on monitor #3 on machine B elsewhere in the lab, without knowing what desktop or display server it's running?
And all those separate trailers impose a performance penalty. You can have a very fast Unix machine that doesn’t have a GUI and doesn’t understand a network, and you can have one where your home is an NFS mount halfway across the globe.
My point is that there are very fast OSes which remain portable and have this sort of thing built in and integrated right into the kernel, such as Oberon and A2, or Plan 9, or Inferno.
So when you originally said:
«
... decouples the GUI from the OS (and from the programs running on it) through multiple layers of abstraction. It was also designed to be ported to different CPU architectures, not for speed.
»
I think you are mixing up cause and effect, and trying to blame an accidental side-effect by making out that it's a core aspect of the design.
It's not that the Unix GUI layer is "decoupled" from (as in, intentionally kept separate from) the kernel.
It wasn't. The kernel was designed without any thought of GUIs or consideration for them. There was no conscious layering or designing for portability here.
That's like saying... um... the new government after the French Revolution kept the railway system decoupled from government. It did no such thing. They had no conception or thought of railways; they're "decoupled" because one was bolted on a century later.
UNIX didn't decouple the GUI from networking, either. It never entered into the picture.
And yet, remember that the first lines of UNIX v1, long before C or anything, were written after Engelbart gave "the mother of all demos". This stuff _was_ happening, it was out there and it was on the radar.
Thompson and Ritchie realised their mistakes and went on to fix them, in Plan 9, and then to improve upon Plan 9 in Inferno.
But the Unix community had seized upon the older version by then and it wasn't AND STILL ISN'T interested in breaking what works in the interests of making it smaller, simpler, cleaner, faster and more efficient.
AmigaOS predates RISC OS by two years (1985 / 1987).
It was also a true preemptive multitasking OS (as you pointed out, Windows would only become preemptive with Windows 95), it had the same move-window behavior you describe (thanks to the Amiga's custom chips, namely the Copper and the Blitter), and it also featured proportional scrollbars (another thing that would only land in Windows ten years later).
The 3.0 or 3.1 in the Amiga 1200 I got years later might have had that as an option in the Preferences, but I’m not sure. It used outlines by default in any case.
The rest of your post is, AFAIK, entirely correct.
I did see RiscOS at the time, at an Amiga party where the Acorn distributor in Spain was showing the RiscPC, and was amazed by it. I considered selling my 030/50+FPU accelerated A1200 and getting a RiscPC, but ended up not doing it. To this day I’m not sure that was the right call; the RiscPC did not have an FPU, but otherwise it was much better suited to the kind of applications I wanted to build.
All AmigaOS versions (for original hardware) dragged only window outlines.
However, there were freeware "commodity" programs available that would patch the OS to drag the whole window.
Various such desktop add-ons were quite popular.
In fairness, I've found this also tends to be the answer one receives when asking the author of some code from two years ago about some important details. Comment your code well, kids. Remember to document intent -- the why, not the what.
It's a sensible reply when you don't want to be accused of holding onto IP that you don't own.
I have kept source code from many projects that I've worked on but if some stranger asked me for details about any of them I'd give the exact same response.
At this point the company that bought the company that bought Ensoniq is gone, so it's hard to imagine who would come looking for 12kB worth of 6809 assembly code.
One thing to remember is that most Apple ][ software barely had source code as we would understand it today, as most code was written in 6502 assembly language rather than in a high-level language (compilers at the time took too much memory and generated poor code anyway). So even if you had access to the original source, pretty much the only advantage over using a disassembly would be that there might be a handful of comments.
In assembly source you also have meaningful labels for branch destinations, routines, and global variables, plus named constants for offsets into tables, bitmasks, and such. The original assembly language source can greatly ease reverse engineering, especially if it uses good naming.
Some assemblers have macros; it's much better to be able to read the macro invocations than their assembled expansions.
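As a toy illustration of what that naming buys you (in Python rather than 6502 assembly, and with entirely made-up names and values), compare a routine as a disassembler presents it with the same routine as its author might have written it:

    # What a disassembler effectively gives you: addresses and magic numbers.
    def sub_1f40(a: int, x: int, tbl_0c00: list) -> int:
        if a & 0x80:
            return tbl_0c00[x + 0x10]
        return 0

    # The same routine as original source: labels and named constants carry the intent.
    FLAG_SPRITE_VISIBLE = 0x80   # hypothetical bitmask
    ROW_TABLE_OFFSET = 0x10      # hypothetical table offset

    def sprite_screen_row(flags: int, sprite_index: int, row_table: list) -> int:
        if flags & FLAG_SPRITE_VISIBLE:
            return row_table[sprite_index + ROW_TABLE_OFFSET]
        return 0

Both versions do the same thing; only one tells you what the programmer was thinking.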
Software Heritage are solving this problem for current software by archiving software source code. I think that at one point they were talking about doing source code escrow, for folks who write proprietary software.
Shit, I can't even tell you what I was trying to do with software I wrote a week ago. I imagine the person who wrote it knows about as much about the code as the person calling them.
Someone please ELI5. I was no savant at 13 years old. But I remember being able to trace through 65C02 code and figure out enough of it to hack around and fix Software Automatic Mouth to work in 80-column mode. When you redirected output from SAM to the screen it would revert to 40-column mode.
I remember also hacking around with other 65C02 assembly language applications and making modifications.
Over the next 10 years, I worked with other processors in assembly (68K, PPC, x86) and 65C02 was simple. Especially since most programs were written in 65C02 and not compiled down to it.
I think what helped most was that programs were small and that no tricks were necessary or even possible to be efficient w.r.t. cache misses.
If Linux ran on some multi-gigahertz 64-bit evolution of the 6502 the instruction set would still be simple, but a few Gigabytes of 6502 code still would be a challenge to understand, more so if that 6502 were out of order, speculatively executing, multi-core, and various parts of the code were written taking that into account.
What are you going to believe, the instructions in the binary itself or what someone may or may not remember when it was written? The latter is good if you want to have some historical context around certain decisions, but the former is far more concrete when trying to understand how it works and how to modify it.
> Fonts: Improve ()^ glyphs
>
> The parentheses strokes were a single pixel wide, which is much narrower than most other glyphs. And the caret was so minimal it was almost invisible.
>
> Beef up these glyphs.
The commit message describes the fix but doesn’t show the change. Anybody have more info? I’m curious how such an issue could have been present for almost 4 decades, and whether this really would have been better on the monitors of the day.
The name makes this sound like a more widely used and 'official' product than it really was: on the 8-bit Apple, this is a pretty niche thing. It's not the interface anybody (statistically) was actually using to interact with an Apple //ish computer.
Very few people used Apple II Desktop back then. To really be useful, you'd need a hard disk, which wasn't that common, and ProDOS software, which wasn't common either. ProDOS is also a single-task OS, and Apple II software expects to have the full machine under its control (not sure Apple II Desktop has interfaces to build GUI apps for it - would be fun). All that ProDOS provides is a hook you call when the app exits so that a program launcher can be reloaded (which is, IIRC, what Apple II Desktop does).
This software is usable at the stock 1 MHz system clock found on all Apple II-type computers except the GS (2.8 MHz) and //c+ (4 MHz).
At 1 MHz it is impressive. There are cards that offer clock speeds of up to 16 MHz, and some can also have the 65816 CPU! Very little software supported that configuration; the Merlin 16 assembler does, for those wanting to program a 16-bit 6502.