I wonder if this could be creatively incorporated to allow running 8080 CP/M programs (albeit likely very, very slowly). I would imagine API calls could be forwarded up to the bare metal CP/M OS.
I was wondering the same, but then I thought it would probably be easier to just translate the Z80 CP/M binaries to 6502... That should be relatively straightforward, and since CP/M relies heavily on the BIOS/BDOS there shouldn't be that many issues... But I'll probably just watch his series on YouTube first; maybe that's something he's planning/thinking about too.
> but then I thought it would probably be easier to just translate the z80 CPM binaries to 6502
All the CP/M binaries are for the 8080, not the Z80. The only places you might find Z80 code in a CP/M system are the machine-dependent I/O code in the CP/M BIOS - which will necessarily be different for any 6502-based computer hardware and would need to be rewritten anyway - or in the third-party apps. Though even there, most of the third-party companies restricted their software to using only 8080 code so their user base was as large as possible.
The important part of the CP/M architecture was the thousands of independent apps that used CP/M as their underlying OS.
Very probably, the best 6502-based approach would be to keep all the CP/M system software and apps as 8080/Z80 code and use a 6502-based Z80 emulator. The BIOS itself could be replaced with native 6502 code as it uses a jump-table for the entry points.
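One way to picture that hybrid approach: the 8080 interpreter runs the unmodified BDOS and apps, but any jump landing in the BIOS jump-table region gets intercepted and forwarded to a native handler instead of being interpreted. A minimal sketch, with assumed addresses and hypothetical names (`BIOS_BASE`, `bios_trap`, etc. are illustrative, not from any real emulator):

```c
#include <assert.h>
#include <stdint.h>

#define BIOS_BASE    0xFA00u   /* assumed address of the BIOS jump table */
#define BIOS_ENTRIES 17        /* classic CP/M 2.2 BIOS entry count */

typedef struct {
    uint16_t pc;
    uint8_t  mem[65536];
} i8080_t;

/* One native handler per BIOS entry (BOOT, CONST, CONIN, CONOUT, ...). */
typedef void (*bios_fn)(i8080_t *);

static void noop_handler(i8080_t *cpu) { (void)cpu; }

/* Each jump-table slot is 3 bytes (an 8080 JMP instruction), so the
 * slot index can be recovered directly from the trapped PC. */
static int bios_trap(i8080_t *cpu, const bios_fn *handlers) {
    if (cpu->pc >= BIOS_BASE && cpu->pc < BIOS_BASE + 3 * BIOS_ENTRIES) {
        int entry = (cpu->pc - BIOS_BASE) / 3;
        handlers[entry](cpu);
        return 1;              /* handled by native 6502 code */
    }
    return 0;                  /* ordinary 8080 code: keep interpreting */
}
```

The interpreter's main loop would call `bios_trap` before fetching each instruction; everything below the BIOS stays byte-for-byte original.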
I dunno about CP/M binaries in particular, but static translation of machine code from that era can be really difficult because of mingled code and data, self-modifying code, and unclear bounds on jump tables.
So it might not be very easy at all, depending on how clever the people who wrote it were.
A neat fact I recently learned: Bill Herd (designer of the C128) said in a presentation at VCF East earlier this year, that the Z80 in the C128 was done because the C64 Z80 cartridge had power and heat problems. It was easier to just put it in the C128 versus trying to make the C128 compatible with the C64 Z80 cartridge.
I'm bothered a bit about the use of llvm-mos (or really any C compiler) for developing low-level 6502 code. The 6502 is my favourite CPU, but it's not very well suited to C, and most C compilers don't emit well-optimized 6502 assembly compared to what a manual coder would do.
If he had to use a C compiler just so he didn't go insane, however (which I sympathize with), it seems to me that cc65 would have been a better choice and certainly more widely used. But most assemblers implement a good macro system which helps with sanity.
> The above code fills a graphics 8 screen. CC65 ran in 4587 jiffies while MOS did so in 2631. Another test leveraging lookups saw CC65 running in 2663 jiffies while MOS ran in 396 jiffies.
> I have C code that leverages page zero that brings CC65 down to 664 jiffies while MOS runs in 358.
Interesting post, thanks. It certainly seems like llvm-mos' codegen is improving markedly, modulo some concerns brought up over bloat. But I also note that a number of the performance recommendations (prefer using more globals, avoid passing structures, presumably due to the limited hardware stack) aren't exactly current programming convention - and for that matter, they're exactly what you would do writing assembly by hand. These recommendations would hold for cc65 too, of course, but it still makes the point that using C to develop for the 6502 is just making the dog walk, not making it walk well.
The AtariAge results are fairly out-of-date; I think that's even before we started doing whole-program zero-page allocation.
The only real 6502-specific C caveat left for llvm-mos is that you should strongly prefer structs of arrays to arrays of structs; and that's not even that 6502-specific. Otherwise, standard C gives fairly tight assembly.
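The struct-of-arrays preference is easy to see in code. With an array of structs, every field access needs `base + i * sizeof(struct)`, a multiply the 6502 has no addressing mode for; with parallel byte arrays, `sprite_x[i]` maps straight onto `LDA sprite_x,X`. A small sketch (the sprite example and names are mine, purely illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define NSPRITES 8

/* Array of structs: indexing needs base + i*sizeof(struct sprite),
 * i.e. multi-instruction address arithmetic on a 6502. */
struct sprite { uint8_t x, y, frame; };
struct sprite sprites_aos[NSPRITES];

/* Struct of arrays: each field is a flat byte array, so an 8-bit
 * index maps directly onto the 6502's abs,X addressing mode. */
uint8_t sprite_x[NSPRITES];
uint8_t sprite_y[NSPRITES];
uint8_t sprite_frame[NSPRITES];

static void move_all_soa(uint8_t dx) {
    for (uint8_t i = 0; i < NSPRITES; i++)
        sprite_x[i] += dx;     /* roughly one LDA/CLC/ADC/STA per sprite */
}
```

The same loop over `sprites_aos[i].x` forces the compiler to scale the index by 3 on every iteration, which is where the extra code and cycles come from.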
That being said, every couple hundred lines of generated assembly for any reasonably-sized C program will contain at least one WTF, from a human point of view. Removing those WTFs one at a time is the long tail of a compiler engineer. Still, I'm not going anywhere!
> As for the code itself, we perform a remarkably effective loop optimization that detects 16-bit index operations that can be converted to a 16-bit index plus an 8-bit offset. The latter is a directly-supported addressing mode on the 6502, and 8-bit index manipulation can be done in a single instruction.
> This allows us to convert idiomatic 16-bit "int c" loops into something much more suitable for the 6502. Eventually, we hope that optimizations of this kind will transform standard, naive C code into tightly optimized 6502 code.
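To make the transformation concrete, here is my own sketch (not llvm-mos source) of the before/after shapes: a naive `int`-indexed loop versus a page-stepped version where the inner loop uses only an 8-bit offset, the form that maps onto the 6502's `(zp),Y` addressing with a single `INY` per element. Both fill the buffer identically:

```c
#include <assert.h>
#include <stdint.h>

/* Naive idiomatic C: the 16-bit "int" index forces multi-byte
 * index arithmetic on every iteration of an 8-bit CPU. */
static void fill_naive(uint8_t *screen, int len, uint8_t v) {
    for (int c = 0; c < len; c++)
        screen[c] = v;
}

/* The shape the optimization aims for: a 16-bit base pointer stepped
 * in 256-byte pages, with an 8-bit inner-loop offset. */
static void fill_paged(uint8_t *screen, int len, uint8_t v) {
    while (len > 0) {
        /* chunk == 0 encodes a full 256-byte page */
        uint8_t chunk = (len >= 256) ? 0 : (uint8_t)len;
        uint8_t off = 0;
        do {
            screen[off++] = v;         /* STA (zp),Y ; INY on a 6502 */
        } while (off != chunk);        /* single-byte compare/branch */
        int step = (chunk == 0) ? 256 : chunk;
        screen += step;                /* 16-bit add once per page */
        len -= step;
    }
}
```

The expensive 16-bit arithmetic happens once per 256-byte page instead of once per element, which is exactly why the transformed loops run so much faster.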
I feel kinda sad that macro assembly is a lost branch of the programming language tree. To me it feels pretty Lisp-like, in that it's more an exercise in world-building than 'a then b then c'. It also makes it a lot more feasible to use all those interesting processor-specific frobs.
I guess assemblers would have had to evolve past fighting over Intel vs AT&T syntax.
I mean, I think it suffers the same problem as lisp on that front. If you're coming into a codebase would you rather read boring C or try to figure out the mind of the person who crafted the lisp/macros in it?
Even C codebases can suffer this with obnoxiously clever macros, but it's rarer at least.
cc65 did not support the tricks he needed to do to support relocatable code, or at least he couldn't figure out how to get cc65 to support them. From another forum:
"I was going to do it all with cc65 but then couldn't figure out how to persuade cc65 to produce CP/M-65 binaries with the relocations"
Since CP/M is a primarily text-based OS, I wonder if I could port it to the 65CE02 in a Commodore A2232 serial card. It only has 16K of RAM, but the RAM is fully accessible by the m68k in the host system, so it should be easy to load the OS and give it access to a virtual disk.
This is an interesting rabbit hole. I'd never really looked into the 65CE02[0]. I guess I always just assumed it was a die shrink of the 65C02. I didn't realize that the chip had really interesting new features. It's a really cool evolution of the 6502.
- yes, that 8080 emulator ought to work fine. Not sure what the performance would be like but it's certainly worth a try. Sadly, unless the author shows up and relicenses it I can't use that one ('may not be redistributed without express written permission of the author').
- llvm-mos: the only bits in C are some of the utilities, and that's only because I happened to have (buggy) C versions from one of my other CP/M projects. What I'd actually like to do is to write a PL/M compiler with LLVM and try and compile the original Digital Research code for the 8080 with it.
- re porting: should be trivial to do! The only thing you need is a BIOS, with console I/O and sector read/write. Everything else is the same. You can even use the same binaries. But the documentation is minimal, so if you're interested, file a GitHub issue and I'll try to assist.
Also, I _really_ want some software for this thing, primarily programming languages. I do plan to port Microsoft's BASIC (now open source), but I need an assembler. If anyone knows any suitable 6502 assemblers from back in the day _which have actual licenses_ please let me know.
Honestly, any deep dive into the CP/M BIOS/BDOS architecture should suit you.
It's really quite basic.
The BDOS was common across platforms, with the machine specific BIOS acting as the interface layer between the BDOS and the raw hardware. The BIOS deals with all of the interrupts and shoving values in and out of I/O ports (or addresses) hoping the hardware behaves.
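That split is why CP/M ported so widely: the machine-independent BDOS only ever reaches the hardware through a fixed vector of BIOS entry points, so a port supplies a new vector, not a new BDOS. A hypothetical sketch in C (the struct and names are mine; the real interface is an 8080 jump table, not function pointers):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the BDOS/BIOS boundary as a vector of entry points. */
typedef struct {
    void    (*conout)(uint8_t c);   /* console output */
    uint8_t (*conin)(void);         /* console input */
    int     (*read_sector)(uint16_t trk, uint16_t sec, uint8_t *buf);
    int     (*write_sector)(uint16_t trk, uint16_t sec, const uint8_t *buf);
} bios_vector;

/* A "BDOS-level" routine written purely against the vector; it never
 * touches I/O ports or interrupts itself. */
static void bdos_print(const bios_vector *bios, const char *s) {
    while (*s)
        bios->conout((uint8_t)*s++);
}

/* Demo BIOS for a machine whose "console" is just a RAM buffer. */
static char demo_console[32];
static int  demo_pos;
static void demo_conout(uint8_t c) { demo_console[demo_pos++] = (char)c; }
```

Swapping `demo_conout` for a routine that pokes a real UART or video memory is the whole porting job, as far as the BDOS is concerned.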
A better approach would be to compare things like CP/M, FLEX for the 6800, and even TRSDOS for the TRS-80. CP/M and FLEX are similar in that they're "generic" OSes for different hardware. TRSDOS is interesting because the DOS offered actual system services.
These are more interesting than the ROM based "OS" of things like the C64 and Atari. For example, the C64 doesn't really have a "DOS", it has a smart peripheral that you connect to over a serial interface and send commands. The "DOS" is actually implemented on the disk drive, not the host machine. Similarly with the Atari, it had a very nice device driver level abstraction, but that was as far as it went.
The novelty of this project is the relocatable binary loader. CP/M didn't have one of these (it didn't need one). The memory maps of the various 6502 machines are quite varied. Having large chunks of the RAM mapped to video displays didn't help matters much. And all of the systems were pretty fast and loose with low memory.
CP/M needs, like, 16 bytes of low memory "committed" for the OS; then everything else is shoved to the end in high memory. The 8080/Z80 used I/O ports, compared to the 6502's memory-mapped I/O (yet more holes in the 6502 memory map).
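Since 6502 load addresses vary so much between machines, a relocating loader has to patch each binary for wherever it lands. A generic sketch of the idea (this is not CP/M-65's actual relocation format, just the common page-relocation scheme: a table of offsets whose bytes get the load page added to them):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Walk a table of offsets into the loaded image and add the load
 * page to each recorded address high byte, so the same image can
 * run at any 256-byte-aligned address. */
static void relocate(uint8_t *image, const uint16_t *fixups, size_t n,
                     uint8_t load_page) {
    for (size_t i = 0; i < n; i++)
        image[fixups[i]] += load_page;
}
```

For a machine with a big video hole in the middle of RAM, the loader just picks a page range that misses it and relocates accordingly.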
To be fair, the TRS-80 has a much different memory structure than a random CP/M machine. CP/M machines were designed FOR CP/M, but the BIOS let them have all sorts of flexibility for disk drives, I/O and other hardware. Out of the box, raw CP/M cannot run on the early TRS-80s; lower memory simply isn't available.
It's interesting to note the smaller TPA size on the Master (25kB) vs C64 (46kB) as the Master came with 128K minimum.
I know that the Master still had a 16-bit address bus, but I was under the impression that it would still be possible to get access to a full 64K in one go if needed (the rest being paged in and out).
No, the Master followed the BBC Micro architecture, which was 32K of RAM followed by 32K of ROM, made up of 16K of paged ROM then 16K of OS and memory-mapped peripherals. The other 96K that made up the 128K was 64K of sideways RAM (which could be paged into the paged ROM area), 20K that could be paged over the top of the RAM (where the screen data would live), and 12K that could be paged over parts of the OS ROM.
On the Master I could fairly easily extend the TPA up into the sideways RAM area, adding another 16kB, for a total of probably about 45kB. The difficulty is that filesystems in Acorn's MOS can't access sideways RAM directly, as that's where the filesystem module itself lives, so all I/O would have to be done by copying the data to low memory, calling MOS, then switching the sideways RAM back in again afterwards. There may also be subtleties in catching interrupts and things which might try to change sideways RAM pages. But it should totally work.
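The copy-down/call-MOS/copy-back dance described above can be sketched as a simulation. Everything here is modelled in plain arrays with hypothetical names (`mos_read_sector` stands in for a real MOS disc call); on real hardware the "paging" steps would be writes to the ROMSEL latch rather than no-ops:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SECTOR 256

static uint8_t sideways_ram[16384];  /* paged bank holding the TPA extension */
static uint8_t low_buffer[SECTOR];   /* bounce buffer in always-visible RAM */

/* Stand-in for a MOS disc read into main memory; fills the sector
 * with a recognizable byte pattern for the demo. */
static void mos_read_sector(uint8_t *dst) {
    for (int i = 0; i < SECTOR; i++) dst[i] = (uint8_t)i;
}

static void read_into_sideways(uint16_t offset) {
    /* 1. With the filesystem ROM paged in, do the MOS call against
     *    the low-memory bounce buffer (MOS can't see sideways RAM). */
    mos_read_sector(low_buffer);
    /* 2. Page the sideways RAM bank back in and copy the data up. */
    memcpy(&sideways_ram[offset], low_buffer, SECTOR);
}
```

The per-sector `memcpy` is the cost of the scheme: every byte of file I/O into the extended TPA crosses main memory twice.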
For the _really_ advanced version, it could run bare metal and reuse the OS workspace at 0xc000 all the way up to 0xe000. But that would lose all MOS services, so it would have to reimplement things like screen redrawing and floppy disk handling.
Building it on top of ProDOS might be fairly doable.
ProDOS provides an API for block-based file I/O that's fairly simple and supports multiple different drive types. On a 128k system you'd get a 64k RAM disk for free. It might fit nicely with the current code.
And ProDOS already lives in the bank-switched Language Card area, so there would be good chunk of main RAM to work with, a little under 46K from $0800 to $BF00.
Both of the currently supported platforms, C64 & BBC Micro, also had Z80 add-ons available which allowed running CP/M. I don't think the project is necessarily about it serving people better.
C128 CP/M is slow compared to other CP/M systems due to oddities of its implementation, but it works, it's native on the Z80, and with a 1571 you even get disk compatibility. In 80 column mode, it's certainly fast enough.
I think the point is that if your use-case actually needs stable CP/M, the original Z80 version running on an add-on card is more useful than a hobbyist port.
Old 8-bit systems still run many businesses just fine.
I'd rather run some critical part of my business on an Apple II with software that's had 30 years to eliminate every last bug than some IoT piece of crap.
A couple years ago I shut down Chicago's premier burger joint for an hour because I tried to pay with cash. Their IoT cash register locked itself and their fancy iPad based POS system refused to take any new orders. They were trying to figure out who to call in SF, had to @ them on Twitter for support.
It was ridiculous. Normalize running your business on small computer systems.
Running a business on an Apple II is at least as silly as running it on an iPad, since any kind of hardware failure is going to be harder to quickly resolve (unless you're the kind of business that has electrical engineers to hand. But even then, it's still not as quick as buying a new stick of RAM from the local computer store).
There is a sane middle ground between your experience and running CP/M on a 40 year old computer.
> software that's had 30 years to eliminate every last bug
Bugs can exist in hardware too. Plus I wouldn’t be so quick to assume that every last software bug would be fixed. For starters any bug fixes in that time could introduce new bugs. And there are often bugs that are time specific (like epoch overflows).
Sure and if you own a BBC Micro/Master you could do the same via something like the Torch Z80 second processor. The whole point of this project was to build CP/M for the 6502, most likely to scratch an itch which many of these comments seem to completely miss.
The guy wanted to see if it could be done on a 6502. Who said it had to be 'useful'? Why are you so sure your definition of 'useful' and his definition of 'useful' are the same here? Why does virtually every post on HN about someone's personal, possibly quixotic quest to do something for fun attract a pack of tech-Karens who tut-tut it with "well OBVIOUSLY you should have done something different than what you did. I should tell the Manager."
Does anyone want to buy a paperback copy (2022 reprint) of CP/M 3.0 System Guide? :)
I have a few copies (had to get 10 printed to get anything printed), so far only on Polish allegro.pl, not yet eBay