> That's insane. What's even more insane is that a bit over 20 years later home computers reached that frequency. And in the next decade they reached over 100 MHz.
The CDCs still had a good run. That line was originally released in 1964. It started at 10 MHz, but that was with 60-bit words and dedicated floating-point hardware. IIRC floating-point multiplies took only one clock cycle; if that's right, a multiply took 100 nanoseconds.
The Apple II came out in 1977: 1 MHz, an 8-bit CPU, and no floating-point hardware. Floating-point math took many cycles in software, and you typically used only 32-bit floats (because even that was painful enough). A single 32-bit floating-point multiply took 3-4 milliseconds, according to:
https://books.google.com/books?id=xJnfBwAAQBAJ&pg=PA26&lpg=P...
The original IBM PC came out in 1981. Its clock was 4.77 MHz. But again, that was misleading. Internally the 8088 was a 16-bit CPU, but its memory bus was only 8 bits wide. It didn't normally come with a floating-point processor. There was one, the 8087, and I think the original IBM PC had a socket for it, but it cost big $$$ and the 8087 wasn't actually available for purchase until ~6 months after the PC's release. That chip ran at 4-10 MHz. If you bought the coprocessor, you were finally getting to somewhat similar speeds for numerical calculations... but that was 16+ years later.
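For a rough sense of scale, here is a quick back-of-envelope sketch in Python using the figures above. The 8087's cycles-per-multiply is an assumed round number for illustration, not a datasheet value:

    # Back-of-envelope arithmetic behind the comparison above. The CDC and
    # Apple II figures are the ones quoted in this thread; the 8087 cycle
    # count is an assumption for illustration, not a datasheet number.

    MHZ = 1_000_000

    def multiply_time(clock_hz, cycles_per_multiply):
        """Seconds per floating-point multiply at a given clock and cycle count."""
        return cycles_per_multiply / clock_hz

    cdc_6600 = multiply_time(10 * MHZ, 1)    # ~1 cycle at 10 MHz -> 100 ns (if the IIRC above is right)
    apple_ii = 3.5e-3                        # ~3-4 ms in software, per the book linked above
    pc_8087  = multiply_time(5 * MHZ, 100)   # assumed ~100 cycles on a 5 MHz 8087 -- a guess

    for name, t in [("CDC 6600 (1964)", cdc_6600),
                    ("Apple II (1977)", apple_ii),
                    ("IBM PC + 8087 (1981)", pc_8087)]:
        print(f"{name:22s} ~{t * 1e6:9.1f} us per multiply  ({t / cdc_6600:7.0f}x the CDC time)")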
Interestingly, the original "sx" designation for Intel 386 chips (80386sx) meant the same sort of thing... the 386sx was a 32-bit chip with a 16-bit external bus. The dx was 32/32.
A product generation later, with the 486, Intel changed what the suffix meant: SX vs. DX indicated whether or not the CPU had an on-chip FPU.