ZedRipper: A 16-core Z80 laptop (chrisfenton.com)
814 points by pmarin on Dec 10, 2019 | 149 comments



Sounds like 1/16th of a ZMOB! ("The computer of the future, using the processor of the past.")

https://www.ijcai.org/Proceedings/81-2/Papers/071.pdf

>ZMOB: A NEW COMPUTING ENGINE FOR AI. Chuck Rieger, Randy Trigg, Bob Bane

>ABSTRACT: A new research multiprocessor named ZMOB is described, and its significance to AI research is surveyed and characterized. ZMOB, under current construction and scheduled for completion late Fall 1981, is a 256 processor machine with a high speed, interprocessor communications system called the "conveyor belt", a non-blocking, 20 megabyte/second message switcher. Because of its large number of processors, high computational throughput (100 million instructions/sec), large cumulative high speed memory (16 megabytes), high interprocessor communications bandwidth, and external communications and sensing channels, ZMOB opens some new areas of exploration in object-oriented modeling, knowledge-based and domain expert systems, intelligent sensing and robotics, and distributed perceptual and cognitive modeling. The paper attempts to blend a description of ZMOB's hardware with ideas about where and how it fits into various types of AI research.

https://retrocomputing.stackexchange.com/questions/5080/fast...

>The fastest Z80 computer ever designed and built was almost certainly ZMOB, a 256 node Z80A cluster designed and built at University of Maryland as part of NASA NSG-7253. That's a total of 1GHz of Z80 power.


I had never heard of that machine until I posted this and someone sent me a link to it. The architecture of the ZedRipper is actually incredibly similar - each CPU has its own dedicated 64KB of memory and then an I/O mapped high-speed synchronous ring bus connects them all. On a MHz * CPUs basis, mine should actually be slightly faster =)


Bob Bane hangs out here regularly!

By far the most important feature of ZMOB was that each board had four differently colored software-blinkable LEDs, all arrayed behind the smoked plexiglass cover of the rack they were all plugged into. It looked really cool and was great for hypnotizing and mesmerizing people, which was how we got a big NSF grant (which we spent on Vaxen and Suns and Xerox workstations).

https://news.ycombinator.com/user?id=bane

Here's a photo of some UMD people in front of ZMOB in the Department of Research Simulation, with some white Xerox file servers and a black Sun server in the background:

https://i.imgur.com/zLjYTS3.jpg


That was probably me; I posted a link in the Hackaday comment section. Don, I forgot about the colored LEDs. I need to find a picture!


Sorry, the only photo I have is monochrome (linked above). Maybe Bob Bane has some color photos! It ran a "Dining Philosophers" simulation that did a lot of cool blinking, and I ported FIG-FORTH to it, which was great for writing LED blinking programs.

https://www.cs.umd.edu/sites/default/files/zelkowitz-report....

Laboratory for Parallel Computation and Z-MOB Around 1980, Dr. Charles (Chuck) Rieger (Figure 3.12) designed a computing system to run as a network of processors. The initial design consisted of 256 Z80A processors. Each contained 64K memory, linked together on a “conveyor belt” (a 48-bit wide slotted ring architecture), as packets of information were passed from one processor (or “moblet”) to another. This “mob of Z80s” soon became known as Zmob. The system was programmed and controlled by a host VAX 11/780 computer. An Air Force Office of Scientific Research grant was obtained for building the basic Zmob hardware.

It was apparent to both Minker and Basili (after addressing the crush of student majors) that to be a top department, we needed more equipment. Minker was behind the acquisition of the VAX 11/780 (Mimsy) in 1980, which was a good start, but was insufficient. We needed terminals on every desk, additional machines like Mimsy, and a laboratory under our own control. Computers were our technology and if we couldn’t have and control our own machines, our research in this area would always be hampered.

In 1982 Basili organized a proposal team, consisting of Agrawala, Minker, Rosenfeld, Stewart, and Weiser. They met and came up with the idea of a parallel computation laboratory, using some of the ideas in Zmob as a starting point. A Coordinated Experimental Research (CER) proposal was written to NSF, and in 1983 it was funded as a 5-year $4.3M grant. This grant greatly expanded the departmental laboratory, purchasing machines tove, gyre and gymble, as discussed earlier.

PRISM, under the direction of Minker, was the core of the CER project. It was conceived as the software system designed to run on Zmob, and was run as a simulator on the VAX. It used Horn clause logic (similar to the design of Prolog) to implement AND/OR parallelism. Users could specify the number of problem solving machines (i.e., moblets) to use, the number of database machines to use, and the number of machines to handle executable procedures. Statistics were collected to determine how effective the search strategy was. Users could also run PRISM in a sequential Prolog-like manner.

Continued Zmob research was funded by this grant and a version consisting of 128 moblets was built and installed in the Department laboratory (Figure 3.13). Moblets were accessed by physical address or by pattern matching. Programming in Prolog like languages was seen as the mechanism for using Zmob. Mobix was the operating system designed to run on Zmob, which would hide many of the complexities of the Zmob hardware.

Due to inexperience in hardware design, Zmob ran, but not as reliably as desired. The Z80s were 8-bit processors and became obsolete too soon. 16-bit and 32-bit versions of Zmob were designed, and a 16-node ring of 16-bit processors using the Motorola 68010 processor, called McMob, was built and used for several years. Initial design work for a further advancement, called Chessie, was started, but a prototype Chessie was never built. Work continued for several years, but after the NSF grant that supported most of the development ended, work slowly stopped on that activity.

Mark Weiser’s foot “mole” of 1986 was another interesting idea that grew out of the CER grant. Instead of using a mouse controlled by the hand, a foot pedal was developed, eliminating the need to move the hands off the keyboard in order to control the screen. Several variations were built and tested on Sun workstations, but the concept never became fully operational. Unfortunately for the Department, Weiser soon left for a research position at Xerox Palo Alto Research Center and this activity also ended.

https://en.wikipedia.org/wiki/Mark_Weiser


Funny story: in my undergrad I worked at UMIACS and we had to install a set of foot-pedal mouse buttons in an office next to Minker's. The user would move the mouse on their desk, but click with their feet.


I used to joke that in the future, EMACS would require foot pedals to handle all the different modifier keys it would want.


I love all of the weird computers that got built by academics in the 1980s! That sounds like such a cool machine.


This is why I read hacker news! Awesome story thanks!


I’ve recently been working on an ESP32 variant of something similar [0]. ZMOB-like designs are so much fun to think about.

[0] https://twitter.com/eismcc/status/1201310140416217089?s=20


My own experiment has recently been a Z80 hooked up to an Arduino - which fakes I/O and RAM accesses to/from the chip.

I got as far as getting BASIC to run, but struggled porting FORTH to the Z80. The existing Z80 FORTH interpreters I found need almost 64K, and wiring up some static RAM hasn't yet given me a functional system, so I'm hazy on what's wrong: either my real RAM or the interpreter itself.


My issue when I built my homebrew computer in college was that I didn't think to put biasing resistors between the Z80 and the SRAM, since it's TTL.

You probably didn’t do something that silly though.


That's great! The idea of a cluster with a wireless interconnect is hilarious to me for some reason.


> [...], interprocessor communications system called the "conveyor belt"

Is that where the Mill architecture got the belt concept from? I think I heard about a belt concept in the context of the Mill architecture.


Nah, looks more like an NoC fabric.

The belt is more a temporal view of the bypass network, used in place of a register file.


I love the Z80 too, and Turbo Pascal! In 1980 I was writing custom comm software for the TRS-80, then Kaypro and Vector Graphic model 3. I wrote in assembly a CP/M TSR that loaded, hooked BDOS, and allowed any existing program to transparently "encrypt" to and from disk by opening files matching a selectable mask (like !???????.???) so you could save "encrypted" docs from any app like WordStar. I use "encrypt" in scare quotes because it was only a snake oil XOR proof of concept... I had DES flowcharted out but saw right away that a proper implementation would murder the TSR's size, and throughput with a 4MHz Z80 would be dismal.

Too bad, I lost interest. Only a few years later, with 16-bit MS-DOS and faster CPUs, it became practical, and I could have been the first such product to market, but I was stupid.

I also had written a Turbo Pascal TSR program (using a 3rd party TSR toolkit) that was like strace on linux, writing system calls to disk in the background. I used it to troubleshoot many application program mysteries.


Get an Amstrad CPC6128, rig it out with an SD card, and experience the joy of 2019's releases for what is, arguably, one of the greater Z80 systems of the era.


Back in the 80s I had a TRS-80 with a 5-inch floppy drive, and an Amstrad CPC464 with a built-in cassette deck.

Having no patience to wait for games to load, I linked the two together via the parallel printer interfaces and wrote some Z80 code on either end to allow the TRS80 to store CPC464 programs.

I seem to remember only being able to use 6 of the bits for data, and using a couple of the others to signal as "clear to send" between them. The transfer was very quick, much faster than tape, but wasn't very reliable due to the dodgy connectors on the TRS80.

Using an SD card with the old micros would be miles better.


Turbo Pascal and bridge: good memories. Hunt down a version that can run.


As a lover of the Z80 and "what if" scenarios, I think this project is fantastic. Hacking in the purest form, for no other reason than you could. Someday I'd like to do something similar, but until then I'm just happy to know it's possible and live vicariously through updates.


Disclaimer: Not a hardware engineer, nor a CP/M aficionado (though I definitely remember Gary Kildall), but a former AST Research (later AST Computer) employee. AST was an "expansion board" manufacturer turned systems OEM. During the transition from memory and connectivity boards to desktops, the company's top hardware engineers explored an architecture they based on an "intelligent, arbitrated, multi-master bus." The notion was to speed up operations and increase power by designing a bus that accommodated multiple processors (on boards, of course) operating independently - arbitrated by a central CPU. It never flew - but I always thought it an elegant model. This was in the 1986 timeframe, long before GPUs became popular.


Funnily enough, I just saw an old Computer Chronicles episode the other day which talked about how slow the bus was; they had Dell showing off a 486 + graphics board blowing away the same 486 with an ISA(?) graphics card.

If you haven’t seen it already, the 8 Bit Guy has a fun video about working in AST’s tech support in the 90’s:

https://youtu.be/2hdazA-VUf0


Wow, I haven't seen that name in a while. I had a SixPakPlus back in the day.

(I couldn't remember what the sixth function was, turns out it's "new compact size." http://chiclassiccomp.org/docs/content/computing/AST/000490-...)


David Murray? Or did you work with him? Probably not, but I figured it's a shot in the dark.


Tangentially related question: how does one learn "computer architecture" or "computer engineering" as a hobbyist? I'm actually not even sure what those terms really mean.

I'm amazed that people can build homebrew computers using Z80, MOS 6502/6510, or Motorola 68k series microprocessors.

I could buy one of those homebrew Z80 kits and assemble it, but I don't think I'll gain much real understanding just from the act of assembling the kit.

I have long wondered how things work down to the bare-metal level. What does it take to be able to design (and program?) a homebrew computer? Let's say I leave out the FPGA route and only focus on using real, off-the-shelf ICs.

I'm mostly just a software guy (I write Python, I've learned automata theory at University) but I haven't learned any real electronics beyond things like Ohm's law, KCL, KVL, capacitance, impedance as a complex number. I don't even know my transistors or diodes.


This book starts you off with NAND gates and then has you build an ALU, CPU, assembler, and finally a Java-like-language compiler.

The Elements of Computing Systems / From NAND to Tetris https://www.nand2tetris.org/

The first six textbook chapters (all the hardware stuff) are available on the website in the 'Projects' tab; you should buy the book for the other six. All 12 chapter assignments are available on the website. The Software section has a hardware simulator to test your designs.


I'd like to second this -- I'm slowly making my way through Nand2Tetris (currently building a VM -> assembly transpiler, and if you'd asked me to do that a few months ago, I'd have had no idea what you were talking about), and it's been really fun, taught me a lot about low-level computing, and given me the opportunity to learn a new language (I'm building everything in C). I highly recommend it!


I think you'd find that once you dig into those old machines it's a lot simpler than you think. The 6502 for example is pretty easy to interface. Electronics knowledge isn't much needed at first, as it's mostly just logic. You can make a simple "computer" on a breadboard that just cycles a no-op through all of memory: https://coronax.wordpress.com/2012/11/11/running-in-circles/

From there it's not a tall order to start interfacing SRAMs and some EPROMs and a UART chip and you can get a machine you can connect to over serial.

Just try! It's worth it!


The Z80 is very easy to drive on a breadboard - since it just needs a clock. Add a few LEDs to show address-line accesses and you're done:

https://www.youtube.com/watch?v=nFIviiwPrLI

The Z80 has RAM-refresh support built-in, and can be hand-clocked if you wish, making it one of the simplest processors to run - in terms of required external components.


The hand clocking is the magic of the Z80 for me. That you can manually toggle a button to step through every tiny change is so cool.


You should check out https://eater.net/ and https://www.nand2tetris.org/. Both break down a lot of the complexity around building a homebrew computer using off-the-shelf components.


Ben Eater's Youtube series building a simple CPU from scratch was a real "lightbulb" moment for me. After 40 years of programming (including Z80 assembly back in the 8-bit days), it took my understanding of the CPU from a mysterious black box to what's basically a simple look-up table and some control signals.


There are some popular textbooks on computer architecture; Patterson's stuff is good, but there are better texts. Some techno-hobbyist sites (like Ars Technica) do CPU reviews that get into various microarchitectures. Datasheets from processor manufacturers are free and often have interesting details.

Going after things at a really low level, books like The Art of Electronics take you from transistors (and through things you probably don't care about, like power supplies and amplifiers) to digital electronics, to simple computer system design. You can probably find value by skimming the component-level stuff, skipping the goopy analog chapters, and just reading the digital electronics sections. (This is more or less how I started with personal computers in 1974 or so).


Could you elaborate on the "better texts"?

Can you give some examples and why/in what respect they are better than (David) Patterson's books? Or are those "better texts" the Ars Technica CPU reviews and manufacturer datasheets that you mentioned in the subsequent sentence?


I liked John Shen's Modern Processor Design and Noam Nisan's The Elements of Computing Systems.

The websites I mentioned have hobbyist-level details that are pretty interesting (but in general won't teach you much about things like concurrency hazards and branch prediction and so forth).


Also read the book: `Code` by Charles Petzold to complement the material from NAND to Tetris.


Back in the day there were a surprising number of good technical books to learn from. I kicked off with two books from Tandy (Radio Shack): [0]

The TRS80 Technical Reference Manual (1978). This had the full schematics of the computer, explanations, timing diagrams, fault finding tips etc.

Understanding Microprocessors (1979). This taught basic TTL gates, how flip-flops worked, shift-registers, adders and all the other basics.

In my opinion learning this stuff gives you a better understanding of coding, especially with low level languages.

[0] http://www.trs-80.com/wordpress/books/140213/


I happen to be a computer architect both professionally and as a hobbyist, but the basics of it really aren't that complicated. All of the _hard_ parts are people being terribly clever to eke out the last bit of performance in a given technology. Getting any modern <$100 FPGA board and designing a really simple processor is definitely do-able as a complete novice. I haven't read it, but I've heard good things about the Nand2Tetris book someone else in this thread mentioned.


> All of the _hard_ parts are people being terribly clever to eke out the last bit of performance in a given technology.

Is that a skill that's related to how you do optimisation in certain puzzle games? E.g. TIS-100, Opus Magnum, Factorio. Or not really?


I don't see a reply to this and I'm not sure if one is forthcoming, but just on the off chance there isn't: at that level everything is analog, and when you're optimizing for multiple variables at once (power consumption, clock speed, reliability, gate count) it gets complicated in a hurry.

The big breakthroughs were the ability to realistically simulate hardware for larger circuits at higher frequencies, to determine how to make those circuits stable rather than have them oscillate wildly at the frequencies they operate at. Once you develop a bit of appreciation for what is required to make 100 gates work reliably at 10 MHz or so, you can begin to understand the black magic required to make billions of transistors work reliably at GHz frequencies.

So the skill is not at all related to puzzle games, but it is a puzzle and the tools and knowledge required were usually hard won. At those frequencies and part sizes anything is an antenna, there is no such thing as a proper isolator, coils, resistors and capacitors all over the place even if you did not design them into the circuit (parasitics).

If this sort of stuff interests you I highly recommend the 'SPICE' tutorials, and/or getting an oscilloscope and some analog parts to play with.


Is that what “high speed digital design” is? Like the stuff that Dr Howard Johnson teaches? Or is that not the same scope or maybe still something entirely different?


"High Speed Digital Design" generally refers to PCB level design. The Black Magic books are pretty old, covering ancient stuff like DIP packages on manhattan-routed boards. These days the keyword is SI/PI (Signal Integrity, Power Integrity).


I'm not familiar with that book but I just looked at some previews and it seems to be exactly what I'm getting at. I built a large number of radio transmitters in my younger years (long story) in the 100 MHz range; that was all analog, so I had a pretty good idea of what it was like to design high-frequency stuff, or so I thought. Then I tried to do a bunch of digital circuits at 1/10th that frequency and even after only a handful of components in a circuit you'd get the weirdest instabilities. From there to the point that you can reliably design digital circuitry is a fascinating journey and gives you infinite respect for what goes on under the hood of a modern-day computer.


Yes, and a reason for its subtitle "A Handbook of Black Magic" :). In the olden days EEs would more or less guess by intuition and a heavy sprinkling of randomly placed pullups/pulldowns/capacitors to force designs into stable working order.

https://hackaday.com/2019/01/24/video-putting-high-speed-pcb...

Don't remember the exact video, but Bill Herd has mentioned many times the on-site, last-minute fab fixes that involved prodding the product on a hunch of where the problem might be.

Today you can simulate and measure pretty much everything, plus automated design rule tools will warn you of potential problems beforehand.


As a skill, I think it's just called "engineering in a competitive market." Anything you can do to make your chip more attractive to customers is a win, so designers go to great lengths to do so. Things like caches, pipelining, branch predictors, register renaming, out-of-order execution, etc. aren't in any way fundamental to how processors work; they're just tricks to make your program run a few percent faster. Designing a simple CPU in an FPGA that executes everything in a single cycle and uses on-chip block RAM is probably only a few hours of work for a novice. It would probably take you longer to download and install the FPGA software!


A few responses point to Ben Eater's videos. I have only a rudimentary understanding of electronics. I built my first 6502 computer following his first two videos. Ben's able to strip away a lot of things you don't need to know immediately to get up and running. Supplementing this with reading sites like 6502.org has built a lot of confidence for me to go beyond what Ben's released so far.


There are a lot of good videos on YouTube, such as "Hello, world" from scratch on a 6502:

https://youtu.be/LnzuMJLZRdU


I'm pretty sure you don't need that beefy of an FPGA board to accomplish this; there are cheap Chinese Artix-7 100T boards with 100,000 LUTs, almost 512K of block RAM, and 256MB of DDR3 RAM available for under $100 USD; I doubt a Z80 core would take more than a couple thousand LUTs.

For my own project I have a PicoRV32 RISC-V core running at 150 MHz, with HDMI output, character set generator, sprite graphics generator, bitmap graphics output, and a UART all in under 8000 LUTs...


From the article:

    > As I’ve mentioned before, all of the logic so far is 
    > pretty lightweight, occupying a mere 7% of on-chip 
    > logic resources
The author is well aware the board is overkill. He also mentions elsewhere he used it because he happened to have it on hand, not because it's a particularly good fit.


You can score comparable Kintex-7 Chinese dev boards for under $200: https://www.eevblog.com/forum/fpga/a-xilinx-kintex-7-board-o...


The lovely plywood case reminds me of the luxurious Z80-based NorthStar Horizon! (Née "Kentucky Fried Computers".)

https://en.wikipedia.org/wiki/North_Star_Horizon

https://en.wikipedia.org/wiki/North_Star_Computers


Nothing says “the 1970s” quite like wood paneling :-D


This really reminds me of TIS-100 which is a game by Zachtronics where you inherit a weird computer with a bunch of minimalistic cores that you have to program in assembly language. I really enjoyed playing it for a while until it started feeling too much like real work.


A small part of me was hoping this would contain 16x physical through-hole Z80 chips. Alas, 'tis FPGA.


As far as I know, the fastest production Z-80 chip (the Z-80H) ran at 8MHz. This one runs at 84MHz per CPU!

I switched from 8-bit (Z-80B) hardware to 16-bit (i386) hardware once the 16-bit PC could emulate the Z-80 faster than any physical Z-80 processor could run. I feel fortunate to have skipped the 8088 and 80286 PCs which all ran slower than my Z-80B for most things.

I kept one of my old CP/M machines in my parent's garage for 10 years, and then tried to resurrect it. It was an IMSAI 8080 with CompuPro hardware that I had modified to allow the front panel to work while allowing the system to run at 4MHz. I had also implemented software configurable baud rate and start/stop/parity selection by hacking the hardware and customizing the BIOS.

Unfortunately when I resurrected it, it booted and ran for only a short time before the ST851 hard drives scraped the oxide coating from the system disks. (I destroyed all of the backups while troubleshooting.)

I gave the whole system away before I discovered that they are actually still quite valuable, but I have no regrets. I much prefer a more modern computer.


If you expand that to Z80-compatible chips, the fastest was (is, possibly, as I think it's still being made) the Zilog eZ80 at 50 MHz -- which, given that it runs about three times as fast as the Z80 at the same clock frequency, is like having a 150 MHz Z80.

https://en.wikipedia.org/wiki/Zilog_eZ80

My TRS-80 Model 4 back in the day (circa 1988, I think) had been upgraded with a HD64180 CPU, which apparently could run at least at 10 MHz -- although I'm pretty sure the third-party expansion board I was using ran it at 6.144 MHz.


You may be aware, but to add to your comment...

Good ol' Z80s are still in active production as well, and you can buy a 20MHz chip in a DIP, LQFP, or PLCC package.

Anyone who is interested should check out the RC2014 and retrocomp google groups. Lots of discussion there, and there are a few people selling Z80 and Z180 computer kits.

https://groups.google.com/forum/#!forum/rc2014-z80 https://groups.google.com/forum/#!forum/retro-comp


> I switched from 8-bit (Z-80B) hardware to 16-bit (i386) hardware once the 16-bit PC could emulate the Z-80 faster than any physical Z-80 processor could run.

Isn't the i386 the first 32-bit x86 processor, not 16-bit?


The 386SX was a 16-bit CPU capable of 32-bit operations. Think of it in the same context as the 8088 vs. 8086.


As I recall, it was a low-cost 32-bit CPU essentially identical to the original 80386 (subsequently rebranded 80386DX to further distinguish it from the SX), which was introduced several years earlier and is the main CPU people mean when they say i386 without qualification; the difference was that the SX had a 16-bit data bus and a 24-bit address bus.

> Think of it in the same context as the 8088 vs. 8086.

Yes, both of those are 16-bit; the 8088 is a later low-cost version with an 8-bit data bus but an identical execution unit to the 8086.


It is the other way around: the 386SX is a 32-bit chip with a 16-bit data bus (plus the jiggery-pokery to convert one 32-bit request from the internals into two 16-bit ones) and a 24-bit address bus, so it could be used on cheaper boards originally designed for 286s with minimal modifications. It is still considered a 32-bit chip.

Same for the 8088 - that was still a 16-bit chip like the 8086 (almost identical, in fact), but with a smaller (8-bit) external data bus so 16-bit requests to/from memory had to be split in two.


The replies to my above comment have shown that I was not at all clear about the meaning of my analogy. The 8088 was considered a 16-bit CPU, but it used an 8-bit memory bus. Thus two (or more) memory fetches were required for each instruction. The 80386SX was considered a 32-bit CPU, but it used a 16-bit memory bus. Thus the same memory bandwidth issue that applied to the 8088 applied to it as well.

Imagine an 8088 running at 4.77MHz trying to keep up with a Z-80 running at 4MHz. The Z-80 always won because of the memory bandwidth issue.
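
For a rough sense of the bandwidth argument, here is a back-of-envelope Python sketch. It compares raw external bus bandwidth only (the 8088 uses 4-clock bus cycles over an 8-bit bus; the Z80 uses 3 T-states per data read/write and 4 per opcode fetch) and ignores the 8088's prefetch queue and per-instruction timing entirely, so treat it as suggestive rather than definitive:

    # Peak bytes/second over the external data bus (very rough; prefetching,
    # wait states, and instruction mix are all ignored).
    pc_8088_bus   = 4_770_000 / 4   # 8088 @ 4.77 MHz, 4 clocks per bus cycle      -> ~1.19 MB/s
    z80_data_bus  = 4_000_000 / 3   # Z80 @ 4 MHz, 3 T-states per data read/write  -> ~1.33 MB/s
    z80_fetch_bus = 4_000_000 / 4   # Z80 @ 4 MHz, 4 T-states per opcode (M1) fetch -> ~1.00 MB/s
    print(pc_8088_bus, z80_data_bus, z80_fetch_bus)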


We are not disagreeing with the "386SX is to 386[DX] as 8088 is to 8086" part, or the effect on memory bandwidth.

But the "386SX was a 16-bit CPU capable of 32-bit operations" is simply an incorrect statement, which we are correcting. The 386SX was a 32-bit CPU with a 16-bit data bus.

Your new Z80 comparison seems flawed too: while it supports some 16-bit operations over paired 8-bit registers, including 16-bit load & save, the Z80 is generally considered to be an 8-bit CPU (so it is an 8-bit CPU capable of 16-bit operations) and in any case it has an 8-bit data bus (see the diagram in https://www.petervis.com/electronics/CPU_Processors/Zilog_Z8..., note that there are only 8 data lines, D0 through D7) so it can't pull/push 16-bits at once externally any more than the 8088 can.


Okay. From an architecture point of view, I agree with you. The context of my post related to the databus width and the speed implications of it. Sorry I wasn't clear in my meaning.


Wrong. The 386SX is a 32-bit CPU with a 16-bit external data bus. It is fully 32-bit internally, same as the DX model.


> I feel fortunate to have skipped the 8088 and 80286 PCs which all ran slower than my Z-80B for most things.

I might believe that compared to an 8088 running at 4.77 Mhz like in the original PC but the 286 is in another league entirely.


The original IBM AT computers ran at 6MHz. (They later upped this to 8MHz.) Even at 6MHz and with a 16-bit bus, they were very close to the same performance as a 4MHz Z-80A.


Or if it was "discovered" that bending the Z80 pins inward a bit and stacking them 16 high over each other actually works, and would have worked even back in the 70s. That would be surreal.


I did that with memory on more than one occasion. Stacked more than four high, they'd tend to overheat.


Wouldn't you have to at least wire the chip select pins somewhere distinct for that to work? I know little about electronics.


Yes, exactly that, you bend out one pin per chip and wire that up to some decoding logic. But all of the other pins are parallel.


Nice - I've always wanted to run all Scott Adams adventure games at exactly the same time.


I think you meant Douglas Adams. Also, AFAIK he never released any for CP/M. Those began in the PC era.


No, he meant Scott Adams. Not the Dilbert guy.

https://en.wikipedia.org/wiki/Scott_Adams_(game_designer)

The Scott Adams adventure game series was published around 1978-83, and started on the TRS-80 Model I. I don't know if there were ever official CP/M ports, but they were certainly of that era.


Thanks for the clarification! I hacked TRS-80 Model I hardware (for a friend), but I never cared much for TRSDOS or Lifeboat so I have missed out on that software experience.

I guess this was a port of the classic Adventure game to Tandy. I had the CP/M version (550 point Colossal Cave Adventure). I had played it even earlier on RT-11 where it took about half an hour to load from two 8" floppies.


The HHGttG game was directly released for CP/M: https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_th...

‘Bureaucracy’ seemingly wasn't: https://en.wikipedia.org/wiki/Bureaucracy_(video_game)

...but like HHGttG it targets the Z-Machine, and third-party interpreters for Z-Machine files have been written for probably every platform under the sun. It's also possible that executables from HHGttG can be repurposed. This post seems to support the latter approach: https://retrocosm.net/2011/01/03/z-machine-interpreter-for-c...

Both games are maddeningly hard, btw. I don't think I finished either of them.


... and you can't really finish them without access to the original material - some game puzzles can only be solved using accessories available in the box. A gentle kind of copy restriction that didn't stop you from making backups.

Actually, it's the BBC controlling us from London.


That is not a cheap FPGA --- I couldn't find the exact model, but other Stratix IV GX parts are in the $10k range.


The author linked the development board, either $4,495 or $5,995, but he did say he got it second hand I think:

https://www.intel.com/content/www/us/en/programmable/product...


Definitely, although they're using basically none of the logic resources. You could probably get the bulk of the result with an Artix 7 and a memory controller (instead of using all those large BRAMs). Or just an Artix 7 and not allocating 64kB of BRAM for each core.

$129 for an Artix 7 development board is a steal [1].

[1] https://store.digilentinc.com/arty-a7-artix-7-fpga-developme...


Yeah, if you just replaced the 64KB of RAM with a 4KB cache backed by DRAM, you could probably fit 64 cores in one of those no problem, but 1) the performance wouldn't be quite as good, and 2) this project was mostly an exercise in excess since I was trying to find something to do with my giant FPGA board.


Sorry, to be clear, that was by no means a criticism of what you did. I was just pointing out that this kind of thing is more accessible to people than they may think when they get sticker shock re: the cost of that particular FPGA you built on.


I'm guessing that simple one-way ring IPC mechanism is going to be a bottleneck in a hurry as you add more processors.


Its current manifestation is very much a 'good enough' solution. I just wanted a scheme for simple all-to-all connectivity to map the serial ports to, and the only traffic it carries at the moment is either terminal traffic or disk data from the SD card, so it's not even close to being taxed. The obvious improvements are to add deep buffers to hide the latency of credit return, crank up the clock, and make the bus wider. This board could probably handle a 200-bit wide bus at 100 MHz+ speeds with no problems. Computer architecture is a fun hobby =)
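
For anyone wondering what "credit return" means here, the sketch below is a toy Python model of a one-way ring with credit-based flow control. Every number in it (node count, buffer depth, credit-return latency) is made up, and it is not a description of the actual ZedRipper RTL; it just shows why shallow buffers plus a long credit-return path limit throughput:

    from collections import deque

    NUM_NODES = 16        # ring size (illustrative)
    BUFFER_DEPTH = 4      # receive-buffer slots per node (illustrative)
    CREDIT_LATENCY = 16   # cycles for a credit to travel back around the ring (illustrative)

    class Node:
        def __init__(self, nid):
            self.nid = nid
            self.rx = deque()                          # received, not yet consumed
            self.credits = [BUFFER_DEPTH] * NUM_NODES  # slots we may still use at each peer

        def try_send(self, dest, payload, now, in_flight):
            """Inject a message only if the destination has a free buffer slot for us."""
            if self.credits[dest] == 0:
                return False                           # stalled, waiting for credits to return
            self.credits[dest] -= 1
            hops = (dest - self.nid) % NUM_NODES       # one-way ring distance
            in_flight.append((now + hops, self.nid, dest, payload))
            return True

        def consume(self, now, credit_returns):
            """Drain one received message and send a credit back toward its sender."""
            if self.rx:
                sender, _payload = self.rx.popleft()
                credit_returns.append((now + CREDIT_LATENCY, sender, self.nid))

    def simulate(cycles=200):
        nodes = [Node(i) for i in range(NUM_NODES)]
        in_flight, credit_returns, accepted = [], [], 0
        for now in range(cycles):
            # deliver messages and credits whose ring transit time has elapsed
            for _, src, dst, payload in [m for m in in_flight if m[0] == now]:
                nodes[dst].rx.append((src, payload))
            in_flight = [m for m in in_flight if m[0] != now]
            for _, sender, dst in [c for c in credit_returns if c[0] == now]:
                nodes[sender].credits[dst] += 1
            credit_returns = [c for c in credit_returns if c[0] != now]
            # node 0 streams data to node 8; every node drains whatever arrived
            if nodes[0].try_send(8, f"byte-{now}", now, in_flight):
                accepted += 1
            for n in nodes:
                n.consume(now, credit_returns)
        print(f"accepted {accepted} sends in {cycles} cycles "
              f"(limited by buffer depth vs. credit-return latency)")

    simulate()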


I’m just amazed at how many people dig the Z80 as much as I do. It’s so cool seeing all these projects on my favourite CPU.

I mean this stuff would have seemed absolutely incredible to my 18-year-old self. And so very cool to see a reference to MP/M. Though I seem to recall needing to use 8080 mnemonics to work on MP/M?


I bet a lot of us got our start doing low-level programming for the Z80. I remember learning to program assembly from the manual for a TI calculator. I'm pretty sure those things are still ubiquitous. It's probably the easiest way to learn assembly programming on real hardware anymore.


Having no other resources, as a kid I sent Zilog a letter asking for a copy of their Z80 docs. Probably wrote it in crayon.

Bless them, they sent it, and that's how I got started learning microprocessor architecture.


I have the original Z80 programmer's quick reference around at home, still.

If I was smart enough to know how to post a photo to HN then I would do so :)


This makes me smile!


Me too. It was essentially this, albeit a very early version: http://www.z80.info/zip/z80cpu_um.pdf

May the gods help any kid that tries to learn Intel CPU architecture that way these days. :-)


"Well Johnny, the first thing to understand is how time travel works. You see, our CPUs guess at what you wanted to do next and start executing the pipeline before they know for sure, and sometimes it's wrong, and..."


Received my first home computer, a Sinclair ZX Spectrum, in 1982 or so. No working cassette drive in the house, so of course I started reading the manual. That taught me BASIC. Later I learned assembly language programming, partly via the manuals, and partly via some books at the local library.

I'll always have a soft-spot for Z80 assembly, and have started to work on raw Z80 processors myself now - after spending the past year or two playing with Arduino and ESP8266 devices.


Sounds a lot like me in the 80s. Hard to find information. Came across random books in the library.

My latest experience is with AVR assembly. What is it about Z80 that makes you prefer it? Wider variety of practical instructions? Cleaner design?

This blog got me curious about looking into Z80. Not a chip I ever learned anything about. I started out as an Amiga guy and I would occasionally come across books and stuff about 6502. Z80 is just a name I heard from time to time.


The Z80 I enjoy because of nostalgia, largely. I started with a Z80-based system:

https://blog.steve.fi/how_i_started_programming.html

The instruction set is simple, and clean. The fact that you don't need much supporting circuitry to drive it is a nice bonus, but mostly I remember coding the thing when I was 12 or so, and now I know what I'm doing for real :)


I have been a bit fascinated by this, but perhaps you could explain to me what fascinates you about Z80. Just for reference I am a Motorola 68000 fan, mostly because I grew up with Amiga. My first computer was an Amiga 1000. After Amiga Basic some of the first programming I did was 68000 assembly.

I really liked it, and I absolutely hated x86 assembly when I tried that later. x86 seemed so quirky to me, with its segment registers, and the naming of the assembly instructions just didn't seem as nicely organized as on the 68000, which almost felt like a high-level language in comparison.

x86 turned me off assembly for years. I have gotten some interest again these last years by playing around with AVR. I like AVR for the simplicity and low price.

Perhaps you could sell me on Z80, how does it compare to using AVR chips?


I love the M68K, in fact I almost said it was my favourite, but I think the Z80 taught me more.

Like the 68K, the Z80 had great mnemonics and was really easy to understand. In many ways I think it was at least a spiritual predecessor to the 68K. I mean, the Z80 was an 8-bit CPU and the 68K was a much more capable 16-bit CPU. My guess is that if you liked the 68K you'd probably like the Z80.

Intel mnemonics always sucked :) 8086 and 80286 put me off assembly language too.

I don't know the AVR... don't get to play with CPUs much any more :(


MP/M ran fine on the Z-80. Only the mnemonics were different between Intel and Zilog. The byte-level machine code of the 8080/8085 would run just fine on the Z-80.


I was once blessed with a Z80 C compiler, hosted on MS-DOS but targeting CP/M (maybe it was ported for use with embedded targets). Its output was 8080 assembler with the odd DB here and there when it wanted to emit a Z80 instruction. It shipped with a perfectly good Z80 assembler, mind; the compiler just emitted 8080 mnemonics because reasons.


I always thought that the Zilog mnemonics were better than Intel's. I learned Intel first, but immediately recognized that Zilog's were better once I became aware of them.

The only C compiler I had on CP/M was BDS C. It was fairly limited and had no floating point support whatsoever. Maybe that's the one you're thinking of?


This one was called Aztec and Manx C, depending on what part you looked at. It lacked bit field struct members and I think a couple of other features but don't remember now.

I learned Zilog first and agree with you. I had to keep an 8080 manual handy for a while to make any sense out of that compiler output.

Good(?) times.


I used Aztec C for the 68k on some embedded projects for work in the late 80's. The 68k version was a full-featured C implementation complete with IEEE-754 floating-point math and bit fields. I liked the fact that Aztec included the source code for parts of their libraries so one could write their own drivers for stdio devices. I still have that code somewhere...


Yes - but as a Z80 programmer it was hard to use the 8080-oriented tooling on the MP/M machines. I mean there was a direct correspondence from 8080 to Z80 but man... 8080 was ugly.


Only slightly related, but I did a test of my emulator front end a while ago running several hundred ZX Spectrums: https://youtu.be/BjeVzEQW4C8


That is pretty amazing! I managed to get four emulator instances running CP/M inside of Unreal Engine. Could have added more but I'd have needed to build a bigger spaceship.

https://imgur.com/Cv8HGIz

Close up of disk drives and an instance running Wordstar:

https://i.imgur.com/rIY1he8.png


Is your project inspired by 0x10c? The graphics look amazing. I ended up using Irrlicht which... does not look amazing... but it has support for dynamically loading arbitrary 3D models, which was more important to me for now since I am focused on Lua scripting for it.

I added a physics engine and started working on a demo with a jet/ship thing controlled with a C64 assembler program but it had a lot of work left and I will have to go back to it after I get more basic stuff going.


Yes partially inspired by 0x10c, I used to post my updates on that subreddit a year or two back. Thank you for the compliment.

I integrated color VT100/ANSI support into the terminal emulator and had basic TCP/IP networking working with a simple telnet+zmodem client as well. So you could log into BBSs on the Internet and download files to the CP/M systems:

https://i.imgur.com/6V97v57.png

https://i.imgur.com/fhqef4c.png

https://i.imgur.com/K44GT1W.png

Mine was really a prototype; since then I've been working on a 68000-based emulator and an OS for it, which I might integrate back into UE4 one day. The other issue I ran into was how to go about simulating the spaceflight part of it. Somebody gave me a pointer about that earlier today though, so I might have to get back into it.


That's brilliant! I could recognize Abu Simbel Profanation, and maybe Elite? My first steps learning programming were Basic and then Z80 assembly language on a ZX Spectrum...


Any plans for VR support?


Originally VR was the whole idea, but for now I am focused on finishing up the Windows desktop alpha version and getting some kind of interest from other programmers or users. Going to do another release within a few weeks. If I can see any evidence of maybe 1-5 people out of the billions on the planet playing around with it, that may be enough encouragement to get into the VR stuff. So far there is no certain evidence that anyone other than my brother launched the program.


I will be launching the program sometime tomorrow for sure.


OK don't put too much time into it if you have problems. The version on the website is ancient and may be a bit crap. New one coming within a week or two.


This thread should be summarized and reposted as an article by itself. So much interesting stuff here.


How does the speed measure up to theoretically emulating 16 Z80 cores on a Raspberry Pi?


https://floooh.github.io/2017/12/10/z80-emu-evolution.html Based on this article I think RPi will not handle cycle accurate emulation at this level of performance, but it could easily do instruction level emulation.
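
To make the distinction concrete, instruction-level emulation is basically just a dispatch loop like the (heavily simplified) Python sketch below. It implements a handful of real Z80 opcodes and nothing else: no flags, no T-state counting, no memory contention, which is exactly why it is so much cheaper than cycle-accurate emulation:

    def run(memory):
        """Toy instruction-level interpreter for a tiny subset of Z80 opcodes."""
        a, pc, halted = 0, 0, False
        while not halted:
            op = memory[pc]; pc = (pc + 1) & 0xFFFF
            if op == 0x00:                          # NOP
                pass
            elif op == 0x3E:                        # LD A,n
                a = memory[pc]; pc = (pc + 1) & 0xFFFF
            elif op == 0x3C:                        # INC A (flags not modelled)
                a = (a + 1) & 0xFF
            elif op == 0xC3:                        # JP nn (little-endian address)
                pc = memory[pc] | (memory[pc + 1] << 8)
            elif op == 0x76:                        # HALT
                halted = True
            else:
                raise NotImplementedError(f"opcode {op:#04x}")
        return a

    # LD A,0x41 ; INC A ; HALT  -> prints 0x42
    ram = bytearray(65536)
    ram[0:4] = bytes([0x3E, 0x41, 0x3C, 0x76])
    print(hex(run(ram)))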


Pi 4 is quad core 1.5 GHz. One core could probably emulate at least 400 MHz of Z80. Four cores should handle the full load.


Loved the Z80 on my Amstrad CPC 464! I used an assembler typed in by hand from a magazine.

Geekiest accomplishment: A disk loader for Spindizzy. The game was too large for the main memory with the disk driver loaded.

Had to use the video buffer to load the game in two parts and splice them together (overwriting the disk driver). Fun times!


I would love to have a computer based on the Z80. Some of my happiest times were with the ZX Spectrum. If I had a laptop with a reasonable screen and keyboard, and text mode (some kind of Arch Linux or something), it would be perfect.

I would not pay $1000 for the experience, but some more reasonable amount, totally.


This is kind of what you are after... But it is 6502 not Z80.

https://www.c64-wiki.com/wiki/Commander_X16


Someone needs to recompile Zork to make use of this; just imagine the tool-assisted speedrun you could get ;)

Zork TAS https://www.youtube.com/watch?v=cIXUSyC9dpM


"Future Plans (...) Joystick-like pointing device" - oh man, laptop with joystick instead of trackpad would be so weirdly cool - like a gadget from Kung Fury, awesome.


TrackPoint is basically a joystick.


Ted Selker, who developed the Trackpoint at IBM Almaden Research Lab, always calls it the "Joy Button", but IBM just wouldn't go with that. But at least they made it red (thanks to industrial designer Richard Sapper)!

TrackPoint (2011) (microsoft.com) [Buxton Collection]:

https://news.ycombinator.com/item?id=9437780

https://www.microsoft.com/buxtoncollection/detail.aspx?id=60

Some stuff I wrote about the Trackpoint and relative input devices in general:

https://news.ycombinator.com/item?id=18340403

>Input devices should allow users to configure a velocity mapping curve (which could be negative to reverse the motion). X-Windows had a crude threshold-based mouse acceleration scheme, but it's better to have an arbitrary curve, like the TrackPoint uses, that can be optimized for the particular user, input device, screen size, and scrolling or pointing task at hand.

>https://wiki.archlinux.org/index.php/Mouse_acceleration

>One of the patented (but probably expired by now) aspects of the Trackpoint is that it has a very highly refined pressure=>cursor speed transfer function, that has a couple of plateaus in it that map a wide range of pressures to one slow or fast but constant speed. The slow speed notch is good for precise predictable fine positioning, and the fast speed notch is tuned to be just below eye tracking speed, so you don't lose sight of the cursor. But you can push even harder than the fast plateau to go above the eye tracking plateau and flick the cursor really fast if you want, or push even lighter than the slow plateau, for super fine positioning. (The TrackPoint sensor is outrageously more sensitive than it needs to be, so it can sense very soft touches, or even your breath blowing on it.)
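
A toy version of a plateaued transfer function might look like the Python below. Every threshold and speed in it is invented purely for illustration; the real TrackPoint curve is empirically tuned and not public:

    def cursor_speed(force):
        """Map normalized stick force (0.0-1.0) to cursor speed (pixels per tick)."""
        if force < 0.05:
            return 0.0                        # dead zone below the sensing threshold
        if force < 0.25:
            return force * 8                  # gentle ramp for super-fine positioning
        if force < 0.45:
            return 2.0                        # slow plateau: predictable fine moves
        if force < 0.60:
            return 2.0 + (force - 0.45) * 80  # ramp between the plateaus
        if force < 0.85:
            return 14.0                       # fast plateau: just below eye-tracking speed
        return 14.0 + (force - 0.85) * 400    # push harder still to flick the cursor

    for f in (0.1, 0.3, 0.5, 0.7, 0.95):
        print(f"force {f:.2f} -> {cursor_speed(f):5.1f} px/tick")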

Ted Selker demonstrated an early version of the "pointing stick" in 1991:

https://www.youtube.com/watch?v=6hhnlaUxsL8

Ted always reminds me of another distinguished "Button Man": Mr. Lossoff from Nero Wolfe's "Mother Hunt". Those Button Men can be pretty intense!

https://youtu.be/UBOG5556_0g?t=12m5s


> it's better to have an arbitrary curve, like the TrackPoint uses, that can be optimized for the particular user

Maybe you could even learn the curve by counting pointing error (measure the overshoot before clicking), and maybe make it two dimensional (i.e. the sensitivity and acceleration are different based on the direction).

I use a program called BrightML to do automatic screen brightness control. It's innovative in that it accounts not only for time of day, location, and ALS if you have it, but also for which application is focused when you set the brightness. If there is an application which you always turn down the brightness for when it's focused, it'll turn it down in anticipation.

A similar thing could probably be done for the TrackPoint acceleration.
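
A crude sketch of what "learn the curve from overshoot" could mean in practice; this is entirely hypothetical and not how any shipping driver works, but it shows the feedback loop:

    import math

    gains = {"h": 1.0, "v": 1.0}   # separate horizontal/vertical sensitivity
    LEARNING_RATE = 0.02

    def update_gain(axis, travel, target_distance):
        """Call once per completed pointing gesture (movement followed by a click)."""
        overshoot = (travel - target_distance) / max(target_distance, 1.0)
        # positive overshoot -> pointer too fast, negative -> too sluggish
        gains[axis] *= math.exp(-LEARNING_RATE * overshoot)
        gains[axis] = min(max(gains[axis], 0.25), 4.0)   # keep the gain sane

    # e.g. a user who consistently overshoots horizontal targets by ~30%:
    for _ in range(50):
        update_gain("h", travel=130.0, target_distance=100.0)
    print(gains)   # the horizontal gain drifts downward, the vertical one stays put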


That's a great idea, especially since the ideal profile would differ between different apps (or different parts of the same app, like drawing area -vs- toolbar -vs- text editor).

Ted wrote some amazing stuff about adaptive curves and other cool ideas he tried, at the end of his whirlwind exposition to Bill Buxton:

https://www.microsoft.com/buxtoncollection/detail.aspx?id=60

>The ones that got away are more poignant. I designed adaptive algorithms we were exploring- we were able to raise the tracking plateau by 50% for some people, and if we had had the go ahead we could have made that stable and increased performance tremendously. It seems some people used tendon flex to improve pointing- we found this looked like overshoot and we had trouble making the adaptive algorithms stable in the time box we gave it. We made a special application that was a game people played to improve their pointing –as their game play improved the transfer function improved too! We made a surgical tool that used a Trackpoint to allow tremor-free use of a camera from a laparoscope. We made a selector for the FAA to do ground traffic control that saved a multihundred million dollar contract for IBM with the government. I would have loved the product to change cursor movement approach for form filling, text editing and graphical applications; again that could probably double performance. Yes, I still want to do it NOW. We built a gesture language into the Trackpoint that can be accessed in the firmware; the only aspect the driver exposes is Press to Magnify or Press to Select. We created probably two dozen haptic feedback Trackpoint designs. They improved novice performance and were loved by the special needs community. I did a preliminary study that showed how novices selected faster with it; the product group saw no need to spend the money for that. We made a science experimental test bed to teach physics that never shipped; we made many versions of multi-Trackpoint keyboards that never shipped. We made many other things too- a versatile pointing device for the table called Russian tea mouse allowed for full hand, thumb, finger, or in-the-palm use. We made pen-like stalks that allowed selection without taking hands off the keyboard. We made devices that used one set of sensors to run two input devices. We made an electromechanical design used by one special user. We found that brushing the top instead of pressing it could give amazing dynamic range, at the expense of having to cycle the finger for long selections. The joystick for this had no deadband, it had an exquisite sensitivity and control … we never made an in-keyboard device that shipped with this alternative set of algorithms and scenario. I designed better grippy top ideas that never made it; also better sensitivity solutions that never made it too. And I hate to say it but there are many other improvements that I made or would like to make that I could elaborate further on but will stop here…..


The relevant xkcd: https://xkcd.com/243/


Vis-à-vis mamelons, Ted actually built a prototype Thinkpad keyboard with TWO trackpoints, which he loved to show at his New Paradigms for Using Computers workshop at IBM Almaden Research Lab.

While I'm not sure if this video of the 1995 New Paradigms for Using Computers workshop actually shows a dual-nippled Thinkpad, it does include a great talk by Doug Engelbart, and quite a few other interesting people!

https://www.youtube.com/watch?v=oD9NUWDCyHI

The multi-Trackpoint keyboard was extremely approachable and attractive, and everybody who saw them instantly wanted to get their hands on them and try them out! (But you had to keep them away from babies.) He made a lot of different prototypes over time, but unfortunately IBM never shipped a Thinkpad with two nipples.

That was because OS/2 (and every other contemporary operating system and window system and application) had no idea how to handle two cursors at the same time, so it would have required rewriting all the applications and gui toolkits and window systems from the ground up to support dual trackpoints.

The failure to inherently support multiple cursors by default was one of Doug Engelbart's major disappointments about mainstream non-collaborative user interfaces, because collaboration was the whole point of NLS/Augment, so multiple cursors weren't a feature so much as a symptom.

Bret Victor discussed it in a few words on Doug Engelbart that he wrote on the day of Engelbart's death:

http://worrydream.com/Engelbart/

>Say you bring up his 1968 demo on YouTube and watch a bit. At one point, the face of a remote collaborator, Bill Paxton, appears on screen, and Engelbart and Paxton have a conversation.

>"Ah!", you say. "That's like Skype!"

>Then, Engelbart and Paxton start simultaneously working with the document on the screen.

>"Ah!", you say. "That's like screen sharing!"

>No. It is not like screen sharing at all.

>If you look closer, you'll notice that there are two individual mouse pointers. Engelbart and Paxton are each controlling their own pointer.

>"Okay," you say, "so they have separate mouse pointers, and when we screen share today, we have to fight over a single pointer. That's a trivial detail; it's still basically the same thing."

>No. It is not the same thing. At all. It misses the intent of the design, and for a research system, the intent matters most.

>Engelbart's vision, from the beginning, was collaborative. His vision was people working together in a shared intellectual space. His entire system was designed around that intent.

>From that perspective, separate pointers weren't a feature so much as a symptom. It was the only design that could have made any sense. It just fell out. The collaborators both have to point at information on the screen, in the same way that they would both point at information on a chalkboard. Obviously they need their own pointers.

>Likewise, for every aspect of Engelbart's system. The entire system was designed around a clear intent.

>Our screen sharing, on the other hand, is a bolted-on hack that doesn't alter the single-user design of our present computers. Our computers are fundamentally designed with a single-user assumption through-and-through, and simply mirroring a display remotely doesn't magically transform them into collaborative environments.

>If you attempt to make sense of Engelbart's design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart's intent. Engelbart hated our present-day systems.


For what it's worth, supporting multiple cursors on any 90s or later system, save for dragging/selection, is pretty much trivial. If they're just pointing and clicking, then you can use pointer warping and overlays/sprites to create the effect.


You can support multiple mice and other input devices by talking to the USB driver directly instead of using the single system cursor and mouse event input queue, and by drawing your own cursors manually. (And DirectX / DirectInput lets you do stuff like that on Windows.) But you have to forsake all the high-level stuff that assumes only one cursor, like the user interface toolkit. That is what Bret Victor meant by "bolted-on hack that doesn't alter the single-user design of our present computers".

That's ok for games that implement their own toolkit, or use one you can easily bolt on and hack, but not for anything that needs to depend on normal built-in system widgets like text editors, drop-down menus, etc. (And if you roll your own text fields instead of using the system ones, you end up having to reimplement all kinds input methods, copy/paste/drag/drop, and internationalization stuff, which is super-tricky, or just do without it.)

I implemented multi-player cursors in SimCityNet for X11 (which opened multiple X11 displays at once), but I had to fix some bugs in TCL/Tk for tracking menus and buttons, and implemented my own pie menus that could handle being popped up and tracking on multiple screens at once. TCL menus and buttons were originally using global variables for tracking and highlighting that worked fine on one X11 display at once, but that caused conflicts when more than one user was using a menu or button at the same time, so it needed to store all tracking data in per-display maps. It doesn't make sense to have a global "currently highlighted button" or a "current menu item" since two different ones might be highlighted or selected at once by different people's cursors.
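
In Python terms, the refactor amounts to something like the sketch below (the names are invented, and the real code was TCL/Tk): tracking state lives in a map keyed by display, one entry per connected user, instead of in module-level globals:

    tracking = {}   # display_id -> {"menu": ..., "highlighted": ...}

    def post_menu(display_id, items):
        tracking[display_id] = {"menu": items, "highlighted": None}

    def highlight_item(display_id, item):
        # Only this user's pop-up changes; another display's menu is untouched.
        tracking[display_id]["highlighted"] = item

    def unpost_menu(display_id):
        tracking.pop(display_id, None)

    # Two users interacting with their own pop-up menus at the same time:
    post_menu("display-a", ["Residential", "Commercial", "Industrial"])
    post_menu("display-b", ["Bulldozer", "Road", "Rail"])
    highlight_item("display-a", "Commercial")
    highlight_item("display-b", "Road")
    print(tracking)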

And I implemented special multi-user "voting" buttons that required unanimous consent to trigger (for causing disasters, changing the tax rate, etc).

I had to implement multi-user cursors in "software" by drawing them on the map manually, instead of using X11 cursors.

I wrote about that earlier in a discussion about Wayland and X-Windows:

https://news.ycombinator.com/item?id=19732733

And it's in the direction of multi-user collaboration that X-Windows falls woefully short. Just to take the first step, it would have to support separate multi-user cursors and multiple keyboards and other input devices, which is antithetical to its singleminded "input focus" pointer event driven model. Most X toolkits and applications will break or behave erratically when faced with multiple streams of input events from different users.

>https://tronche.com/gui/x/xlib/input/XGrabPointer.html

For the multi-player X11/TCL/Tk version of SimCity, I had to fix bugs in TCL/Tk to support multiple users, add another layer of abstraction to support multi-user tracking, and emulate the multi-user features like separate cursors in "software".

Although the feature wasn't widely used at the time, TCL/Tk supported opening connections to multiple X11 servers at once. But since it was using global variables for tracking pop-up menus and widget tracking state, it never expected two menus to be popped up at once or two people dragging a slider or scrolling a window at once, so it would glitch and crash whenever that happened. All the tracking code (and some of the colormap related code) assumed there was only one X11 server connected.

So I had to rewrite all the menu and dialog tracking code to explicitly and carefully handle the case of multiple users interacting at once, and refactor the window creation and event handling code so everything's name was parameterized by the user's screen id (that's how you fake data structures in TCL and make pointers back and forth between windows, by using clever naming schemes for global variables and strings), and implement separate multi-user cursors in "software" by drawing them over the map.

Multi Player SimCityNet for X11 on Linux:

https://www.youtube.com/watch?v=_fVl4dGwUrA

>Demo of the latest optimized Linux version of Multi Player SimCity for X11. Ported to Unix, optimized for Linux and demonstrated by Don Hopkins.

X11 SimCity Pie Menus:

https://www.youtube.com/watch?v=Jvi98wVUmQA

Multi-user menu tracking (added "@$screen" parameterizations):

https://github.com/SimHacker/micropolis/blob/master/micropol...

Opening multiple X11 displays (multiple toplevel "head" windows per screen, each with a unique id, using $win parameterization):

https://github.com/SimHacker/micropolis/blob/master/micropol...


Not really; for example, context menus in Xorg are modal windows, so it would be very hard to show two menus at the same time.


That's right. Implementing multiple cursors in X-Windows, Windows or MacOS is anything but "prettymuch trivial". You basically have to re-implement parts of the window system and user interface toolkit yourself, and throw away or replace all the (extremely non-trivial) services and features of all the code you're replacing (including extremely tricky bits like input methods, internationalization, drag and drop, cut and paste, etc).


Slight topic derail: I recently handed a ThinkPad to a person who had grown up using tablets and laptops with "modern" touchpad pointing devices.

It's my older thinkpad I keep around to ssh into things. They were completely and utterly surprised by the presence of the trackpoint and found it to be extremely unusual.


At least for me, TrackPoint is one of those things that once it "clicks" is extremely hard to give up. Working on a laptop without a TrackPoint always feels clumsy and inelegant, even with all the improvements trackpads have seen. There's just something about the combination of a mechanical pointer and where it's placed relative to the keyboard that feels right.

All IBM's original patents on TrackPoint must surely have expired by now, so it always surprised me that we didn't see other OEMs start offering it as an option. There are a couple that have, but only on a few models, and by all accounts non-IBM/Lenovo TrackPoint implementations aren't quite as good for some reason. Disappointing!


I would gladly accept the job of reeling in the golden age of the TrackPoint at Dell; and I don't think it would be tremendously difficult, especially now that any patents on the greatest untapped innovations would by now have expired (sorry, inventors!).


I like the trackball of the old Apple MacBook more. I still opt for a trackball these days.


I might have stuck with my Kensington Slimblade, but I began noticing the latency and the polling jitter (since, like most trackballs, it only runs at 125Hz) after switching to a 165Hz display, so I couldn't keep it.

If I had time to add 1000Hz polling to the thing, I'd probably still be using it.


> by all accounts non-IBM/Lenovo TrackPoint implementations aren't quite as good for some reason

Yeah. After many years with old X- and T-series IBM/Lenovo Thinkpads I got a Dell Latitude recently. It's pretty decent but the trackpoint (they call it "pointing stick") sucks in comparison. Also it's not red.

I thought about using at least the IBM dome on it, but they aren't compatible – the IBM/Lenovo dome is higher and the square hole is bigger: https://i.vgy.me/u1SUKm.jpg


I used an IBM Thinkpad from 2003-2008 or so, but never got used to the trackpoint despite several attempts. I was always wildly inaccurate with it. The trackpad, though a god-awful rage-generation device, was still much more usable.


Super cool... I like how basic the setup is for multiple cores, and how it can show you how accessing shared memory can work. It's very concrete, and visible.


Wish I had US$5000 FPGAs to just have fun with like this!


You can have one by buying Chinese dev boards from AliExpress: https://www.eevblog.com/forum/fpga/a-xilinx-kintex-7-board-o...


Would be nice on the Pano G2 (100k/150k Spartan 6), which is a lot more affordable than this board ;-)

Mentally adds the ZedRipper right after LEON and RISC-V


Well, that sounds like tons of fun. The ring bus sounds a bit like the communication channel in some disk channel controllers, where each disk would get the instruction and pass it on to the next one in the chain if it wasn't addressed to it.

I definitely want to see if I can bring something like this up on one of the many FPGA boards I have in my junk drawer!


There is a Smalltalk-derived machine by one Jecel Assumpcao that uses this architecture as well.

https://www.researchgate.net/publication/309254446_Adaptive_...


My design used a 2D mesh built from register insertion ring networks. But that is a small implementation detail - the general idea is the same.


Hah! Thank you for the correction.


This is a new art form. I love it.


Like. Want.


Interesting. The original Intel 8086/8088 (iAPX 86) manuals discussed multiple processors too. Can't wait for a Concurrent CP/M-86 "NET" version done with 74ALS/ALVT/LVT/LVX glue. ;) Or get really fancy by shoving all the glue into an FPGA.

https://archive.org/details/bitsavers_inteldataBrsManual_570...

Awesome 74xx/40xx series logic quick reference

https://www.mouser.com/catalog/supplier/library/pdf/STLogic....


A. Wow. B. I don't want to put the creator down in any way, but I always feel a little bit cheated by FPGA use.



