SIMH – Old Computer Emulator (wikipedia.org)
92 points by peter_d_sherman 12 months ago | 63 comments



I found the selection of computers emulated to be interesting. Besides computers that were fairly popular in their time, like the various VAXen, there are also machines like the SDS 940, of which only a few dozen were ever made. What the SDS 940 lacks in numbers it makes up for in historical importance, though (first node on ARPANET, host of the first BBS, host used for the Mother of All Demos, etc.).


I used it a while back to simulate something in the HP 1000 series.

Funnily enough, I actually had a machine from that family, along with a variety of other more obscure and niche retro computers and equipment. Sadly I lost all of it, but the reality was that it was just costing me a ton of money to get it all working again.


The time sharing system on the SDS 940 (https://en.wikipedia.org/wiki/Project_Genie) led to TENEX on the PDP-10.


As cool as this is, I can't see it being the same experience as using a real Royal McBee jukebox-style console with a built-in oscilloscope:

https://upload.wikimedia.org/wikipedia/commons/e/e3/LGP-30_G...


I've had a lot of fun installing and running VAX/VMS on a simulated VAX 11/780.

Automatic versioning of files is still a very nice idea. The general format is

  FileName.Ext;1 
  FileName.Ext;2
  Etc.


Ugh... I can't get it to build under Termux on my phone.... I need gcc, and don't know how to get it installed. Walking around with a running VMS system in my pocket would be soooo cool. ;-)

My netbook has plenty of horsepower to run it... the phone should too.


Update.... I've got it running... and can telnet into the VAX/VMS 7.2 system in my pocket. ;-)


Same; got it emulating an old tape recovery image of a VAX 3100 M76.


Sounds like a good idea, but it had little value in practice (for me). Since disk space came at a premium back then, it led to constant manual/auto purging.


There are areas (CAD) where it made a lot of sense.


It's been a long time since I used VMS, but one thing I do remember is the constant purging. It was necessary.


You could set the number of versions to keep at a per-directory level.


No one told me that back then, so I kept on purging. Any command to do that, just for old times' sake?


SET FILE /VERSION_LIMIT (see the HELP output below) for a single file (or a directory; the syntax is the same). I've forgotten how much I love the HELP command on DEC systems. Having the old versions there saved my bacon more than once back in my student days.

  SET
  
    FILE
  
      /VERSION_LIMIT
  
            /VERSION_LIMIT[=n]
  
         Specifies the maximum number of versions for the specified file.
  
         If you do not specify a version limit, a value of 0 is used,
         indicating that the number of versions of a file is limited only
         to the Files-11 architectural limit of 32,767. When you exceed
         that limit, the earliest version of the file is deleted from
         the directory without notification to the user. For example, if
         you set the version limit to three when there are already five
         versions of that file in your directory, there will continue to
         be five versions of the file unless you specifically delete some
         or purge the directory. Once the number of versions is equal
         to or less than the current version limit, the version limit is
         maintained.
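
Typical use, from memory (the exact syntax may be slightly off after all these years, and the file and directory names here are just placeholders):

  $ SET FILE /VERSION_LIMIT=3 LOGIN.COM   ! keep at most 3 versions from now on
  $ PURGE /KEEP=3 [.SRC]*.*               ! delete all but the 3 newest versions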


Actually running/using Multics is remarkable. I still want to try ITS to continue my computing archaeology journey.


I wonder what the challenges of emulating 36-bit machines on 32- and 64-bit systems are.

Can you even do JIT? You probably don't need to, considering how slow these PDPs actually were, but it's an interesting question nevertheless.


For ALU ops and address computations, if the 36-bit machines also use the two's-complement convention, you can just do all computations in 64 bits and ignore (or mask off) the top bits.

For memory accesses you could simply round up the data bus width of the emulated CPU to the next power of two as the "host word size". For instance, if the emulated CPU accesses memory as 12-bit words, use a uint16_t array on the host CPU and treat the top 4 bits as junk.
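
A minimal C sketch of that approach (the helper names are made up for illustration; this isn't SIMH's actual code):

  #include <stdint.h>

  #define WORD_BITS 36
  #define WORD_MASK ((1ULL << WORD_BITS) - 1)

  /* 36-bit two's-complement add done in 64-bit host arithmetic;
     the top 28 host bits are junk, so mask them off. */
  static inline uint64_t add36(uint64_t a, uint64_t b) {
      return (a + b) & WORD_MASK;
  }

  /* 12-bit memory stored one word per uint16_t; the top 4 bits are ignored. */
  static inline uint16_t mem_read12(const uint16_t *mem, uint32_t addr) {
      return mem[addr] & 0x0FFFu;
  }

  static inline void mem_write12(uint16_t *mem, uint32_t addr, uint16_t val) {
      mem[addr] = val & 0x0FFFu;
  }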

JIT-ing probably wouldn't make much of a difference with this approach as long as your host CPU is wider, since you just ignore the top bits.

When writing a chip emulator, it's actually not uncommon to deal with 'odd' bit widths. For instance a sound chip might have 3- or 5-bit wide counters.

Also, in a language like Zig this stuff is quite convenient since Zig allows integer types of any width (e.g. you can have u3, u5, u36 etc...). Just have to be careful about how those types are represented in memory (e.g. bit-packed or not).


Quite a bad thing for a contributor to do, forcing the whole project to rename itself as open-whatever. Sad. I wasn't aware of this the last time I was working on a project with it.


Someone on HN must have been browsing similar Tindie listings as I have recently :)

https://www.tindie.com/products/obso/pdp-11-replica-kit-the-...


That's not a recent thing. I've had that replica here for years. I built it during the first lockdown (it only took a day anyway).

I've got their PDP-8 too. Really great kits.

I think the designer was working on a PDP-10 too, but got sidetracked.


The PiDP-10 will be available in the near future. I've also been working with him on the PiDP-1, which will hopefully be available soon as well.


It was recent for me! But fair point :)

Glad to hear the kit is great, helps me make up my mind!


It's great indeed. Extremely well designed, great instructions. Even comes with an alignment bar to make sure all the switches end up straight.

And the frontpanel is truly a work of art. The multilayer printing on the plexiglas in particular, but also the surround.

I soldered it all in a few hours, but I'm very experienced. Even if you're new to soldering it'll be fine as long as you follow the instructions and take it slow. Double-check every switch type (there are several different types!) and position before you solder, and make sure all the LEDs are fully seated on the board so they point the same way, the switches are aligned, etc.

It's really an amazing kit. You can feel the love that went into it.

If there's anything I'd change, I'd make it 1:1 scale. And the switches don't really feel like the real thing. I've played with a real PDP-11, and its switches really have that "someone paid 100 grand for this back in the day" feel to them: that satisfying, affirmative thump. Which is very hard to replicate with off-the-shelf components, of course.


I've got an old RT-11 manual from the Pioneer space program which would pair really well with one of these as a display item, so I'm pretty convinced, but I had some hesitance because I too would love a 1:1 version.


I'm curious – is the Alpha simulation here as a proof of concept, or are its goals beyond that? I don't imagine a simulation of a complex architecture like this would live well in the same framework hosting, say, a PDP-8 simulation.


People ask about it from time to time - it seems to be more of a proof of concept than anything else. Patches welcome, of course.


Yeah, really old. No 8086.


Use https://86box.net/ instead. There's a ton of hardware profiles included. I found my old XT PC, which is not that common. It goes up to the Pentium era.

It is not perfect, though. My XT has a Plantronics card, which is a CGA-plus with double the memory, granting a few additional modes, but it's not emulated. There is some historical software targeting this graphics card, and even some contemporary retro-indie games.


I love 86box to death, I really do. But it's apples to oranges.

SIMH is an emulator platform - closer to MAME than it is to PCem/86Box.

Unless I'm imagining things, there was an IBM PC emulator made for SIMH 5 or 6 years ago, but it eventually wound up being dropped - due to lack of interest? I recall it emulating just the 8086 and maybe a monochrome display.

Anyway, 86Box is a great emulator but still a different idea.


There is an Altair, though. Z80 and 8080, if I remember correctly.


The Altair simulator also simulates a 68k for CP/M-68K.


> In May 2022, the MIT License of SIMH version 4 on GitHub was unilaterally modified by a contributor to make it non-free, by adding a clause that revokes the right to use any subsequent revisions of the software containing their contributions if modifications that "influence the behaviour of the disk access activities" are made.

That seems weirdly specific. Looking at the actual LICENCE.txt file:

> Any use of this codebase that changes the code to influence the behavior of the disk access activities provided by sim_disk.c and scp.c is free to do that as long as anyone doing this is explicitly not licensed to any subsequent changes to any part of the codebase in the master branch of the git repository (https://github.com/simh/simh) made by Mark Pizzolato after the LICENSE.txt was added to the master branch of repository. Changes that qualify for this restriction at least include: changing the behavior or default of SET AUTOSIZE/NOAUTOSIZE, or any code in scp.c and sim_disk.c or any simulator components that use the sim_disk routines.

I'm guessing this Mark Pizzolato made some changes to the implementation of "disk access activities" that he deemed were important but which other people found controversial, and he wanted to make sure they stayed?


It seems Mark Pizzolato was at the time the primary maintainer and recent contributor (by a large margin) (https://groups.io/g/simh/message/1501).

Mark made a change to how the virtual disk system worked that broke an esoteric use case (using dd to image multiple disk images to an SD, without an actual file system). The user didn't want to use a workaround (specifying the transfer size in the dd command), and proceeded to use twitter to get people to harass Mark. (https://www.bentasker.co.uk/posts/blog/the-internet/toxicity...)

Mark got sick of the harassment and decided to take his toys and go home.

Toxic user prompts toxic reaction by developer.


I'm sure glad I don't work with developers like that, because that complaint makes perfectly reasonable sense to me and I certainly wouldn't consider it "toxic". Expecting a raw disk image file to be the exact size of the virtual device is not at all an "esoteric use case".

Being a prolific contributor doesn't mean you get to force your mistakes on others and cry about it when your decisions come back to bite you. See what Big Tech is doing to us for a great example of what this "feelings-driven design" crap has given us.

Words to live by: "if everyone tells you that you're wrong... maybe you are."


It sounds like there was a config option available to disable the signature addition to the image file (https://github.com/simh/simh/issues/1059#issuecomment-108689...). I could see benefits to having an embedded image signature for preservation and corruption detection.

I don't think complaining about the design is toxic, but recruiting uninvolved people on Twitter and harassing out of band certainly is. Also, reading the bug thread, it seems the person with the issue wasn't the same as the one who instigated the harassment. (https://github.com/simh/simh/issues/1059#issuecomment-108675...)


Enabling it by default could break lots of valuable data over the history of computing. That's why I prefer simh-classic.


Not really; the signature is appended to the tail and can be safely ignored by things that don't know about it. This is a pretty thoughtful way to make the change.

It only really matters if you are doing something weird -- e.g. concatenating a lot of images together.

Having metadata instead of raw images is useful.

The thing was turned on for a couple of years before anyone complained, IIRC; and then suddenly it was a high-stakes issue with brigading, etc.


NEVER,

EVER

mangle your raw images, especially if you want to dd/tar them back to the original machine. Also, as a golden rule in IT, and especially in emulation, NEVER make unexpected changes to the original media, be they disk images, CD-ROMs, BIOS dumps, or any other firmware.

If you want to make changes for interop, do it in RAM as a volatile change, or write the metadata and anything else to the emulator's config directory.

The mentioned change is not useful; it really sucks. The move was one of the worst things ever done to historical computing and data preservation. Period.

The purpose of emulation is to run and preserve the original systems -unchanged- by design. The user is the one who commits changes while running the emulated system from the inside, not the emulator.

The goal is to preserve the data back and forth, not to ruin it.


> mangle your raw images, specially if you want to dd/tar them back to the original machine

dd back to the original media is no problem with stuff post-pended; dd will complain that there's an extra block at the end, but other than that, all's good.

> The goal is to preserve the data back and forth, not to ruin it.

A big part of the point of this change is to prevent you from attaching devices "wrong" in ways that are likely to cause the emulated environment to corrupt them. You can't do that in RAM, and it's better if it is in the same file.


>You can't do that in RAM, and it's better if it is in the same file.

No, that's a setting for the config file, or per machine by default in the settings, but never in the media itself.

The old media should stay pristine. If the user mangles some media because he attached some CD image as an RP disk under a VAX, that's his issue, not the emulator's.


It's not "old media". It's a disk image in some format.

Before it was just a raw disk image.

Now it's a container format for a disk image - one that happens to generally work as a raw disk image, too.

Either one keeps the original data pristine.


No, you are wrong. Emulators should never touch the original data format, save for converting it on request, as qemu can do using qemu-img.


That's a belief you have, but it isn't something that one can be "wrong" about.


That "esoteric usecase" was one reason people opposed that change. There are other concerns as well (you can find them if you read the discussion in the GitHub issue linked elsewhere in the comments).

For example, it made SimH alter disk images by itself, without warning, even if you only wanted to use them in a read-only manner.

There was also concern that sticking metadata right into the data itself was bad from a preservation standpoint.

Also, there was the point that such things should be opt-in instead of just appearing from one version to the next, without warning.

All of these concerns were dismissed as unimportant or flat-out ignored by Mark. Then he went completely overboard with his license change (which made a free project non-free).

Edit: removed incorrect reference to the GPL


> you're not allowed to add additional usage restrictions to GPL software

SIMH is not GPL'd software.


yeah, you're right, of course. What I meant is that it made a free license non-free and that got people concerned


Maybe they shouldn't have harassed him -- sounds like he was doing it on his own time.


Presumably he still is - he continues to be involved in both projects (judging by the commit histories).


> an esoteric use case (using dd to image multiple disk images to an SD, without an actual file system).

FWIW this is an _extremely common_ use case. It's literally one of the tasks that dd was created to do.


Calling the use case esoteric is flatly wrong. It's literally how many disk images across decades of software work. It's how things like vnd(4) work.

Nobody would've cared if it 1) wasn't turned on by default, and 2) didn't destructively change disk images.

It's a bit extreme to call people toxic because they got upset that a deliberate change caused data corruption.


I don't think getting upset is toxic. But when you don't get the response you want from a small open source project, your next steps shouldn't be to encourage more people to bother the developer(s); that's the toxic behavior IMHO.


Agreed. I see the original post was making a distinction between people who were upset and the person who tried to encourage harassment. That person was definitely being toxic, so please disregard my comment about calling people toxic; I misread, and I agree with the label.


Nobody cared for a couple of years until one person started recruiting lots of people to bug the developer.

It doesn't destructively change disk images; it appends something to the end. Other usages of disk images are fine: it's only if you want to use the disk images without a filesystem that you run into trouble.

Metadata is useful. Metadata is a whole lot less useful if it's off-by-default.


This is decidedly not the case, and it attempts to misattribute the issue.

I cared because it happened to me. Like most people, I use a package management system (pkgsrc), so I wasn't on the simh mailing list and I didn't know about this change. I updated simh via pkgsrc, tried to restart some virtual machines, and had fscks corrupt the filesystems of those VMs.

I went looking in the mailing lists, saw the discussions, saw how people asked nicely for him to do something more reasonable (the most reasonable would be to not default the feature to on), and saw how he simply doubled down and became dismissive.

Note that I don't advocate for harassing people at all. That's a separate issue, and I condemn anyone who'd encourage harassment.

"Metadata is a whole lot less useful if it's off-by-default." You're making excuses for him. We shouldn't force something on others because of specious suggestions that it might be useful in the future, particularly when it's actively harmful right now.


How does fsck corrupt filesystems when the metadata is something that's post-pended past the end of the image?

The reason the original user (al20878) in the issue had a problem was that they were storing the images without a filesystem and expected them to never change size when they copied the image to the SD card. This isn't a typical use case.

Can you explain how you ended up with corruption based on a small thing being postpended to a raw disk image?


I honestly don't know with certainty. I explicitly set drive types with "SET RA* TYPE=" (and RQ*) and image sizes with RAUSER, and suspect that the metadata got written after the drive type's size offset into the image.

I noticed several other bugs: one of my disk images has the metadata block at the beginning of the image (no idea how). One of my systems (a big endian system - imagine Mark's shock and surprise when he learns that people who run less common systems in simh sometimes run... less common systems!) had appended an extra metadata block onto each image every time simh was relaunched.

Once I realized that there were these blocks of extraneous data here and there, I started looking online to see if others had similar experiences, and came across the threads talking about this new "feature".

And I disagree that writing disk images to disks / partitions without explicitly giving dd the size isn't a typical use case. I can't remember the last time I specified the size. The size of the file is always assumed to be the size of the write, and it'd be a very special edge case to deliver a file, then tell people to only write a specific size. Go download any .iso or .img file for installing any OS out there, and show me on the download page where we're told to stop after a certain byte (block) count if you disagree.


> I started looking online to see if others had similar experiences, and came across the threads talking about this new "feature".

OK, then it sounds like you have a legitimate bug: a far clearer report than anyone else has provided here. Perhaps you should communicate it.

> And I disagree that writing disk images to disks / partitions without explicitly giving dd the size isn't a typical use case.

I didn't say that. I just said (in other comments) that if you overrun the size of the device or partition, the kernel reports an error and then dd whines that it couldn't write the last bit.

  $ sudo dd if=/dev/urandom bs=512 of=/dev/fd0
  dd: error writing '/dev/fd0': No space left on device
  2881+0 records in
  2880+0 records out
  1474560 bytes (1.5 MB, 1.4 MiB) copied

The only person in the issue with data loss is the reporter, who was dd'ing image files onto offsets of an unformatted SD card. When one of them got longer, it created problems, overwriting the first bit of the next image. If you're doing something like that, you really ought to be providing the size.


It just resulted in the open and official fork, OpenSIMH [1][2].

[1] https://opensimh.org/

[2] https://github.com/open-simh/simh


I didn't know this project existed, but that controversy on the license made me curious, and I have had a look at the repository. I stumbled upon this discussion, which shows another controversial decision by the new maintainer regarding the way disk files are managed by the emulator:

https://github.com/simh/simh/issues/1059


He’s not “the new maintainer,” he’s “one maintainer,” but he decided he knew better than everyone else. That’s why the official SIMH repository is now at open-simh.


He's the overwhelming majority of contributions to the codebase for the last decade.

What he did was toxic, but in fairness, there was out-of-band harassment coordinated towards him for nearly a year.


There wasn’t “out-of-band harassment,” there were people calling out his toxicity and unilateral negative changes that he kept doubling down on. And no matter how many contributions he’s made, it’s always been Supnik’s project.


When you rally people over Twitter to come to an issue on an open-source project, it's A) out of band, and B) being a jerk.

> And no matter how many contributions he’s made, it’s always been Supnik’s project.

Basically the overwhelming majority of all work for the last 15 years has been his.



