Technician keeps computer made in 1959 still humming along (asahi.com)
284 points by turrini on July 30, 2019 | 109 comments



While not even close to what this guy does, I do run my IBM ThinkPad every now and then, which is probably about 20 years old. I reinstalled Linux recently and it just works. The keyboard is still light-years better than the crap you could find in most of today's laptops. What the guy does with the computer is amazing, and I'm really glad the management allows him to carry on with it.


My first employer still has a Win 95 machine running some very specific EE CAD software, for no other reason than that it still does the job, the company that made it closed shop years ago, and he's very productive with it.

I suspect it's gonna be there 'til the day he closes shop, or he runs out of parts for the computer.


I did use some very interesting software over the years and I can absolutely relate to this. For instance, I used to run Alias Studio, which wasn't even part of Autodesk at the time. It's obviously a bit obsolete by now, but I'm pretty sure there are still some shops out there running it on old machines, as the software is really powerful, not to mention the licensing costs back in the day (I think it used to be $100K per desktop)...


Would a VM not work for this?


I fire up VMs for Windows-only [mostly graphical] applications from time to time and the experience is never very good. No matter how beefy my host system is, it's awful, and I end up giving up out of frustration. I have an old used Toshiba laptop with Windows that I fire up on those rare occasions now.


GPU passthrough can give you excellent performance, at the cost of some convenience (depending on your setup).

Somewhat recently, Intel released GVT-g mode, which effectively allows you to carve up an Intel graphics adapter into chunks that can be used by VMs. I tried it on my laptop and it indeed works as advertised. https://01.org/igvt-g
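If you're curious what that looks like in practice, carving out a vGPU is mostly a matter of writing a UUID into sysfs. A rough sketch (the PCI address and type name here are assumptions; both vary by machine):

    # Assumes the kernel was booted with i915.enable_gvt=1 and the kvmgt module is loaded.
    GVT_PCI=0000:00:02.0      # PCI address of the integrated GPU (varies)
    GVT_TYPE=i915-GVTg_V5_4   # pick one of the entries under mdev_supported_types/
    UUID=$(uuidgen)

    # Create the mediated device; it can then be handed to a VM as an mdev hostdev.
    echo "$UUID" | sudo tee /sys/bus/pci/devices/$GVT_PCI/mdev_supported_types/$GVT_TYPE/create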

The traditional method, though, is just having a secondary GPU with VT-d passthrough. This will give you the raw performance of a dedicated GPU, and you can either connect it to an external display for no latency, or you can use something like Looking Glass to get a more traditional setup (SPICE input + KVMFR video).

I used GPU passthrough for a while, only stopping because it occurred to me that modern Wine and Steam Proton had largely closed the gap for me in needing virtualization for graphical applications. libvirt + Qemu/KVM + virt-manager with Windows guest SPICE agent installed is my weapon of choice these days. Without a GPU passthrough configuration, dragging windows is a little laggy, and I can't imagine graphical programs will work well since there is no 3D acceleration, but the performance with a lot of other tools (like Visual Studio) seems fairly acceptable. I will note that they should really make an easier auto-install for the SPICE drivers and agent, because it was tricky enough that it took me a while to realize I didn't actually have the right agent installed.
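For reference, a guest like that can also be created from the CLI; here's a rough sketch with virt-install (the name, ISO path, and sizes are made up, and you still install the SPICE agent inside the guest afterwards):

    # Create a Windows guest with SPICE graphics (virt-manager's wizard does the same).
    virt-install \
      --name win10 \
      --memory 8192 --vcpus 4 \
      --cdrom ~/isos/Win10.iso \
      --disk size=64 \
      --graphics spice \
      --os-variant win10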

VirtIO block and network devices also can offer some I/O benefit, though you will want to use the Load Drivers mechanism to load the viostor driver prior to installation; it's a little tricky to switch to VirtIO storage later, since the driver obviously needs to be available during early boot. You probably also want to avoid qcow2 storage, instead opting for raw image or even disk passthrough for better I/O performance.
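Concretely, that ends up looking something like this (paths are made up; the virtio-win ISO is the Fedora-distributed driver disc):

    # Raw image instead of qcow2 for better I/O.
    qemu-img create -f raw /var/lib/libvirt/images/win10.img 64G

    # The relevant virt-install flags: VirtIO disk and NIC, plus the driver ISO
    # attached as a second CD-ROM so the installer can load viostor via "Load driver".
    #   --disk path=/var/lib/libvirt/images/win10.img,format=raw,bus=virtio
    #   --disk path=/path/to/virtio-win.iso,device=cdrom
    #   --network default,model=virtio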


Thank you for this wealth of information.

I love the idea of a 2nd GPU, and I have a couple lying around that are definitely worth trying. I already have three screens on my desk... what's one more?

I'm thoroughly impressed with the Wine project, and yet I'm generally hesitant to install it on my system. I think it may just be the massive install list I get when I try to `apt install` it. I've a similar aversion to installing KDE apps on a system that's not already running kde, which generally include about 150 other k* libs.

Steam Proton looks great. I'd played a few games once Steam started releasing them on Linux and truly enjoyed how well some worked (and some _really_ didn't). I don't play as much lately, but this may change that.

It's going to take me some time to read about and further understand everything else you've posted here, which I appreciate.


Wine does tend to clutter the file associations a bit :( It’s probably possible to work around that but understandable that you don’t want to.

Steam Proton is great since it doesn’t do that and is mostly automatic. I was surprised to see Direct3D 11 titles running just fine at 60 FPS and maxed out settings, very cool.

And yeah, GPU passthrough is kind of the holy grail for desktop VMs. It's definitely a little tricky, but once you have a good setup going it's hard to beat. There are a lot of guides. NixOS really shines here because your entire configuration can be declaratively configured, making your setup easily reproducible and surfacing all of the configuration in one place, but the technique isn't distro-dependent in any way and will work fine on Debian or Ubuntu.

Some downsides:

- Not all machines can do VT-d, and some can but are not stable. This is better today than ever.

- Your VM GPU needs to be in its own IOMMU group. Modern motherboards tend to do this basically automatically, it seems, though it's not guaranteed, and if not, your only workaround is a patch that lets you "split" the groups in an unsafe way that breaks isolation. (There's a sketch for checking your groups after this list.)

- Audio is tricky. Looking Glass does not forward SPICE audio today, so most folks configure qemu to output directly to PulseAudio. If you are using a physical display this approach obviously still works. You can also attempt to forward a USB audio device, or use audio over HDMI/DisplayPort.
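On the IOMMU group point: checking your groups is just a walk over sysfs. A quick sketch:

    # Print each IOMMU group and the PCI devices in it; your passthrough GPU
    # (and its HDMI audio function) should sit in a group by themselves.
    shopt -s nullglob
    for group in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            lspci -nns "${dev##*/}"
        done
    done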

edit: I hopped on my desktop. Here's my former NixOS configuration for GPU passthrough, to give you an idea of what kind of system-level configuration is needed:

    # IOMMU configuration
    boot.kernelParams = [ "amd_iommu=on" "pcie_aspm=off" ];
    boot.kernelModules = [ "kvm-amd" "vfio_virqfd" "vfio_pci" "vfio_iommu_type1" "vfio" ];
    boot.extraModprobeConfig = ''
        options vfio-pci ids=10de:13c2,10de:0fbb
        options kvm ignore_msrs=1
    '';

    boot.postBootCommands = ''
        # Enable VFIO on secondary GPU
        for DEV in "0000:0a:00.0" "0000:0a:00.1"; do
            echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
        done
        modprobe -i vfio-pci

        # Setup Looking Glass shared memory object
        touch /dev/shm/looking-glass
        chown john:kvm /dev/shm/looking-glass
        chmod 660 /dev/shm/looking-glass
    '';

    virtualisation = {
        libvirtd = {
            qemuOvmf = true;
            # To make the PULSE Qemu audio driver work.
            qemuVerbatimConfig = ''
                namespaces = []
                nographics_allow_host_audio = 1
                user = "john"
                group = "kvm"
            '';
        };
    };
The virtual machine itself also needs some configuration. It will need to use UEFI with a properly equipped TianoCore (OVMF) image to initialize the GPU, and the audio driver needs to be set to something other than SPICE. Finally, you need to add the PCI devices; this can be done trivially through the UI as long as you know the PCI IDs of your card.
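If you'd rather not use the UI, the same thing can be done with virsh. A sketch (the domain name and PCI address are assumptions; use the address of your own card):

    # Find the GPU's PCI address and vendor:device IDs.
    lspci -nn | grep -i vga

    # Attach it to the guest as a managed hostdev.
    cat > gpu.xml <<'EOF'
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    EOF
    virsh attach-device win10 gpu.xml --config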


There's no way to do GPU passthrough on Windows, right? It seemed to be just a Linux thing last time I checked? And similarly with GVT?


This is correct; I am unaware of any way to bring GVT-g to Windows. It is probably not impossible, but the groundwork isn't there yet. I suspect Microsoft will have its answer in the form of GPU-P and SR-IOV, though it is currently unclear whether that'll work with consumer hardware or not.

Simple VT-d is supported in the form of DDA in Hyper-V. I have not attempted to use this for a GPU, and I'm not sure what limitations it may have versus KVM's PCI passthrough.


Cool, thanks!


I used VirtualBox running XP a few years ago for a VBA project I did for a contract. I never ran into any issues.


XP is NT-based, so it's not surprising. They're saying for non-NT systems it's an issue.


Ah OK, fair enough. I have to admit I haven't tried running Windows 8 or 10 in a VM since the preview builds. I assumed any instability was from the lack of polish on the OS itself in its state at the time.


What VM software are you using? I find that VirtualBox is always awful, but I haven't tried anything else.


I believe Windows 95 has trouble with CPUs that are too fast and needs to be patched to handle it.

I think I also read somewhere that older Windows (and some other operating systems) don't handle modern TCP/IP very well. A TCP session starting with window scaling and ECN bits, for example. And I believe there are more timing problems with networking as well, because no one believed in gigabit Ethernet back in the day.


Oh wow I see, okay thanks.


It would, with setup to replicate the environment.

It might well be a miserable buggy experience compared to just using the original hardware.


Windows 95 is very difficult to get working in a VM in a reasonable way, and this machine likely relies on custom ISA cards, which are likely impossible to virtualize.


Most VMs don't really work with Windows 9x, which tends to crash even more than usual (NT-based Windows is fine, though). The only tool I know of that runs 9x relatively stably is PCem, and that's more of an emulator than a VM proper.


It's a lot easier to have a better keyboard when space and weight are not a concern.


It's not like the parts have become bigger over the decades. With today's tech, you could easily make a laptop with a quality keyboard, with a form factor reminiscent of those old ones, packed full of modern hardware at a fraction of the weight... that is, if manufacturers weren't optimizing for vanity metrics at the expense of utility.


Because people care less about how nice the keyboard is and more about how light it is, which is a valid concern when you carry the thing around with you everywhere. The keyboard I have plugged into my laptop weighs as much as the laptop itself.


I can only agree to a certain extent. Lenovo ThinkPads retain exceptionally good keyboard design despite being wafer thin. I have a bulky Toshiba made in 2015 and the keyboard is just so-so. At work I've got an XPS 13 and the keyboard is really nice, so ultimately it's down to the industrial designers.


I have a late-2018 ThinkPad X1 Extreme, and while I love the feel of the keyboard, the keys around the mouse nub (GHBN) started becoming increasingly unresponsive after only a few months. Last week, the whole machine died horribly (unrelated), so I'm hoping to get the keyboard replaced as well when I send it in. I love the machine, and overall it's well-built, but I'm going to be wary of the keyboard. One of the main reasons I bought the ThinkPad was because I'd gotten fed up with MacBook keyboard failures.


Thanks to the self-serviceability of ThinkPads, replacing the keyboard is pretty easy. My ThinkPad gets a lot of use, so I replace the keyboard about every 12 months. I do wish it were easier to order parts directly from Lenovo, but you can find them on Amazon and many other e-retailers.


The reason I ended up with an XPS 13 instead of a ThinkPad was the appalling service levels from Lenovo. At least in the UK, they are on the same level as an average run-down kebab kiosk. I don't think I've seen any other company with a 0.8 rating on Trustpilot: https://uk.trustpilot.com/review/www.lenovo.co.uk Not sure how they remain the largest computer manufacturer on earth..


I called customer support on a Sunday evening, and quickly got connected to a US-based support person. They sent me a return box overnight. So far I'm happy with the quality of service. The real test will be what happens after I ship it in for repairs. No complaints yet, though.


On the other hand, I think the focus on convertibles is a complete failure. We order a Surface here and there, but it isn't really something you would want to work with for very long compared to a classic design. I also like exchangeable batteries without having to void the warranty. These are service parts, ffs.

The market for laptops did shrink significantly due to phones, but I know a lot of people who still prefer to use a classic laptop for home use.

So while the market did shrink, the rest was almost entirely destroyed by bad design decisions.

I still don't know why people use devices like iPads, it is a complete mystery for me. They are as useful as tramp stamps.

I couldn't care less about weight or dimensions if that meant I could get high-quality parts. But I don't get them, and some models are still beyond $1,000...


Oh, those were concerns back then, too. They just weren't the only concerns, as they are now.


I have a Lenovo ThinkPad X1, and I'm really happy with the keyboard. I had a MacBook for several years and almost always used it with an external keyboard because of the horrible built-in one.


Reminds me of "A Commodore 64 Is Still Being Used to Run an Auto Shop in Poland (gizmodo.com)"[1] though I was not able to afford such a luxury when I was young.

Every decade or so I boot an old DEC Alpha. Last time I dist-upgraded Debian (woody to etch?) and everything worked - if a tad slow.

The power of nostalgia.

[1] https://news.ycombinator.com/item?id=12604414


Same; except it's a Dell Inspiron that's about 16-18 years old. It's outlasted a couple of my thinkpads but that's probably because I've taken those thinkpads places I would have never taken the Inspiron.


Speaking of, I'm trying to get my hands on a T400s Thinkpad, but not having much luck finding these units.

I plan on flashing it with libreboot and want to give guixsd a try as a daily driver.


I have a working Amiga 500. Must be over 25 years old now, nearer 30.


It's a relay-based computer. Basically as long as relays continue to be made (which is probably close to forever), it can still be serviced. Somewhat newer but still old computers using custom ICs that stopped being made long ago would not have as good a chance of survival, and vacuum tubes are also relatively rare. But relays are still being made, and people still build relay computers for fun, as a quick search of the Internet will show; this is one of the more memorable ones: http://web.cecs.pdx.edu/~harry/Relay/

There's an analogy to this in the automotive world, where early cars were simple and one could easily service and maintain them, even fabricating new parts as necessary; at the same time, they're nowhere near as efficient as modern equivalents.


> It's a relay-based computer. Basically as long as relays continue to be made (which is probably close to forever), it can still be serviced.

Yes. And even then, the relays can be repaired. The NYC subway system has a shop that refurbishes the relays of their signaling system.

I restore old Teletype machines. The ones from the 1920s and 1930s are the easiest to restore. They're mostly all steel and cast iron. Once you clean, oil, and adjust them, they usually work, unless the machine was seriously damaged. I have five running Teletype machines.

The post-WWII machines are harder, but still repairable. The ones from the 1960s and 1970s are really hard, and some of the plastic parts have to be fabricated. The last Teletype, the Teletype Model 40 (1979-1984) appears to be hopeless. The print chain deteriorates over time, and making a new print chain, with all the little character slugs, would be a huge job.

Restoring current electronics, with SOIC chips, will be hopeless after the ICs wear out.


I helped a friend-of-a-friend debug some control circuits on an ancient drill press. I wondered "why not throw a microcontroller and some SSRs in here instead of dealing with all this goofy relay logic?" Then I realized that I was servicing a 50 year old piece of equipment, which would continue to be serviceable long after Microchip stops making ATmega328s.


I wonder how much time the Living Computer Museum has left. That was a Paul Allen thing, keeping old computers alive. Unless they find a new sugar daddy, that will probably decline.

Museums that keep old machines working need a large operating budget. The Smithsonian used to keep all their clocks wound and running, their piece of the ENIAC was powered on and counted up, and the Atlas Missile Guidance Computer was run regularly. None of those run any more.


I think they do pretty well; plus, it's not like Paul Allen's wealth died with him. That is one of my favorite places ever, though, so I really hope it sticks around.


Maybe with good emulation and VR we can preserve them digitally.


In some sense perhaps but part of it probably is the maintenance of the physical computers.


Emulation doesn't emulate the feeling you get when you're face to face with the real thing, but it's better than nothing at all, I suppose.


I suspect they are fully funded.


If you ever find yourself in Mountain View on a Saturday around 11AM, I highly recommend taking in the IBM 1401 demo that happens at the Computer History Museum: easily one of the coolest things I've experienced. Having the opportunity not only to see the machines in action, but to speak with the engineers who originally used them in their work, was such a treat.


On a similar note, for anyone who might find themselves in the UK, the National Museum of Computing in the grounds of Bletchley Park is also a great visit, especially with the guided tour: https://www.tnmoc.org/


They also have the PDP-1 lab next door which they run on the first and third Saturdays of the month.

Last visit I was able to play the original "Spacewar!" on it, and the demo was run by Steve Russell and Peter Samson. It was great to see pivotal computer history that directly impacted my life, described by people directly involved in making that history.


It's one of the reasons I wrote a port of my database for my IMSAI 8080 [1]. You only think you know a machine until you try to use it. From experience I can say emulators are just not faithful to the actual experience of the physical hardware. They are too fast and too perfect.

The other nice thing about these old machines is you can understand the whole thing. You get a better idea of what is fundamental and what is just fashion.

[1] https://github.com/JohnSully/KeyDB_Z80


> They are too fast and too perfect.

IDK. The android emulators I run on my Mac sure don't seem perfect. For one thing, scrolling a RecyclerView up and down using the mouse doesn't always work right. Gesture detection seems to lag or miss events sometimes. Working with a physical android device is often more stable and "solid" IME.

OTOH android emulators do seem pretty fast.


At this vintage of machine we're talking about things like pulling individual RAM chips which have gone bad, and hardware that is just too slow to even keep up with 9600 baud datastreams, despite the fact you wrote it all in assembly and hand-optimized it.

Not to mention the fun of playing with front panel switches. I built an emulator for one long before I acquired my IMSAI. They did not behave at all how I expected. Even more interesting, the front panels are not synchronous on these "hobby" machines from the '70s but instead are timed by RC circuits that are supposed to be "close enough" to the CPU's clock.

Not all of these problems are from age (although that has an effect), the hardware was pretty flaky even when new.


> Even more interesting, the front panels are not synchronous on these "hobby" machines from the '70s but instead are timed by RC circuits that are supposed to be "close enough" to the CPU's clock.

That is by far one of the most curious things I've ever read on Hacker News.


A friend of mine used to get paid a very senior engineer salary to maintain an old VAX system. The system ran a trading algorithm that made about $1M a day, so they figured it was worth keeping him around to do almost nothing at all, other than make sure that thing ran during trading hours.


The fun implication being that there exists a trading strategy that consistently makes $1M a day for 40+ years? (I guess what isn't mentioned is how much capital was backing that strategy.)


It could be commission-based. X cents per share processed. There are lots of custom algos that are trusted by traders.


I don’t remember numbers but it wasn’t a crazy high return. Just a lot of capital being moved.


On a VAX no less! Can't imagine there's an HFT arbitrage opportunity for such "slow" tech.


What are the odds the strategy only works because it runs on a VAX?


Close to zero.


One interesting case is the Pinball game that came with Windows. IIRC, it relied on 32-bit x86 features that failed on 64-bit machines.


Or the code is such a mess that nobody understands it well enough to port or replicate it?


VAX systems are still around, and OpenVMS is used to port old software and keep it running.

Can confirm; I had a job that was essentially getting legacy code working on new systems. There were guys there who had been around since the inception of the thing and kept things running smoothly. The downside is they were always opposed to any sort of refactoring or rewrite, even when it made sense.


VAX emulation would probably get them out of the hole. Was it VMS or OpenVMS? Because I cannot conceive of a UNIX program which could not be ported out. If they wrote to the VMS structured FS semantics (it's a typed-record index file model directly in the filesystem layer), then I could believe there were corner cases they did not want to explore.


Ticketmaster is a successful example of the emulation strategy.

https://news.ycombinator.com/item?id=9808995


Great example. I'm told a lot of banking FinTech is emulating ICL GEORGE OS inside, for the core registry function which passed audit, so re-implementing risks a re-audit, which nobody wants to wear. I know from a friend that some banks here in Oz pay COBOL programmers to not die, and keep the bits they haven't ported to Java alive across some API I don't even want to dream about. Probably Ethernet emulating an IBM 370 protocol, which then winds up in a soft device feeding some COBOL tape or disk input strategy.


Would be interesting to hear more about this. Considering it was only an algorithm, was there no way to migrate it?


I bet the only complete specification of the algorithm was the program running on that one machine, and that it was written in some ancient VAX assembly.

Also, $1M a day is a lot of money to risk on subtle changes to the logic of a 40-year-old program. That's the main reason big financial orgs are still running software from the '50s on mainframes.


If it ain't broke, don't fix it.


I have seen first hand that some algorithms can be quite sophisticated and even having the source code (without documentation) isn't nearly enough when the binaries that support the mathematics only run on an archaic piece of hardware.


“If the computer does not work, it will become a mere ornament,”

I own some old kit that I like to keep working, mainly some Amiga and Atari ST machines, and he's 100% right. Unless people see what the machines are doing, they literally assume they are some sort of keyboard.


I recall being told, when I worked for British Telecom, that they caught some cleaners stealing what they thought were computers but were in fact just the keyboards :-)

Probably in Manchester, as that was a bit of a thing up there. Later on I helped refit an office for the third time after it had been cleaned out twice.


We have some cleaners come in, and they constantly rearrange the keyboards, monitors, etc., which normally causes a few groans in the morning as everything has been moved. One guy was having a moan about it and I said, "You know, the women that clean this probably have never used a computer other than maybe their phone?"


Yikes. Not the kind of thing I'd go repeating there, friend. Your entitlement is showing. Your assumptions:

- The cleaners must be women

- The cleaners must have a quality of life so low they've never used a desktop

- The cleaners aren't you and are therefore ignorant in some way

Plus, if they really do meet that second qualification, disparaging them for it only makes you look even worse.

I would encourage you to seek out somebody who does meet those qualifications. You might learn something.


No, not at all. I have seen the cleaners and have spoken to them, and they are clearly women. One of them told me that she has no idea what we do or what the computers are used for, and has never used a desktop PC. I doubt she is unique among her peers.

Quite a lot of people who do "blue collar" jobs and are in their 50s and 60s probably don't have to interact with a computer that often (I know my father doesn't). If they do, it is normally through a kiosk of some sort, or they are using their phone.

I was actually adjusting my coworker's expectations. He is younger than me, and the thought hadn't entered his mind that someone may never have used a modern computer.

So in future, before you start spouting about people being entitled, maybe you should not make a bunch of assumptions based on your incorrect reading of a comment.


> and then I said something unkind about someone I don't even know.

I personally try to thank our office's cleaning lady. She does a lot of work, and everyone appreciates it.


I did no such thing! I have personally spoken to them and they have told me they have no idea how to use the desktop machines.

I was simply trying to explain to my colleague, who has only known a world where modern computers are prevalent, that there are still many people who have never used them.


For my master's project in applied nuclear physics in 2013-2014, I was running the Finnish software Sampo 91 (from 1991) for analyzing gamma spectroscopy lines produced in our germanium detector. I could get it to work on Windows 3.1 with some backwards-compatibility features. It didn't run in a VM, but on a physical Intel 486 machine that the hospital where I was working had kept in storage. It was a pretty nostalgic experience to set it up, since the 486 was my family's first computer. I learned two things then: a lot of the basic features that we expect from an operating system already existed back then, and using old software is a lot less stressful than modern software, since computers only did one thing at a time back then.


That computer is older than he is, yet he maintains it. I don't know, in a world where dev fads arise and die within a few years, it's a little inspiring to see someone who maintains a multi-decade running system.



My guess is that this computer is no longer in service, or soon won't be (we passed a school millage for building upgrades not too long ago).


We've got a computer in the lab from the '70s that we use to upload firmware to some legacy chips still in service on aircraft (a cabin lighting system, not anything vital to the aircraft staying in the air!). I dread the day it breaks, because it's all bespoke software and hardware. We tried to back it up to another device, but it's simply not compatible with anything except that computer.

When it finally gives up the ghost, it's likely going to cost thousands to design a new system with modern hardware, and that doesn't even include the cost of writing up all the new safety documentation and grounding the aircraft for a refit! To add to the headache, it's all written in BASIC, so the only ones who knew how to do anything with it have long since retired or passed away. I don't look forward to the day I have to reverse engineer that bad boy.


I apologise if I've failed a reading comprehension check, but did the article say what the computer was used for?


"The FACOM128B greatly contributed to technological advancements during the period of Japan’s high economic growth from the early 1960s to the early 1970s. The computer was used in the design of camera lenses and the YS-11 plane, Japan’s first passenger aircraft developed after the end of World War II."


I suspect the parent wonders what it does now. Which is likely ~nothing. But I am curious how it compares with current machines. Comparable to an i5 desktop? Or to a smartphone?


Here's a comparison between a RPi Zero and a 1957 Elliott 405 (from the UK): https://www.spinellis.gr/blog/20151129/

On a simple clock comparison, it was about a million times slower than a Raspberry Pi Zero. Computing has come a LONG way.


Probably roughly equivalent to the chip equipping your microwave, if it's not too recent. Except that your microwave chip is most probably faster and more reliable.


With 13K of memory[1] I doubt you could compare it to anything modern bigger than the simplest e-toy...

1: http://museum.ipsj.or.jp/en/heritage/facom128b.html


True, but some of those old machines could do amazing stuff with ~huge datasets. Basically by just doing one ~simple operation per data pass. Read, do something simple, write. Then repeat that as many times as needed, with a different operation. Sort of like executing a program one line at a time.


They did crazy tricks because timesharing hadn't been invented yet and whatever program was running had the full machine to itself. Also, the designers of these computers usually built everything around the flow of data, so that the CPU would have as little as possible to do to read tapes or cards.


Right. And there wasn't much "full machine" to have.


It's about an Arduino with a 10x slower clock.


An Arduino Uno runs at 16MHz and averages about 1.5 clocks per instruction (estimated by multiplying the best case of 1 clock per instruction by 1.5), so about 11M instructions per second.

The FACOM128B is asynchronous and takes about 0.15s per instruction, using the same estimation (1.5x the best case), so about 7 instructions per second. See timings at: http://museum.ipsj.or.jp/en/computer/dawn/0012.html

However, the FACOM128B is 69-bit, and the ATmega is only 8-bit (it can do some 16-bit operations, but those are limited to certain registers and operations, so I'll just assume it's only 8-bit). 69 is about 9 times bigger than 8, so I'll assume the FACOM128B can do about 9 times as much work per operation (there is overhead in calculating wider results from narrow operations, but the full width isn't always needed, so this simple estimate seems reasonable to me).

Therefore the FACOM128B is like an Arduino with about a 190000x slower clock.
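(For anyone who wants to check the arithmetic, the whole estimate fits in one line:)

    # 16 MHz / 1.5 clocks-per-instruction vs. 1 instruction / 0.15 s, scaled by 69/8.
    awk 'BEGIN { avr = 16e6 / 1.5; facom = 1 / 0.15; print (avr / facom) / (69 / 8) }'
    # ~185507, i.e. roughly 190000x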


Nice estimate. And not surprising, given that it uses relays. Not even vacuum tubes.


My god. That machine is beautiful.


Recently, I used a working typewriter. It was a very refreshing experience. It has space, tab, and backspace keys that physically move the paper back and forth, and shift and caps lock keys, which physically shift the head to use the upper part of the font.

The keystrokes were too heavy, and it couldn't accept fast consecutive typing because of head jamming.

Now what remains for me is to use a working VT100 terminal and a teletype.


I learned typing on a typewriter during my college summer holidays because the general advice was that if I ended up being a total academic failure, I would be able to earn my bread typing legal documents at the city court. I thought it was a useless idea but learned anyway (in two languages, no less).

As it happens, this came in handy throughout my career (before and after I became a software developer). No matter what you do, you are much more productive on a computer if you can type without looking at the keyboard at 70-80 wpm (not bragging, because in the world of good touch typists that is a poor score).


Speaking of humming along, if you visit that site with HTTPS, it refreshes to HTTP!

https://www.asahi.com/sp/ajw/articles/AJ201907280007.html


A little bit off topic, but after reading this post I texted my mum to check if they still have this: https://lady-eklipse.livejournal.com/4042.html It's photo no. 6 in the article. The thing was made in 1979. Even when I was a kid (I'm 34), my friends were amazed that I had something like this. She said she'll check if it's still somewhere in the attic! :)


It's super rare, hold onto it!


Having watched and listened to the video, I'd say the phrase "still humming along" is definitely a misnomer.


Still clicking along?


Indeed. I am not sure it can even increment the program counter without clicking.

Humming would be hitting a HALT and waiting for the operator to press a button.


In operation it sounds a bit like a pachinko parlor. Imagine if pachinko machines performed a useful computation, allowing the parlors to be server farms powered by gamblers.


I seem to be the winner of the oldest-computer-still-in-everyday-use competition here. I have a programmable TV remote based on the RCA COSMAC from 1976.


Is there any translation for the video available?


Relay based, more reliable and needing less replacement than vacuum tubes I assume, and nice clicking sound too :)


> Relay based, more reliable and needing less replacement than vacuum tubes I assume,

Kinda sorta. Relays are mechanical devices that wear every time they're toggled. Every time you toggle a 1 to a 0 or vice versa, you risk destroying a relay.

Vacuum tubes don't have moving parts, so the wear is significantly reduced. However, when they're on, they're very hot, so their wear is related to power cycling. Every time you power the machine on or off, you risk destroying a tube. Colossus was kept in a powered-on state 24/7, even when not in use, despite using an ungodly amount of power. It was a measurable proportion of all of the electricity used in all of Britain during WWII.

So if you never run any programs on a relay computer and keep it in a low-humidity basement, it will last indefinitely. But if you use it to do stuff, you'll be diagnosing and replacing failed relays routinely. If the computer has a clock rate of 10Hz and uses reed relays (the long-life kind) with a lifetime of 10 million cycles, you can expect any individual relay to last 2,000,000 seconds of continuous use. Which is about 23 days. Multiply that by 1,000 relays and you've just figured out why it takes a full-time employee just to keep the machine running. Imagine a RAID array using drives with an MTBF of 23 days. Eww.
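(The 2,000,000-second figure appears to assume each relay operates about once every two clock periods; under that assumption, the arithmetic is:)

    # 10 million operations, at one operation per two clock periods, 10 Hz clock.
    awk 'BEGIN { s = 10e6 * 2 / 10; print s " seconds, or about " s / 86400 " days" }'
    # 2000000 seconds, or about 23.1 days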


Ouch!


Amazing. It looks like something from Dr. No.


A Mechanical Marvel




