A vacuum tube ROM? (tubetime.us)
131 points by sbierwagen on June 4, 2018 | 51 comments



Very cool! The more I read about early computing, the more fascinated I become that the major problem innovators needed to solve was memory, not computing per se. There were precious few things that could be teased into working as read-write memory at electronic computer speeds before semiconductor memory, and they were all monstrously expensive (vacuum tube flip-flops, Williams tubes, core memory) or nasty and dangerous (mercury delay lines). Core, despite its expense, was a major breakthrough in that it was solid state and therefore much less error-prone.

I can't find the reference now, but I remember reading about some early computer engineers (possibly on Whirlwind?) contemplating the use of a microwave retransmitting station as a form of memory, essentially creating a "mercury" delay line with microwaves in the atmosphere.

This is how desperate people were for anything even remotely affordable which could be tortured to behave somewhat like memory! No wonder Intel made a bundle when they started offering chip memory. The stuff it was replacing was just totally inadequate for the purposes many people wanted to put it to.


> The more I read about early computing, the more fascinated I become that the major problem innovators needed to solve was memory, not computing per se.

Exactly. Electronic arithmetic hardware predates WWII. IBM had an electronic multiplier working. ENIAC was a giant plugboard machine. It's not that people didn't think of stored-program computers before Turing. It's that there was nothing in which to store the program.

IBM built machines with plugboard memory. Relay memory. Electromechanical memory. Punched-card memory. Punched tape memory. Drum memory. Look at the history of the IBM 600 series machines, a long battle to get work done cost-effectively with very limited memory.

Delay line memory was sequential and slow. Williams tubes were insanely expensive per bit. Core memory was a million dollars a megabyte until the early 1970s and didn't get much cheaper. There was plated wire memory, thin film memory, and various ways to build manually updated ROMs. All expensive.

Then came semiconductor IC memory (1024 bits in one package!) and things started to move.


There are some old giant calculators (Friden EC-132s?) that propagated torsion waves down coils of wire for a delay line.

This is way out of my knowledge domain, so I'm curious why mercury delay loops were used in the first place.


"Mercury was used because the acoustic impedance of mercury is almost exactly the same as that of the piezoelectric quartz crystals; this minimized the energy loss and the echoes when the signal was transmitted from crystal to medium and back again. The high speed of sound in mercury (1450 m/s) meant that the time needed to wait for a pulse to arrive at the receiving end was less than it would have been with a slower medium, such as air (343.2 m/s), but it also meant that the total number of pulses that could be stored in any reasonably sized column of mercury was limited. Other technical drawbacks of mercury included its weight, its cost, and its toxicity. Moreover, to get the acoustic impedances to match as closely as possible, the mercury had to be kept at a constant temperature. The system heated the mercury to a uniform above-room temperature setting of 40 °C (104 °F), which made servicing the tubes hot and uncomfortable work. (Alan Turing proposed the use of gin as an ultrasonic delay medium, claiming that it had the necessary acoustic properties.)"

https://en.m.wikipedia.org/wiki/Delay_line_memory
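
To give a feel for the numbers in that quote, here's a rough back-of-the-envelope sketch in C. The speed of sound is the figure quoted above; the column length and pulse rate are assumptions picked only for illustration, not any particular machine's specs.

    #include <stdio.h>

    int main(void) {
        double speed = 1450.0;      /* m/s, speed of sound in mercury (quoted above) */
        double length = 1.45;       /* m, assumed length of the column */
        double pulse_rate = 1.0e6;  /* pulses/s, assumed clock rate */

        double delay = length / speed;    /* seconds a pulse spends in transit */
        double bits = delay * pulse_rate; /* pulses "in flight" = bits stored */

        printf("transit delay %.0f us, capacity ~%.0f bits\n",
               delay * 1e6, bits);
        return 0;
    }

At roughly a thousand bits per tank under these assumptions, you can see why such machines needed whole banks of tanks for even a modest store.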


Also, they're not "loops" in the physical sense, though they do operate logically as loops: pulses go in one end of the tube, travel through the mercury, and are received at the other end, where they are repeated electronically back to the transducer at the starting end. "Reading" a bit means waiting for the moment when it is hitting the pickup end, and pulling it out of the loop at the same time as repeating it to the transmitting end. "Writing" means waiting for the same moment in the "loop" and replacing the output of the pickup transducer with a signal that represents the data you wish to write.

Thus, there is an inherent tension between making the tubes longer (more storage per tube) and keeping them short (lower access times). This tension is inherent to all forms of "delay" memory, including the wire torsion memory the grandparent mentions.
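
If the "wait for the bit to come around" mechanism is easier to follow in code, here is a toy model in C. It is purely illustrative (not based on any real machine's design); the names and the 32-bit line length are made up. The capacity/latency tension above falls out naturally: a longer line stores more bits but makes the worst-case wait longer.

    #include <stdio.h>
    #include <string.h>

    #define LINE_LEN 32  /* number of bit positions circulating in the "line" */

    static unsigned char line[LINE_LEN];
    static int at_pickup = 0;  /* which logical bit is at the pickup end right now */

    /* One bit-time passes: the bit at the pickup is re-amplified and sent back
       into the line, and the next bit arrives at the pickup. */
    static void tick(void) { at_pickup = (at_pickup + 1) % LINE_LEN; }

    /* Reading bit k means waiting until it reaches the pickup end. */
    static int read_bit(int k) {
        while (at_pickup != k) tick();  /* access time depends on position */
        return line[at_pickup];
    }

    /* Writing bit k means waiting for the same moment, then substituting new
       data for what would otherwise have been recirculated. */
    static void write_bit(int k, int v) {
        while (at_pickup != k) tick();
        line[at_pickup] = (unsigned char)v;
    }

    int main(void) {
        memset(line, 0, sizeof line);
        write_bit(5, 1);
        printf("bit 5 = %d, bit 6 = %d\n", read_bit(5), read_bit(6));
        return 0;
    }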


Cathode ray tubes were also used as RAM in early computers:

https://en.wikipedia.org/wiki/Williams_tube

One of these computers was the machine at Manchester that Alan Turing worked on:

https://en.wikipedia.org/wiki/Manchester_Baby


They are mentioned in Turing's Cathedral [0], an excellent book that was recommended here on HN. Here's an interesting quote:

There were two sources of noise: external noise, from stray electromagnetic fields; and internal noise, caused by leakage of electrons when reading from or writing to adjacent spots. External noise could, for the most part, be shielded against, and internal noise was controlled by monitoring the "read-around ratio" of individual tubes and trying to avoid running codes that revisited adjacent memory locations too frequently -- an unwelcome complication to programmers at the time. The Williams tubes were a lot like Julian Bigelow's old Austin. "They worked, but they were the devil to keep working," Bigelow said.

This phenomenon is very similar to the recently discovered row hammer vulnerability [1] of DRAM memory, except that it predates it by roughly 65 years.

[0] https://books.google.co.uk/books?id=dMK3P6B3WgcC

[1] https://en.wikipedia.org/wiki/Row_hammer


That's a wonderful story!

The biography Alan Turing: The Enigma (which I highly recommend) also goes into a lot of detail about the early computers that Turing worked on:

https://www.amazon.com/dp/069116472X

I learned about Turing Machines as a CS student a long time ago, and got the false impression that he was only involved in theoretical pursuits (theory of computation, algorithms for cryptography, etc.). It was only after reading this book that I learned how much he had contributed to the design of actual computing hardware.

> trying to avoid running codes that revisited adjacent memory locations too frequently

And today, it's exactly the opposite: we try to write code that has as much locality of reference as possible so that we can avoid expensive cache misses.
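
A small C sketch of that modern trade-off, with an arbitrary array size: both loops below do identical arithmetic, but on typical hardware the strided one runs several times slower purely because of cache misses.

    #include <stdio.h>
    #include <time.h>

    #define N 4096

    static double a[N][N];  /* ~128 MB, laid out row by row in memory */

    int main(void) {
        double sum = 0.0;

        clock_t t0 = clock();
        for (int i = 0; i < N; i++)      /* row-major: consecutive addresses */
            for (int j = 0; j < N; j++)
                sum += a[i][j];

        clock_t t1 = clock();
        for (int j = 0; j < N; j++)      /* column-major: jumps N doubles per access */
            for (int i = 0; i < N; i++)
                sum += a[i][j];

        clock_t t2 = clock();
        printf("row-major %.2fs, column-major %.2fs (sum=%g)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        return 0;
    }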


There's one of those* on display in the CS building at the University of Manchester, it's the size of a forearm and IIRC stored 1 byte :)

* A CRT RAM thing. It's a refreshing RAM that takes advantage of the persistence of the phosphor glow.


"each Williams tube could typically store about 1024 to 2560 bits of data"

From:

https://en.wikipedia.org/wiki/Williams_tube


Williams tubes don't rely on the phosphor coating (and some don't have a phosphor coating at all). It's all about the static electricity on the tube.


Oh yeah, that's true. Thanks for the correction.



Can anyone explain why the letters and characters in the grid are ordered the way they are?

A-Z is non-sequential. I’m guessing it has something to do with making the character selection logic simpler, but by looking at the letters I couldn’t come up with a definite pattern or rule.


It's some variation of BCD code, which was inherited from punched cards and still lives on in some IBM mainframes.[1]

[1] https://en.wikipedia.org/wiki/BCD_(character_encoding)#48-ch...


Almost certainly EBCDIC: http://www.astrodigital.org/digital/ebcdic.html

EBCDIC has discontinuities in the same places, I-J and R-S.
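
For the curious, here is a small C snippet that prints the uppercase letter codes and makes those gaps visible. The code points are the standard EBCDIC values; everything else is just illustration.

    #include <stdio.h>

    int main(void) {
        const char *letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        /* Standard EBCDIC: A-I start at 0xC1, J-R at 0xD1, S-Z at 0xE2. */
        for (int i = 0; i < 26; i++) {
            unsigned code;
            if (i < 9)       code = 0xC1 + i;         /* A..I */
            else if (i < 18) code = 0xD1 + (i - 9);   /* J..R */
            else             code = 0xE2 + (i - 18);  /* S..Z */
            printf("%c = 0x%02X\n", letters[i], code);
        }
        return 0;
    }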


Very cool, especially the "SEM with a fixed target" analogy. I've often wondered how the old SAGE terminals handled character generation, and this post fills in the blanks nicely.


Ok... Now I feel the sudden urge to build a working terminal with a Charactron.

Why can't I have simple hobbies?


I'm usually too scared to work with anything over 120V, especially since parts on the schematic linked require a 1000V power supply to work. Very neat, but too dangerous for the average home-gamer like myself to be inspired enough to go out and build.


And you should be.

It can be worked with safely if you follow the necessary precautions, but unless you are familiar with those, better to follow the disclaimer and "do not try this at home".

The main issue is that capacitors keep their charge after unplugging, so you might have some that are still dangerously charged.

I remember when opening old CRT TVs they had a big notice inside about discharging them before servicing some parts of the circuit.
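
A worked RC example in C of why that notice is there. The component values are assumptions chosen for illustration, not taken from any real set; the point is just that the decay is slow, and without a bleeder (where the only discharge path is leakage) it can take far longer.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* All values are assumptions for illustration. */
        double C  = 100e-6;  /* F, reservoir capacitor */
        double V0 = 400.0;   /* V, initial charge */
        double R  = 100e3;   /* ohms, bleeder (or discharge) resistor */

        double tau = R * C;                    /* RC time constant, seconds */
        double t_safe = tau * log(V0 / 50.0);  /* time to decay below ~50 V */

        printf("tau = %.0f s, below 50 V after ~%.0f s\n", tau, t_safe);
        return 0;
    }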


A good GFCI, safety equipment and respect go a long way.

The first one especially has saved me more than a dozen times at this point. Yay for required-by-law GFCIs at the house level.

I also recommend getting some Schuko (CEE 7/3 and CEE 7/4) or UK plugs and sockets, even if you're in the US, just so you can avoid the safety nightmare that is the US plug. (Though I'm not sure if you can do them as permanent installs in the US... they're still neat for lab equipment.)

edit: you should have respect for anything above 50V or so; past that point it can be quite dangerous. 120V is way above where I start using safety equipment.


No GFCI will save you on the secondary side of a transformer. My point is: working with standard 120/230V requires just following codes and guidelines. Tinkering with HV in non-standard setups requires understanding of how and why.


The secondary side of a transformer can be a lot safer if there is no potential to ground. If there is, the GFCI will trigger.

But yes, working with codes and guidelines is the best option here.


There is always a potential to ground. The secondary side of the transformer has a much more dangerous failure mode: an unknown fault puts something at ground, and then, thinking there is no ground, you touch ground and the other side. Normally that would be harmless, but because of the unknown ground it is deadly.

In many ways mains power would be safer, and a lot cheaper, if there were no ground. However, a couple of failure cases are even worse without ground, and they are the type you only find out about when somebody dies. Thus we put grounds in houses.


You might get silly and insert more than one hand into a working device.


I don't get silly above 50V.

edit: also, if you put two hands in a device, it won't matter much whether it's on the other side of an isolating transformer or not. The GFCI might not trigger in this case.


> also, if you put two hands in a device, it won't matter much whether it's on the other side of an isolating transformer or not. The GFCI might not trigger in this case.

That was my point.


You almost killed yourself over a dozen times, but your advice to the inexperienced person above is a few safety tips, not much better than “just be careful”.

The only reasonable advice is to not even touch the stuff unless you’re an expert with plenty of training. Especially since there’s nothing to be gained except satisfying a useless curiosity.


I almost killed myself over a dozen times and yet I'm still standing. If you don't take advice about handling high voltage from someone who has experienced it a couple of times, from whom do you take it?

I will gladly touch stuff to change a lightbulb; I'm not gonna consult an expert for that. Or to swap fuses and do other similarly simple procedures that are reasonably safe if you are "just careful".

Useless curiosity is where the majority of human progress comes from.


Where do you see advice of "just be careful"?


Without useless curiosity HN wouldn't exist...


Waay too boring following the safety police rules.

Playing with HV can be fun, just build fly swatter-level inverters instead of using a transformer/capacitors big enough to kill you.

A CCFL inverter from an LCD screen is a good start: you can light neon bulbs and generate ozone.


Those are dangerous too--they output high-frequency AC which can cause burns. With a thumb-sized CCFL supply I once burned a tiny hole in my finger. It never bled a drop since the current cauterized it, but it hurt for a week.


You can burn yourself with a soldering iron or splashed solder too; better to avoid electronics altogether.


The fact that you can burn yourself with a soldering iron is mostly obvious. That a CCFL inverter can easily burn a hole through your finger without you noticing until it's too late is not that obvious.


You'll live.


In both cases with overwhelming probability. But there is a difference: you have to be extraordinarily clumsy to get a life-threatening burn from a soldering iron, but you can get a life-threatening burn from HF HV very easily.


Not from a CCFL inverter, you won't.

This thread serves as a great example of why it's not a good idea to plaster dire warning labels all over everything on the planet "just to be safe." When everything is dangerous, nothing is.


50 Volts and the right conditions can kill you just fine. Always be careful.


You've got to be very careful and never use HV without treating it with respect. I use the one-hand rule extensively, keeping my left hand firmly tucked behind my back while the supply is turned on.


1000V, at the current levels needed for this application, amounts to the juice behind a good carpet shock.


Dunno about the Monoscope in particular, but a lot of tubes need several milliamps, which is quite a bit more painful than a good carpet shock. Not fatal, but not something I'm eager to experience. And I'm the guy who used a one-henry inductor to administer electric shocks to people in high school — including, repeatedly, myself.

The issue is that it's a lot easier to build or buy a kilovolt power supply that doesn't have adequate current limiting than one that does. And even one that has it in theory may not have it in practice — that big capacitor across the output? Make sure it isn't just wired directly to the output terminals, because its ESR sure as hell isn't going to be adequate current limiting.

So I think it's reasonable to be wary of kilovolt circuits. Dying is easy, but you only get to try it once.
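
Some rough numbers in C to back that up. The capacitance, ESR, and body resistance are assumptions chosen only to show the order of magnitude, not measurements of any particular supply.

    #include <stdio.h>

    int main(void) {
        /* Assumed values, chosen only to show the order of magnitude. */
        double C    = 10e-6;   /* F, output filter capacitor */
        double V    = 1000.0;  /* V, supply voltage */
        double esr  = 1.0;     /* ohms, capacitor ESR */
        double body = 1000.0;  /* ohms, rough hand-to-hand resistance */

        double energy = 0.5 * C * V * V;   /* joules stored in the cap */
        double peak_i = V / (esr + body);  /* peak current into the "load" */

        printf("stored energy %.1f J, peak current %.2f A\n", energy, peak_i);
        return 0;
    }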


Lightedman's dead comment, in response to my, "The issue is that it's a lot easier to build or buy a kilovolt power supply that doesn't have adequate current limiting than one that does," said, "You can buy Van de Graaf generators all day long. Quarter-million volts minimum for $200."

While van de Graaff generators are indeed relatively affordable, and indeed some even cost less than US$200, a working microwave oven costs US$60, and a broken one can be had for under US$10. Furthermore, a safe van de Graaff generator is not actually capable of supplying enough current at the high voltage to operate a vacuum tube, while a deadly microwave-oven transformer is; and there are orders of magnitude more microwave ovens available.


A kV from a half-decent supply will throw you across the room in a bad case and give you a nasty burn in a slightly better one. Don't underestimate the power of an HV supply, especially one that has some nice stabilization and caps.


My power supply is capable of 1 kV at 10 mA. It's the one piece of lab equipment I own that genuinely scares me. All this stuff has to be treated with respect. Still, though, it's good for cool experiments with monoscope tubes. :-)


In other news, you don't need 100 watts worth of HV to run a CRT.


> video amplifier I used had a gain-bandwidth product of only 4.5MHz.

That's... not really what I'd call a video amplifier, unless the amplifier itself is also made out of tubes.


It's tubes all the way down. A series of them, if you will.


Good enough for NTSC or PAL monochrome composite video.


Without any gain, that is. What is typically meant today by a video op-amp has a GBW in the low hundreds of MHz.
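
The gain-bandwidth arithmetic behind that, sketched in C using the usual single-pole approximation (bandwidth ≈ GBW / closed-loop gain). The 4.2 MHz figure is the nominal NTSC luminance bandwidth; the gain values are just examples.

    #include <stdio.h>

    int main(void) {
        double gbw = 4.5e6;      /* Hz, the amplifier mentioned in the article */
        double video_bw = 4.2e6; /* Hz, nominal NTSC luminance bandwidth */
        double gains[] = { 1.0, 2.0, 10.0 };

        for (int i = 0; i < 3; i++) {
            double bw = gbw / gains[i];  /* single-pole approximation: BW = GBW / gain */
            printf("gain %5.1f -> bandwidth %4.2f MHz (%s for composite video)\n",
                   gains[i], bw / 1e6,
                   bw >= video_bw ? "enough" : "not enough");
        }
        return 0;
    }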


Original catalog entry / advertisement can be found here on page 18:

http://informationdisplay.org/Portals/InformationDisplay/Iss...



