Tunguska: Ternary Computer Emulator (2008) (sourceforge.net)
105 points by based2 on Jan 2, 2021 | 62 comments



I remember playing with my DefCon 18 badge and thinking to myself, "This looks like ternary. I wonder what it does." After a few minutes grokking the (provided, but obfuscated) badge source code, I plugged some values into bc (after setting obase=3), tried them forward and backward (both inverted and normal), and got the "Ninja Party Unlocked!" message on my badge. If only I had known the secret location of the party, I would have tried to get in.


Somewhere down tonight's ternary rabbit hole I found this wonderful Scientific American article, Third Base[0]:

The Jewel in the Triple Crown

“Perhaps the prettiest number system of all,” writes Donald E. Knuth in The Art of Computer Programming, “is the balanced ternary notation.” As in ordinary ternary numbers, the digits of a balanced ternary numeral are coefficients of powers of 3, but instead of coming from the set {0, 1, 2}, the digits are –1, 0 and 1. They are “balanced” because they are arranged symmetrically about zero. For notational convenience the negative digits are usually written with a vinculum, or overbar, instead of a prefixed minus sign.

[0] (pdf) http://bit-player.org/wp-content/extras/bph-publications/AmS...
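For the curious, here is a minimal C sketch (my own, not from the article) that converts an integer to balanced ternary, printing 'T' in place of the overbarred -1 digit:

    #include <stdio.h>

    /* Print n in balanced ternary, least significant digit first.
       'T' stands in for the overbarred -1 digit. */
    void balanced_ternary(int n) {
        if (n == 0) { puts("0"); return; }
        while (n != 0) {
            int r = ((n % 3) + 3) % 3;   /* remainder in {0, 1, 2} */
            if (r == 2) {                /* digit 2 becomes -1, carry 1 */
                putchar('T');
                n = (n + 1) / 3;
            } else {
                putchar('0' + r);
                n = (n - r) / 3;
            }
        }
        putchar('\n');
    }

E.g. balanced_ternary(5) prints "TT1" (least significant digit first); read right to left, that is 1TT = 9 - 3 - 1 = 5.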


Looking at the documentation, it reads as if the architecture has a proper clock that synchronizes all computation. Furthermore, an emulator would naturally have a global notion of time, synchronously updating all internal states in a linear sweep.

However, in hardware this model could be made clockless / asynchronous, such that computations go as fast as they can. The extra state of "unknown" can be interpreted as "not all required inputs are ready yet". Some care would be needed to buffer loops in a recurrent circuit, but that's about the only extra complication.
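A toy sketch of that interpretation (my own illustration, assuming Kleene-style ternary logic): an AND gate that can fire early on a definite false, but otherwise waits for all of its inputs to arrive:

    typedef enum { UNKNOWN = -1, FALSE = 0, TRUE = 1 } trit;

    /* UNKNOWN means "this input hasn't arrived yet" */
    trit async_and(trit a, trit b) {
        if (a == FALSE || b == FALSE) return FALSE;       /* can fire early */
        if (a == UNKNOWN || b == UNKNOWN) return UNKNOWN; /* still waiting */
        return TRUE;
    }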


> However, in hardware this model could be made clockless / asynchronous

I know you didn't imply otherwise, but it's worth pointing out that ternary logic is by no means required for asynchronous hardware design. The same effect is possible with dual rail channels using two wires per bit, or even fewer on average using Sperner codes and the like.
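For the unfamiliar, a sketch of the dual-rail idea (my illustration, not the parent's design): each bit gets two wires, and "neither wire asserted" plays the role of "data not ready yet":

    typedef struct { int t, f; } dual_rail;  /* true rail, false rail */

    /* (0,0) = no data yet; (1,0) = logical 1; (0,1) = logical 0;
       (1,1) is illegal */
    int data_ready(dual_rail x) { return x.t ^ x.f; }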

> computations go as fast as they can

Although asynchronous circuits might not have to wait for a clock, they do have to wait for handshake completion detection in various forms. It's more like a trade-off than a free lunch.


Indeed, the two-binary-rails approach corresponds to a logic as well, namely Belnap's A4: https://en.m.wikipedia.org/wiki/Four-valued_logic


Yep, which is exactly what I did here:

https://core.ac.uk/download/pdf/82751815.pdf


This project appears to be loosely inspired by the Setun series of computers made in the Soviet Union. I can't find any information in English about those being asynchronous designs. The choice of ternary appears to have been made to avoid the signed/unsigned distinction.


I don’t think it will work with ternary numbers though, as in his design “unknown” is treated as “zero”.

So if an adder outputs “0”, there is no way to tell whether the computation finished with a result of zero, or whether it is still in progress.


Agreed, not with this particular design. However, you can absolutely do this with just ternary numbers. You lose the ability to encode 3^n states (and are stuck with just 2^n), but the middle state can then be repurposed to indicate whether the computation is complete yet.

I basically used this sort of design in a dual-rail logic framework which had 4 line-states per "data" line pair: two of which were real data bits, a third indicated "unknown", and the fourth was unused.

See https://core.ac.uk/download/pdf/82751815.pdf, but beware that even though it doesn't look at all like ternary numbers in use, it absolutely is.


What applications would be best suited to use this? I can't think of anything apart from scientific work that would benefit from such a specialized design.


None. The point of the paper was the proof technique, which was later extended to include a large family of constrained motion planning problems (i.e., problems that previously were of an unknown computational complexity could be easily mapped to the framework, thereby proving that they were PSPACE-complete).


Thinking about ternary is good at exposing the binary bias we have in most programming languages (e.g., we would have to come up with new concepts to replace &, |, ~, <<, etc.).

Worth remembering that the ENIAC was a decimal design, but after careful consideration von Neumann (and pals?) decided that using binary would be more efficient. (Decimal was popular for many reasons.)

Binary has great noise immunity and all other coding schemes are inherently worse. It's also a lot simpler to make binary gates than ternary.
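As an aside, the usual candidates for those operator replacements (my sketch, not anything from the project) are the Kleene connectives over balanced trits: tritwise min for &, max for |, and negation for ~:

    typedef int trit;  /* assumed to hold only -1, 0, or +1 */

    trit trit_and(trit a, trit b) { return a < b ? a : b; }  /* min */
    trit trit_or(trit a, trit b)  { return a > b ? a : b; }  /* max */
    trit trit_not(trit a)         { return -a; }

Shifts would presumably become multiplication or division by powers of 3.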


> we would have to come up with new concepts to replace &, |, ~, <<, etc.

True, but I imagine it wouldn't impact typical high-level code that much.


Even the states of the ternary computation seem to be up for grabs [1]. My first feeling was that the balanced ternary states would be nice, but as you start building trytes (i.e., numbers) the representation doesn't seem great.

I couldn't find anything after a very quick search, but if not already done, I want to start referring to a ternary bit as a 'tit'!

[1] https://en.wikipedia.org/wiki/Ternary_computer#Types_of_stat...


It's actually a trit[1], but I like your thinking. :-)

[1] https://en.wikipedia.org/wiki/Trit_(computing)


I always found Charlieplexing[1] an interesting real-world use of tri-state logic.

[1] https://en.wikipedia.org/wiki/Charlieplexing


For sure! I remember building this design into a PCB and watching the brains of the inexperienced engineers explode! If you use high/low/input, you can even divide up the input using resistors to get even more input states.
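To make the trick concrete, here's a toy model (mine, not from any datasheet): n pins can address n*(n-1) LEDs, because each LED sits between an ordered pair of pins and conducts only when its anode pin is driven high, its cathode pin is driven low, and the remaining pins float in the high-impedance "input" state:

    #include <stdio.h>

    typedef enum { LOW, HIGH, HI_Z } pin_state;

    int main(void) {
        pin_state pins[3] = { HIGH, LOW, HI_Z };  /* third pin floats */
        /* the LED from pin a (anode) to pin c (cathode) conducts only
           when a is driven HIGH and c is driven LOW */
        for (int a = 0; a < 3; a++)
            for (int c = 0; c < 3; c++)
                if (a != c && pins[a] == HIGH && pins[c] == LOW)
                    printf("LED %d->%d is lit\n", a, c);
        return 0;
    }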


Knuth talks about this jokingly in TAOCP, not expecting that anyone would actually do it.

I looked at the screenshots and it's not obvious to me how they map back onto ternary. Like, how do you have a hex editor when 16/3 = 5.333?


The only reason we use hex is because we have binary. With ternary you would maybe use 12 as a base. That's better than 16 anyway, since 12 can be divided by 2, 3, 4 and 6.


Hex is 4 bits wide. Base 81 might be a good analog, 4 trits wide, if a bit unwieldy.


I think, based on this screenshot, it uses some form of nonary encoding: http://tunguska.sourceforge.net/shots/tunguskashot.png
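Nonary would be a natural pick: each base-9 digit packs exactly two trits (9 = 3^2), the way each hex digit packs four bits. A quick sketch (mine, assuming a 6-trit tryte) of printing a tryte as three nonary digits:

    #include <stdio.h>

    /* print n in base 9, zero-padded to 'width' digits */
    void print_nonary(unsigned n, int width) {
        char buf[32];
        int i = 0;
        do { buf[i++] = '0' + n % 9; n /= 9; } while (n > 0);
        while (i < width) buf[i++] = '0';
        while (i--) putchar(buf[i]);
        putchar('\n');
    }

    int main(void) {
        print_nonary(728, 3);  /* 3^6 - 1, the largest 6-trit value: "888" */
        return 0;
    }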


Practical computers based on ternary architectures[1] have been built in the past.

[1]: https://en.wikipedia.org/wiki/Setun


Yes, it mentions that computer in the footnotes of the Russian translation of TAOCP.


Here's a great paper:

"Brousentsov's Ternary Principle, Bergman's Number System and Ternary Mirror-symmetrical Arithmetic" by Alexy Stakhov:

https://pdfs.semanticscholar.org/b946/f8ec0c0ffa12427d73940c...

Also available here (but apparently behind a registration/paywall of some sort...):

https://www.researchgate.net/publication/220458145_Brousents...

Or here:

https://ieeexplore.ieee.org/document/8140230

Or, just Google for "Brousentsov's Ternary Principle, Bergman's Number System and Ternary Mirror-symmetrical Arithmetic".

Keywords: Ternary Symmetrical Number System, Ternary Tau, Base Phi, Golden Ratio, Fibonacci, 3-Valued Logic, Number Systems With Transcendental/Irrational Bases, etc., etc.


ISTR that ternary computers require a three-phase AC power supply. Is this true?

I'm not sure where I picked up the notion (and a cursory search doesn't seem to turn up any supporting evidence), so it's possible that I absorbed this idea from technobabble in some mid-80s or earlier SF novel or TV show.


Your computer's components (other than the power supply) never come in contact with AC. Components are powered by the DC that comes out of the power supply. Laptops running on battery are, in a way, completely powered by DC. I'm not sure why that would differ in ternary computers.


It may be that they would utilize split power rails, and one way of making a split rail power supply is to use a center-tapped transformer, like many linear power supplies for audio amplifiers.


That sounds like technobabble. I can't imagine a computer relying on AC, and in any case you can create whatever you need using transistors. After all, that's how 12V inverters do it. Heck, here's a 12V-to-three-phase inverter[1] as a random example.

[1]: https://www.sigineer.com/product-category/inverter-chargers/...


This may be the next phase of computing to keep Moore's Law on track. (Yes, I know it's not actually a law.) Flash memory has been using multi-state cells for a decade or more. All we need is a new ternary logic family (and FPGA/ASIC foundation library), and some good designers that aren't hung up on using only boolean logic.


How do you propose to design physical gates that would handle ternary logic, in a way that is more efficient than the current boolean implementation?


Good question. After thinking about it for 30 seconds, I would say add a negative power rail and go from there. All gates are analog circuits at the lowest level, so it would not be very difficult to come up with a circuit design.

I think the main obstacle would be the lack of ternary logic designers. VHDL/Verilog could probably be extended/enhanced to support ternary logic, but there would be a learning curve for the designers.


I don't think the problem is a lack of designers.

There are companies making specialized chips all the time, so if there were a way to create efficient ternary circuits, someone would have made one. For example, AI/neural networks do not care about base (they just need math), and they only need a few types of cells to be useful. But I have not heard of any applications of ternary there.

Intuitively, I assume we'd still want to drive transistors to saturation, so we are not dependent on process parameters. Which means that each transistor still only handles two states, and we need two transistors for each trit. But if we have two transistors, we might as well use them for 2 bits and get even more state out of them.


Not claiming it's efficient, but here's a description of how the Z80 does tri-state output: https://baltazarstudios.com/anatomy-z80-gate/

I'd be curious how modern FPGAs do it.


Modern FPGAs do it pretty much the same -- there are high and low drivers, and one of them gets activated at a time.

Note that this link describes tri-state, not ternary. "Tri-state" is still binary; the third state exists to allow attaching multiple outputs to the same bus. Or, in C pseudocode:

Regular binary:

     enum { FALSE, TRUE } value;
     value = result();

Ternary:

     enum { UNKNOWN, FALSE, TRUE } value;
     value = result();

Tri-state (as the Z80 and every modern computer does it):

     enum { FALSE, TRUE } value;
     if (enabled())
         value = result();
(the idea is you might have multiple components driving the same "value", and only one of them should be enabled at a time. Bad things happen if more than one is enabled at a time, or none are)


That's helpful, though googling around a bit suggests both the Z80 (and popular peripheral chips) and FPGA/CPLD devices do have non-bus, single-pin abstractions that can output and input 3 states with a single pin.

But replicating that with an ATMEL part or the like generally requires dancing around with resistor ladders.


Can you tell me more about how the Z80 (or popular peripheral chips) "can output and input 3 states with a single pin"? Or maybe give some web links? I am pretty familiar with that chip, so hearing this surprises me.

For the output side, I guess you can tri-state the output, but by default this will keep the previous voltage; you would need some resistors to actually emit a voltage other than "low" or "high". And the Z80 cannot even control this on a per-pin basis, since it has no integrated GPIOs.

For the input side, I have no idea how the Z80 would get more than two states. All the peripherals I know of (like 82C55 GPIO chips) only provide two input levels as well.

The only ways to get 3-state input into a system I can think of are:

- ADC input followed by software thresholding (this will be pretty slow, and works with hundreds of states, not just 3)

- Use external voltage comparators to split a single input wire into multiple input pins. This is no longer "single pin" though.

- Some FPGAs/CPLDs have multiple banks with changeable input thresholds. This lets you avoid external comparators. You'd still need multiple pins for each input wire.


Apologies, you're right. It was my own misreading of bits of code where a chip was inferring its own pin state (1/0/hi-Z) based on previous calls or flags and what it should be, not inferring it externally.


Firstly, how exactly would this work? Does the transistor itself become ternary?

Secondly, Jim Keller paints a fairly convincing picture that (for some definition of Moore's law) there is definitely a long way to go yet.


Jim Keller is obviously extremely intelligent, but it's also in his best interest (at least it was at the time he made his most recent comments about Moore's law) that people think Moore's law is still in full force.


Why? He was at Intel so wouldn't him saying Moore's law isn't dead expose their process failings even more?


Yet it's still in force. Not scaling quite as quickly, but still exponential growth in the number of transistors.


A transistor is an analog device. Digital logic usually has them operating in either an "on" or "off" state, and this could still be the case if a second power rail was added.


Why stop at ternary?


No efficiency advantage to going beyond three:

https://en.wikipedia.org/wiki/Radix_economy
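The gist: the cost of representing N in base b is roughly (digits needed) * (states per digit) = b * ln(N) / ln(b), which is minimized at b = e ≈ 2.718, making 3 the best integer base. A quick check (my sketch):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double N = 1e6;  /* arbitrary example value */
        for (int b = 2; b <= 5; b++)
            printf("base %d: cost %.1f\n", b, b * log(N) / log(b));
        return 0;
    }

This prints roughly 39.9, 37.7, 39.9, and 42.9 for bases 2 through 5. Base 3 wins, but only by about 5%, which helps explain why it never paid for the extra circuit complexity.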


Wow, this is fascinating, thanks for the link.


No good reason I can think of. Flash memory manufacturers began with SLC, but have been evolving to higher and higher cell densities.

https://www.howtogeek.com/444787/multi-layer-ssds-what-are-s...

Also, early computers were not all based on binary. E.g., the ENIAC was base-10. https://en.wikipedia.org/wiki/ENIAC


Nope, ENIAC was binary, it still had bits: https://wikis.olin.edu/ca/doku.php?id=2014:vacuum_tube_sr_la...

What it did not have was base-2 bytes. Its bit-encoding scheme had "prohibited" states to ensure all numbers are in base 10.

But each individual element in the machine only had 2 states, ON or OFF.


Your comment seems to be in conflict with this source: https://www.seas.upenn.edu/~jan/eniacproj.html


Our vocabulary is influenced by modern computers, where we use every possible bit state. The ENIAC does not really have this, so the definitions are ambiguous. For example, if you are talking about SLC flash or Setun ternary (which actually have multiple voltage levels), then the ENIAC is binary, as it operates on only two voltage levels. But if you are talking about the base-2^n math of modern computers, then the ENIAC does not have that either.

In other words, a binary computer will have 2 possible bit states, 2^n possible byte values, and 2^(n*m) possible word values. For example, the Z80 has 2^8 byte values and 2^16 word values.

A ternary computer will have 3 possible bit states, 3^n possible byte values, and (3^n)^m possible word values.

The ENIAC has 2 possible bit states, but a byte has only 10 possible values (meaning some combinations of bits do not form a valid byte), and a word has 10^m possible values.

For more details, let's look at "A REPORT ON THE ENIAC" [0] from 1946. It is pretty interesting.

The data transmission is in unary(!), see section "1.1.3. Representation of Digits by Pulses". There is a wire per byte which can have 2 voltage levels, and the number of pulses corresponds to the number transmitted.

The logic is classical binary, see "1.2.1.3. Gate Tube":

> A gate tube emits a negative signal when both of its input grids are brought from a negative cut off voltage to a positive voltage. Thus, a gate tube is used to note the coincidence of two positive signals and hence corresponds to the logical "and".

The data storage is in binary (or at least bi-state?), see section "1.2.2.1. Flip-Flops". Quote:

> A flip-flop consists essentially of a pair of triodes so connected that at any given time only one of the pair can be conducting. When a certain one of the tubes is conducting (and the other is not), the flip-flop is said to be in the normal state; when the other tube is conducting (and the first is not), the flip-flop is in the abnormal state.

The "byte" counters have ten bits (flip-flops), but only one can be on at a time. so the numbers are stored as 0000000001, 0000000010, 0000000100, etc...

[0] https://ftp.arl.army.mil/~mike/comphist/46eniac-report/chap1...
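To make the one-hot storage concrete, here's a sketch of such a ring counter (my own illustration, not from the report):

    /* A decimal digit stored one-hot in ten flip-flops: bit i set
       means the digit's value is i. A pulse advances the ring. */
    unsigned ring = 1;  /* digit 0 */

    void pulse(void) {
        ring = (ring << 1) & 0x3FF;  /* shift within the ten bits */
        if (ring == 0) ring = 1;     /* 9 wraps around to 0 */
    }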



It would probably make the computer slower due to (further) implementation issues. Going from unary to binary provides an exponential speedup (this can be seen by having various kinds of Turing machines emulate each other: a number N needs N symbols in unary but only about log2(N) digits in binary), but further increases in base do not help much, since going from base 2 to base 3 only saves a constant factor of log2(3) ≈ 1.58 in digit count.


I just wrote a very detailed comment on this thread on why ternary computers aren't a thing, probably can't be a thing, and thus will probably never become a thing.

You might want to read it: https://news.ycombinator.com/item?id=25618303


"2. By theorem, all bases are equivalent in power with respect to their capabilities of encoding information and processing/computing said information, and binary logic can be used to bootstrap any other multi-valued logic system desirable (goes for ternary logic and 64-bit integer computation alike). Thus, the decision to make a base-3 falls back pretty much entirely on circuit efficiency"

This reminds me of Alan Perlis' famous admonition to "Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy."

There are many programming languages which are Turing complete, and so are interchangeable as far as their computing power goes, but as you know they don't tend to compete on efficiency alone.

People want to use the right tool for the job, and many other considerations come into play.


I think ternary computers are an interesting thought exercise, but I've heard people on multiple occasions making serious comments along the lines of "ternary computers would be a revolution in computing/efficiency/<something>", and it always sounded like crackpot speculation to me.

I'd be interested in hearing a sophisticated counter-argument if anybody has a good one. Here's my take on this:

1. Binary is the smallest base in which we can encode Shannon information, and thus the "bit" makes the most elegant foundation for computation. Choosing any other base sounds like a very arbitrary choice, and to the best of my knowledge, the only other canonical unit of computation that inherently makes sense (as in, not arbitrary) aside from the `bit` is the `nat`[0], but since it is based on Euler's number e (a transcendental number), it would be (ironically) very unnatural to express "computation" in it.

2. By theorem, all bases are equivalent in power with respect to their capabilities of encoding information and processing/computing said information, and binary logic can be used to bootstrap any other multi-valued logic system desirable (goes for ternary logic and 64-bit integer computation alike). Thus, the decision to build a base-3 machine falls back pretty much entirely on circuit efficiency ...

3. The claims of computational efficiency, as far as I've been able to observe, are fallaciously constructed by studying their performance characteristics in abstract computer models that are completely detached from computational thermodynamics:

3a. Turing machines, for example, fail to adequately model real-world constraints because their information locality is one-dimensional (while the real world is 3D). Von Neumann computers (and other RAM-based machine models) are IMO the largest offenders here: there is no such thing as "RAM" in the real world. You can't access any address in O(1) irrespective of locality; such a computer would violate relativistic physics and thermodynamic constraints. RAM can only exist by making all memory access equally inefficient and mitigating that as much as possible with L3 caches and whatnot. Semiconductor engineers and chip designers inherently understand this and are aware of die locality; most computer scientists and software engineers are completely oblivious to it and get to ignore it most of the time [1]

3b. This means that even though anyone can theorize an abstract machine where every discrete "tick" of time will perform a ternary operation in constant time, that doesn't mean such a machine can be efficiently reified if you move it out of the realm of abstraction. We could just as easily theorize a computer that uses base-1T (one trillion), and assume that all "trilliernary" operations execute instantly in discrete ticks of O(1), and voila! We have a computation device that can rival all of Google's data-centers!

3c. In the real world, however, there are thermodynamic constraints on how information can "flow" across spacetime, and I have seen absolutely zero evidence that a ternary, trilliernary, or whatevernary logic gate can be constructed without introducing many new voltage levels or other circuit fuckery to counteract this, in a way that wouldn't completely wreck any purported efficiency. It's either that, or we'd implement whatevernary logic using binary gates (which is exactly what 64-bit integer arithmetic already is!)

4. Bottom line and TLDR is this: You can't get more computation[2] for free. Welcome to the real world.

---

[0] https://en.wikipedia.org/wiki/Nat_(unit)

[1] This is especially painful to watch when they are forced to deal with it and these optimistic assumptions clash with reality, as evidenced by NUMA bottlenecks wrt RAM/RDMA/GPU/disk access, a collective failure of most engineers to understand the CAP theorem or other concepts related to networking, and so on...

[2] And by "computation" I mean it in the thermodynamical, relativistically-physical, information-theoretical kind of way. Physics is a thing we forget about way too often in computer science...


This seems to be a reply to something, but it is not clear to what. I saw no efficiency claims for ternary on the Tunguska page, so this seems unrelated to it.

That said, I don't understand how you arrived at this:

> anyone can theorize an abstract machine where every discrete "tick" of time will perform a ternary operation in constant time, that doesn't mean such a machine can be efficiently reified if you move it out of the realm of abstraction.

One can definitely create a circuit which will perform a ternary operation in (some) constant time. It is not hard, and people have done it before [0]. The real question is: can you make a ternary computer which is somehow better (faster, more power efficient, cheaper, etc.) than a binary one? But to answer this you need to talk MOSFETs, voltage levels, gate charge, and current technology capabilities. Shannon and von Neumann won't help you here.

[0] https://hackaday.io/project/6284-tern-ternary-logic-circuits


> This seems to be a reply to something, but it is not clear to what. I saw no efficiency claims for ternary on the Tunguska page, so this seems unrelated to it

Sorry if it seems like a free-floating comment; the article did indeed make claims of that kind, and I really think the project (and others like it) is nice. My comment is directed at the practical concerns and real-world feasibility of a ternary computer, especially with regard to the comments that threads like this inevitably attract (as seen in this comment section) claiming that "ternary computers would revolutionize X", and is partly fueled by my having had fruitless conversations with coworkers who insisted on this point in the past. Hopefully this helps contextualize my comment a bit better.

> That said, I don't understand how you arrived at [point 3b]. One can definitely create a circuit which will perform a ternary operation in (some) constant time. The real question is: can you make a ternary computer which is somehow better than a binary one?

I think you and I are in agreement here, so let me try to clarify what I meant:

My point is precisely about the arbitrariness with which "constant time" is defined. Asymptotic analysis of algorithms always treats complexity in the "time" dimension as a count of discrete "ticks" of computational state, meaning you can pick any encoding of data you want, and any set of operations you want, and have any "operation" execute instantly as a tick. When analyzing "ticks", an algorithm might seem extremely efficient; however, that gain might not be reifiable when you translate from ticks to milliseconds.

For example: assume a binary NAND gate takes 1ms to propagate a stable output signal. All other conditions being equal with respect to manufacturing technology, voltage, and other things you mentioned, I don't see how you could perform "more" computation at the same cost. Sure you can implement a ternary gate that runs in "constant time", but would that constant time still be 1ms? If yes, would it take up the same amount of die space? If yes, would it consume the same amount of power and dissipate the same amount of heat? etc

You'll eventually be tickling the Landauer limit [0] for whatever your manufacturing technology is at some point, no matter if you're doing binary or whatevernary logic, but getting "more compute" for free would be cheating thermodynamics. Improving manufacturing technology is, of course, a viable strategy for squeezing "more computation" into the same physical space, but switching bases will not immediately make Moore's law wither away.

A more succinct way of putting what I'm trying to say into words I guess would be that "all other things being equal, the choice of base is arbitrary with regards to computational and representational power"

Hopefully this helps clear things up :)

[0] https://en.wikipedia.org/wiki/Landauer%27s_principle


Um, are we talking about the same "asymptotic analysis"? The one I am talking about, "Big O" notation [0], specifically ignores constant factors. So a naive matrix multiplication algorithm will always be O(n^3) in standard notation, whether the machine is binary, ternary, or even uses ENIAC-style BCD bytes. This means you cannot reason about ternary vs. binary using asymptotic notation.

And the Landauer limit seems completely unrelated. Your link says, "Modern computers use millions of times as much energy per second" -- so we could have a 1000x more efficient computer and still be well under the limit.

So the only way to compare binary vs. ternary is in the context of a specific implementation, because that is the only way to evaluate the speedup. "All other conditions being equal" is never true -- for example, binary machines can run faster because they need less voltage swing (and thus less bit charge), while ternary machines are faster because balanced adders do not need ripple carry. Which one has more effect? This depends on the technology.

Here is a made-up example: let's say we have a small number of ancient transistors and diodes, and we want to sum numbers in the range -5000 to 5000. The transistors switch in 0.1 ms. How do we do it?

In binary, we'd need a 14-bit word. Assuming we build a ripple-carry adder (smallest part count, but slowest), in the worst case we'll need to propagate a carry across all the bits, which means we'd need at least 20 ms per operation, for a performance of at most 50 additions/second.

In ternary, we'd need a 9-trit word. Ternary adders do not propagate carries, so we could add the numbers in 2 ms. Here, ternary is going to be much faster, at 500 additions/sec.

This is a very contrived example (who makes wide ripple-carry adders anyway?) but hopefully it illustrates my point. You cannot use asymptotic analysis for this, and there are times when you can totally perform more computation for the same cost.

[0] https://en.wikipedia.org/wiki/Time_complexity


Mind that the historical Setun wasn't built around active switching elements, but was instead based on a ferrite-core architecture.


This may be of interest:

Can Ternary Computing Improve Information Assurance? https://www.mdpi.com/2410-387X/2/1/6/htm


> elegant foundation for computation.

-1, 0, +1 has an elegant symmetry that will inspire fascination until either someone proves its inferiority with some heretofore unimagined model of computing or someone builds a clearly superior device based on it. I strongly doubt such a device is possible with contemporary two-dimensional etched semiconductors; too many people have been torturing conventional MOSFET circuits for too many decades to think they've left much on the table. But who knows what the future will deliver? Maybe we'll start computing with proteins next year and somehow 3 (or more) levels will make sense.


If we use the third state as a "spacer", then we can use every single-line connection as a serial connection, so massively parallel, clockless logic becomes possible.

Instead of clocked design:

  data0: 000001111111111111111111111111111000000000
  data1: 000000000000000000000000000000000000000000
  data2: 000111111111111111111111111111100000000000
  ...
  clock: 000000000001111111111100000000000000000000
We will be able to use clockless design:

  data0: --111---
  data1: ---000--
  data2: ----111-
Or even:

  data: --111---000---111---
(1 represents a high signal, 0 a low signal, and - a neutral signal).
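A toy decoder for such a stream (my sketch): each run of non-neutral symbols is one bit, and the neutral spacer marks the boundary, so no clock is needed:

    #include <stdio.h>

    typedef enum { NEUTRAL, LOW, HIGH } level;

    /* emit one bit at the start of each non-neutral run */
    void decode(const level *line, int n) {
        for (int i = 0; i < n; i++)
            if (line[i] != NEUTRAL && (i == 0 || line[i - 1] == NEUTRAL))
                putchar(line[i] == HIGH ? '1' : '0');
        putchar('\n');
    }

    int main(void) {
        /* the "--111---000---111---" stream from above */
        level line[] = { NEUTRAL, NEUTRAL, HIGH, HIGH, HIGH,
                         NEUTRAL, NEUTRAL, NEUTRAL, LOW, LOW, LOW,
                         NEUTRAL, NEUTRAL, NEUTRAL, HIGH, HIGH, HIGH,
                         NEUTRAL, NEUTRAL, NEUTRAL };
        decode(line, (int)(sizeof line / sizeof line[0]));  /* prints 101 */
        return 0;
    }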


I'd upvote thrice if I could ;) There are so many ways to utilize ternary, on different levels. You could use this kind of ternary hardware serialization as an add-on to normal binary stuff, not only for base-3 computing. For example, if -1 is the "empty" state, then I'd imagine it'd be trivial to interface with, and emulate, binary systems (so a ternary CPU would not need to take a hit for binary emulation, only the architecture would), which could be the gateway to get people interested in it initially, without needing 100% new everything, etc. Just my uneducated guess tho...



