Base 3 Computing Beats Binary (quantamagazine.org)
162 points by Tomte 4 months ago | 160 comments



The concept of radix economy assumes that hardware complexity for each logic element is proportional to the number of logic levels. In practice this isn't true, and base 2 is best.

>Ternary circuits: why R=3 is not the Optimal Radix for Computation

https://arxiv.org/abs/1908.06841

Previous HN discussion:

https://news.ycombinator.com/item?id=38979356


I like this article but it does kind of seem like it gets to a point of “well we know how to do binary stuff in hardware real well, we don't know how to do ternary stuff that well and doing it with binary components doesn't work great.”

Also ternary gets a bit weird in some other ways. The practical ternary systems that the Soviets invented used balanced ternary, digits {–, o, +} so that 25 for example is +o–+,

   25 = 27 + 0*9 – 3 + 1.
If you think about what is most complicated about addition for humans, it is that you have these carries that combine adjacent numbers: and in the binary system you can prove that you relax to a 50/50 state, the carry bit is 50% likely to be set, and this relaxation happens on average by the 3rd bit or so, I think? Whereas ternary full adders only have the carry trit set ¼ of the time (so ⅛ +, ⅛ –) and it takes a few more trits for it to get there. (One of those nice scattered uses for Markov chains in the back of my head, the relaxation goes as the inverse of the second eigenvalue because the first eigenvalue is 1 and it creates the steady state. I got my first summer research job by knowing that factoid!) So you start to wonder if there's something like speculative execution possible then—half-adder-+ is definitely too simple for this but full adder + chains all these bits together and for larger numbers maybe it's not!

Similarly I think that binary proliferated in part because the multiplication story for binary is so simple, it's just a few bitshifts away. But for balanced ternary it's just inversions and tritshifts too, so it has always felt like maybe it has some real “teeth” there.
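
For anyone who wants to play with it, here is a minimal sketch of balanced ternary (the digit glyphs and helper names are mine); it shows that negation is just flipping every digit, and multiplying by 3 is appending an 'o', i.e. a tritshift:

    # Balanced-ternary digits {-1, 0, +1}, printed as -, o, +
    def to_balanced_ternary(n):
        digits = []
        while n:
            r = n % 3
            n //= 3
            if r == 2:       # a 2 becomes -1 with a carry into the next trit
                r = -1
                n += 1
            digits.append(r)
        return digits[::-1] or [0]

    def pretty(digits):
        return "".join({-1: "-", 0: "o", 1: "+"}[d] for d in digits)

    print(pretty(to_balanced_ternary(25)))                # +o-+
    print(pretty([-d for d in to_balanced_ternary(25)]))  # -o+-  i.e. -25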


In terms of implementing adders, the standard solution for binary logic adders is carry lookahead adders. Perhaps an equivalent could be built in ternary logic?

https://en.m.wikipedia.org/wiki/Carry-lookahead_adder
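
For reference, a sketch of the binary generate/propagate recurrence a carry-lookahead adder is built on (written as a loop here; a real CLA flattens the recurrence into a couple of gate levels, and a ternary equivalent would need an analogous recurrence over trits):

    def cla_carries(a_bits, b_bits, c0=0):
        """a_bits/b_bits: lists of 0/1, LSB first. Returns carries c1..cn."""
        g = [a & b for a, b in zip(a_bits, b_bits)]   # generate
        p = [a ^ b for a, b in zip(a_bits, b_bits)]   # propagate
        carries = []
        c = c0
        for gi, pi in zip(g, p):
            c = gi | (pi & c)   # c_{i+1} = g_i OR (p_i AND c_i)
            carries.append(c)
        return carries

    print(cla_carries([1, 1, 0, 1], [1, 0, 1, 1]))  # carries when adding 11 and 13, LSB first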


But it is true for one-hot encodings, yes? I may be showing my age here, but when I last took a computer architecture course domino logic (and self-clocking domino logic, even) was seen as something between the cutting edge and an obvious future for high speed data paths. No idea if this is still true, but it seems that something like domino logic would extend cleanly (and with cost linear in states) to ternary.


"In addition to its numerical efficiency, base 3 offers computational advantages. It suggests a way to reduce the number of queries needed to answer questions with more than two possible answers. A binary logic system can only answer “yes” or “no.”"

Yes... but nobody uses binary systems. We use 64-bit systems (and a number of other systems, but all larger than one bit), which have abundantly more than enough space to represent "greater than/less than/equal".

The main software issue with ternary computing is that with this, the entire advantage goes up in smoke. It is quite hard to articulate an actual advantage a multi-bit system would see since we do not use 1-bit or 1-trit systems in real life. (If you've got some particular small hardware thingy that could use it, by all means go for it; it's no problem to use it in just one little place and have a conventional binary interface.)

Taylodl's hardware issue with ternary circuits sounds like a reasonable one as well. If you've already minimized the voltage difference for your binary circuits to as small as it can reasonably be, the addition of ternary hardware necessarily entails doubling the maximum voltage in the system.

Is this Quanta Magazine's "Brash And Stupid Claims About Computing Week" or something? https://news.ycombinator.com/item?id=41155021 The last paragraph is outright crockery, that "trit-based security system" is well known to be from a crackpot who appears to simply not comprehend that our binary systems do not in fact explode the moment they have to represent a "3", despite repeated attempts by people to explain it to the guy.

Programmers and hardware designers are not in fact just total idiots stuck in the mud about digital (vs analog) and binary (vs ternary) representations. They are the way they are for good and solid engineering reasons that aren't going anywhere, and there is basically no space here for someone to displace these things. It isn't just path dependence, these really are the best choices based on current systems.


Ternary is one of several crackpottery schools that were established in the USSR. You'd have them write books on the subject, rant in tech magazines… there even was an SF short story about evil alien robots defeated by ternary.

Another big thing was TRIZ: a theory that you can codify invention by making a rulebook and arranging the rules in different permutations. There were plenty of smaller things too, especially in academia. It would typically start with one researcher sticking to some bizarre idea, then growing his own gang of grad students and adjuncts who all feed on that. Except the theory would be borderline batshit and all the publications are in-group, referring to each other, and naturally a semester or two's worth of this sectarian stuff is getting dropped into the undergrad curriculum.


During most of the time the USSR existed, computer electronics were far enough from the optimum that ternary logic was competitive with binary.

It was only in the late 80s that this changed.


I doubt it was competitive at any point. Setun had not demonstrated any practical edge in its generation and no one tried to reproduce it afterwards.


> a theory that you can codify invention by making a rulebook and arranging the rules in different permutations.

You can. It's just slow, that's all.

Superoptimizers 'invent' new compiler optimizations by exactly this technique.
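
A toy sketch of the idea, assuming nothing more than a tiny invented instruction set and testing against sample inputs (real superoptimizers verify equivalence properly, e.g. with a SAT/SMT solver, rather than just testing):

    from itertools import product

    OPS = {
        "inc":    lambda x: x + 1,
        "dec":    lambda x: x - 1,
        "double": lambda x: x * 2,
        "neg":    lambda x: -x,
    }

    def search(spec, max_len=3, samples=range(-4, 5)):
        # brute-force: shortest sequence of ops matching spec on all samples
        for length in range(1, max_len + 1):
            for seq in product(OPS, repeat=length):
                def run(x, seq=seq):
                    for op in seq:
                        x = OPS[op](x)
                    return x
                if all(run(x) == spec(x) for x in samples):
                    return seq
        return None

    print(search(lambda x: 2 * x + 2))  # ('inc', 'double')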


TRIZ previously on HN https://news.ycombinator.com/item?id=18045322 (and a few others)

and https://en.m.wikipedia.org/wiki/Systematic_inventive_thinkin...

then https://arxiv.org/abs/2403.13002 (AutoTRIZ: Artificial Ideation with TRIZ and Large Language Models)


This article contains content that is written like an advertisement. (June 2020)


TRIZ is not bizarre or strange. It is a series of concepts and ideas which are meant to help you get unstuck when working through a difficult engineering problem.


I know what it's meant for, but the evidence for its efficacy is thin.


Sounds like a good description of the current state of affairs in the academia, though.


Fringe theories in mathematics sometimes work out. Neural nets are arguably one of them: for the longest time, neural nets were simply worse than SVMs on most metrics you could think of.


>A binary logic system can only answer “yes” or “no.”

This line really struck me and it's a failure in technical writing. This is an article published on "quantamagazine", about niche computing techniques. You have a technical audience... you shouldn't still be explaining what binary is at the halfway point in your article.


Based


I loved the penultimate paragraph; gave me a hearty laugh and a fun rabbit hole to waste time on :)


> Why didn’t ternary computing catch on? The primary reason was convention. Even though Soviet scientists were building ternary devices, the rest of the world focused on developing hardware and software based on switching circuits — the foundation of binary computing. Binary was easier to implement.

That's not the way I've heard the story. Ternary isn't just on/off, voltage yes/no like binary - you need to know the polarity of the voltage: is it positive or negative? Essentially then your circuits are -/0/+ instead of (0/+) like they are for binary. Such ternary circuits resisted miniaturization. At a certain point the - and + circuits cross and arc and create a fire. The binary circuits kept getting miniaturized.

The story I've heard that goes along with this is that's how the US ultimately won the space race: the US bet on binary computing and the Soviets bet on ternary computing. The Soviets lost.


I always thought the information density advantage of ternary disappeared once you put it in the real world:

- harder to discriminate values as you pack more in the same dynamic range

- more susceptible to noise

- requires lower clock rates for (wired) transmission

- requires more complex protocols for (wireless) transmission

- representing 0 or 1 as switched states uses "no" power, but getting a switch to be "halfway on" uses a lot of power


But most transmission systems use multi-level signals. Gigabit Ethernet uses PAM-5, the latest versions of PCIe use PAM-4, and USB 4 uses PAM-3. Not to mention the crazy stuff like QAM-256 and higher.


This doesn't make sense to me. You don't have to use negative voltages to encode ternary. You can just use three different positive voltages, if you like. 0V = 0, +3V = 1, +6V = 2.


The main problem is that if you are minimizing the voltage to the minimum that can be safely distinguished for binary, you must, by necessity, be introducing twice that voltage to introduce another level. You can't just cut your already-minimized voltage in half to introduce another level; you already minimized the voltage for binary.

50 years ago this may not have been such a problem, but now we care a lot, lot more about the power consumption of our computing, and a lot of that power consumption is driven by voltage (IIRC often super-linearly), so a tech that requires us to introduce additional voltage levels pervasively to our chips is basically disqualified from the very start. You're not sticking one of these in your server farm, phone, or laptop anytime soon.
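
For a rough sense of the scaling involved, here's a sketch using the standard dynamic-power approximation (the numbers are purely illustrative, not from any real process):

    # P_dyn ~ alpha * C * V^2 * f
    def dynamic_power(alpha, C, V, f):
        return alpha * C * V**2 * f

    base    = dynamic_power(0.1, 1e-15, 0.9, 3e9)   # binary swing
    doubled = dynamic_power(0.1, 1e-15, 1.8, 3e9)   # doubled swing for a third level
    print(doubled / base)  # 4.0 -- quadratic in voltage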


Indeed, that's how nearly all NAND flash works nowadays. Early SLC media was binary, with each cell set to a low or high voltage, but as density increased they started using more voltages in between to encode multiple bits per cell. The current densest NAND uses 16 different positive voltage states to encode 4 bits per cell.


So each cell is a nibble. That's cool


> The current densest NAND uses 16 different positive voltage states to encode 4 bits per cell.

Wait, what?! I did not know this. Is there a link where I can learn more?


https://www.anandtech.com/show/5067/understanding-tlc-nand

That's an older article from when TLC (3bit) NAND launched, but the principles are the same for QLC (4bit).

Nowadays SLC and MLC are barely used because TLC is good enough for nearly all purposes.


Most "SLC" and "MLC" that's sold is actually TLC/QLC hardware that's only using a subset of the voltage levels available. It ends up being significantly cheaper due to the economies of scale in manufacturing.


Yep, if you ever come across the term "pSLC" (Pseudo-SLC), that means the underlying hardware is capable of running in MLC/TLC/QLC mode, but the controller firmware has been configured to only use the lowest and highest voltage state in each cell.

Some SSD controllers will also reassign regions of flash to different modes on the fly: the drive's unused capacity can be used internally as a very fast pSLC write cache, and then as the drive fills up those cells incrementally get switched over to the native TLC/QLC mode.


Does this mean that in some cases damaged QLC "sectors" could still be used as SLC?


> Nowadays SLC and MLC are barely used because TLC is good enough for nearly all purposes.

This is very interesting. Thank you.



Very interesting. I would have thought the overhead from the memory controller would negate all savings, but I know very little about modern cell design.


> overhead from the memory controller

If you want to do it in a single step, you need 8 analog comparators at the output of the memory, and one level of "and" and "or" gates to resolve each bit.

Most ADCs use a single comparator + OpAmp and convert the value in 3 steps. But that would make your memory slower.

Either way, the task of converting it does not fall on the controller.


Voltages are always measured relative to each other. In the OP's example, -3V to +3V has a 6V difference just as 0V to 6V does, and the arcing is the same.

The OP didn't specify any particular voltage, but you get the idea. You need more voltage between the highest and lowest states to differentiate the signals than in binary. It can work well, but only in circuits where there's already very low leakage (flash, mentioned in another reply, is a great example).


While true, whether a voltage is negative is very relevant in a semiconductor system, because any P-N junction is a diode. So your ability to propagate current (and thus downstream voltages) does depend on the voltages all pointing the right direction.

note: I am aware that strictly speaking a CPU isn't just transistors, but it is still a lot of variously doped silicon.


Yes, but then you have to use much more complex electronics and tighter production tolerances. You'd need to either distribute a reference voltage for the intermediate level all over the board, which makes it essentially the same system as with negative voltage but with the third wire becoming ground (the same concept, just a worse implementation), or make circuits able to discriminate between two different levels, which will be both difficult to implement and lead to enormous energy waste, as part of your transistors will have to be half open (kinda similar to ECL logic, but worse).


That's my understanding as well. Voltages are relative, you are free to choose a "ground" and work with negatives or not if you want.


Practically, I think it is convenient if your ground is the third little round prong on the power cord.

I wonder if this is why they suggested a negative voltage. Even though voltages are secretly relative under the hood, it seems like it could simplify things to have two directionally different voltages.


Many reasons. For example, using negative voltage will reduce the DC component in the wires, which will improve reliability over long lines, as now all you need is to sense the polarity of the signal, not the level. You'd also need a high-power reference-voltage wire (for "1") going all over the board, which will be nastily polluted with uncorrelated switching noise, will sag in an uncorrelated way with respect to the "2" (Vcc) wire, etc.


Well, this is stuff I read 40 years ago about tech nearly 30 years prior!


They might not have had the third prong back then :)


Except in the analog world it's not so clear; you can't just say +3V = 1. What if it's 3.7V? Or 4.5V? Early tools weren't that accurate either, so you needed more range to deal with it.


It should be stated as ranges for clarity. It's never an absolute voltage (quantum mechanics itself won't round that nicely). Although this is also true of binary: >x volts = 1, otherwise 0. Same thing in ternary, just with 3 ranges.


Yeah, at some point you have to deal with the transition between the clean conceptual world of digital computing and the fact that a circuit can't instantly transition between 0V and 3.3V/5V/whatever level your signal operates at.


> At a certain point the - and + circuits cross and arc and create a fire.

That's not unique to ternary circuits. That's just how voltage differential of any kind works.

The trick is figuring out how many states you can reliably support below the maximum voltage differential the material supports. As we reach the physical limits of miniaturization, "two states" is almost certainly not going to remain the optimal choice.


I am extremely doubtful about your last claim; is there work being done in that direction that you can point to? Don't get me wrong, it would really be exciting if we could get actual efficiencies by increasing the number of states, but all the experts I have talked to so far are very pessimistic about the possibility. The problems introduced by ternary circuits seem to offset any claimed efficiency.


Does MLC flash count as a limited example?


I think this sentence from the Space Race Wikipedia article is funny:

> The conclusion of Apollo 11 is regarded by many Americans as ending the Space Race with an American victory.


"Winning the space race" is a rather woolly concept and depends on your definitions. Although NASA did land astronauts on the moon, the Soviets had firsts in most of the main areas relevant today (first satellite, first astronaut, first space station, first landing on another planet, etc etc.).

https://en.m.wikipedia.org/wiki/Timeline_of_the_Space_Race


It's easier to win the space race if you are allowed to define your own winning criterion/criteria. :-)


Don't disagree with you, but, so far, the US is the only country to land people on the moon - and they first pulled that feat off 55 years ago!

Of course, it's not clear whether they could pull that feat off right now, today, but they did pull it off and nobody else has. Suddenly nobody remembered the Soviet accomplishments of first satellite in space, first man in space and so forth. All they remember and care about is first man to the moon.

After all, America does a great job of marketing herself! :)


America landing on the moon signaled the end of the "space race". The soviets could have pushed more money into it and landed one of their cosmonauts on the moon, but they just gave up because to them "the race" was lost, and not worth putting so much more money to come in second-place. All their other "firsts" were like trial/qualifying runs for the big race of landing on the moon.


That's pretty much the way I understand it. I'm sure the Soviets could have landed a cosmonaut on the moon in the early-to-mid 70s, but why bother? It would actually look lame! Hey! We can put men on the moon, too!

There is an effort to put the first woman on the moon, but as far as I can tell, the US is the only country concerned with that. Too bad, because that title has been there for the taking for over 50 years now.


The key is to pick a winning criteria that is sufficiently meaningful looking and then really sell the heck out of it.

There were lots of good arguments for the Soviet space program having been ahead, lots of landmarks they hit first. But, ya know, the dude standing on the moon couldn’t hear these arguments.


> The key is to pick a winning criteria that is sufficiently meaningful looking and then really sell the heck out of it.

Exactly. :-)


Being 1st to anything is considered a significant metric; it's simply a matter of picking the larger tasks and working your way down to what's possible to find effective goals.


> At a certain point the - and + circuits cross and arc and create a fire.

To do binary logic we use CMOS. The reason CMOS gets hot is that the complementary transistors don't switch at the same time. So, at a certain point, the Vss and Vdd circuits connect and create a massive current drain.

> The binary circuits kept getting miniaturized.

Sure.. but they're not getting much faster.


There are three loss mechanisms for CMOS: a) leakage, b) crossing (shoot-through) current, c) ohmic losses from the currents required to charge/discharge capacitances (of the gates etc.).

Pretty sure c) dominates for high frequency/low power applications like CPUs, as it's quadratic.


I think it is yet another "bears walking on Red Square" level of claim (I mean about ternary systems). There was only one minor ternary computer produced by the USSR ("Setun"); it has never been a big thing.


SETUN itself was an electronically binary machine that used bit pairs to encode ternary digits[1].

In support of your point, of the Soviet computers surveyed in the cited article, six were pure binary, two used binary-coded decimal numerics, and only SETUN was ternary[2].

[1] [Willis p. 149]https://dl.acm.org/doi/pdf/10.1145/367149.1047530#page=19

[2] [Willis p. 144]https://dl.acm.org/doi/pdf/10.1145/367149.1047530#page=14

[Willis] Willis H. Ware. 1960. Soviet Computer Technology–1959. Commun. ACM 3, 3 (March 1960), 131–166. DOI:https://doi.org/10.1145/367149.1047530


Complete fabrication, with blatant disregard for physics and electronics.

Many modern CPUs use different voltage levels for certain components, and everything works fine.


>> Many modern CPUs use different voltage levels for certain components, and everything works fine.

But none of them use more than 2 states. If you've got a circuit at 0.9V or one at 2.5V they both have a single threshold (determined by device physics) that determines the binary 1 or 0 state and voltages tend toward 0 or that upper supply voltage. There is no analog or level-based behavior. A transistor is either on or off - anything in the middle has resistance and leads to extra power dissipation.


As mentioned by another comment, NAND has multiple voltage levels.

   - Single-level cell (SLC) flash: One bit per cell, two possible voltage states
   - Multi-level cell (MLC) flash: Two bits per cell, four possible voltage states
   - Triple-level cell (TLC) flash: Three bits per cell, eight possible voltage states
   - Quad-level cell (QLC) flash: Four bits per cell, 16 possible voltage states


NAND flash is so-named because of its physical resemblance to a NAND gate, but I don't think it actually functions as a NAND gate.

Put another way, is it possible to feed two 16-level signals (X and Y) into a QLC and get a 16-level result back out of it (Z), where Z = X NAND Y, and if so, is it significantly faster, smaller, or less power-hungry than 4 conventional NAND gates running in parallel? I don't think so.

As it stands, NAND flash cells are only used for storage, and that's because of their high information density, not any computational benefits. Once the signals leave the SSD, they've already been converted to binary.


> is it significantly faster, smaller, or less power-hungry than 4 conventional NAND gates running in parallel?

It's not implemented in any one of these several manners just for the hell of it. Everything has tradeoffs (which vary with each manufacturing node, architecture and timing in the market price landscape). The engineering and product management teams are not throwing darts. Most of the time anyway.

Saying that it's still binary because you feed binary into the chip and get binary back is moving the goal post (imo). A multilevel subsystem in a larger one is still multilevel, an analog one is still analog, an optical one is still optical (see switching fabric recently.)

So anyway, the Russian systems did explore what could be done. Flash storage does exploit a favorable tradeoff. And so have countless university projects. Including analog neural net attempts.


The second half of that question is not relevant if the answer to the first half of the question is "no" (it wasn't rhetorical). QLC "NAND" is not a logic gate, it is a storage mechanism. It does not compute anything.

Modern magnetic hard drives use all sorts of analog tricks to increase information density but few would seriously argue that this constitutes a fundamentally different kind of computing.

Yes, QLC (etc.) is an innovation for data storage (with tradeoffs). No, it is not a non-binary computer hiding in plain sight.


Fair enough. Common examples in storage and transmission. Not so common in computing for now. The closest obvious computing example, to my mind, is analog neural net blocks meant to be integrated in digital (binary) systems. Not ternary, old school (hah!) analog.


NAND flash has multiple different charge levels. This doesn't necessarily require different voltage levels.


Isn't high speed signalling full of examples for multi level (as in, more-than-two level) signals? PCI-E's gen 6 and the various wireless standards come to mind.


At least as things stand now, these signals are only used when absolutely necessary, and no real work is done on them directly. Transmitting many bits in parallel was the original way of doing this, and would still be preferred if feasible, but timing issues arise at modern speeds over long distances (10s of cm). So the signals on one board, which are still binary and parallel, are multiplexed into a multi-level serial signal before transmission, transmitted over the serial data lines, and then received and demultiplexed on the other board from multi-level serial back into binary parallel signals. All computation (logic and arithmetic) operates on the binary signals, not the multi-level signals.


In limited, specialized domains where big chunky (comparatively) transceivers let you abuse the "actually analog" nature of the medium for higher speeds.

Not for internal logic.


> various wireless standards

Electromagnetic waves do not interact with one another, so it is difficult to build a transistor with them. There's some research into optical transistors but it doesn't seem to work well yet.


Another comment suggests that NAND chips use 8 different voltage states to encode 3 bits per cell.


Yes, and there is considerable electronics required in the readout logic to make sure that they are identified correctly, including error correction. That's not practicable for anything but storage or maybe off-chip/high-speed connections.


Actually, I have to correct myself. Charge levels, not voltage levels, and you do not necessarily need to have multiple voltage levels on the driving side to set these charge levels, you can do it by varying the write time.


For data storage, not computation.


I don't really understand that distinction?

What makes data storage inherently different on the gate level?

Solving that issue has some characteristics that make multistate voltage a good choice, but sure, those are the circumstances under which you would use it.


Not agreeing with the parent post, but the different domains in modern electronics only work because they're (nominally) isolated except for level crossing circuits.


The metric they use for "efficiency" seems rather arbitrary and looks like a theoretical mathematician's toy idea. Unfortunately real computers have to be constructed from real physical devices, which have their own measures of efficiency. These don't match.


"It's more efficient because um... the written expression is shorter"

Wait until they hear about base Googol


> A binary logic system can only answer “yes” or “no.”

Maybe I'm missing something, but this sounds like a silly argument for ternary. A ternary system seems like it would be decidedly harder to build a computer on top of. Control flow, bit masking, and a mountain of other useful things are all predicated on boolean logic. At best it would be a waste of an extra bit (or trit), and would also introduce ambiguity and complexity at the lowest levels of the machine, where simplicity is paramount.

But again, maybe I'm missing something. I'd be super interested to read about those soviet-era ternary systems the author mentioned.


I don't see anything fundamentally obvious about this (chip design and arch background). If you look at chip photographs, you see massive amounts of space dedicated to wiring compared to the "logic cells" area, if you include the routing in between the logic cell rows - if you want to look at it this way for compute vs interconnect. Nicely regular, full custom datapaths exist, but so do piles of standard cells. And massive amount of space dedicated to storage-like functions (registers, cache, prediction, renaming, whatever.) If you could 1) have logic cells that "do more" and are larger, 2) less wiring because denser usage of the same lines, 3) denser "memory" areas - well that would be a LOT! So, not saying it's an obvious win. It's not. But it's worth considering now and then. At this level the speed of conversion between binary and multi-level becomes critical also - but it's not so slow that it obviously can't fit the need.


Speaking of compute density, do people do multi-bit standard cells these days? In their standard cell libraries?

One thing we were trying way back then was standard cells that integrated one flip flop or latch and some logic function into one cell. To trade a slightly larger cell (and many more different cells) for less wiring in the routing channel.


I'm not saying it is competitive or practical but optical multi-level storage exists.

https://www.nature.com/articles/nphoton.2015.182


> Control flow, bit masking, and a mountain of other useful things are all predicated on boolean logic. At best it would be a waste of an extra bit, and would also introduce ambiguity and complexity at the lowest levels of the machine, where simplicity is paramount.

There is an even bigger mountain of useful things predicated on ternary logic waiting to be discovered. "Tritmasks" would be able to do so much more than the bitmasks we are used to, as there would be one more state to assign a meaning to. I'm not sure if the implementation complexity is something we can ever overcome, but if we did I'm sure there would eventually be a Hacker's Delight type of book filled with useful algorithms that take advantage of ternary logic.


Yes, I am sure, that's why ternary logic is such a widely studied math field compared to boolean logic /s. No really, can you give an example where ternary logic is actually considerably more useful than the log(3)/log(2) factor of information density?


I don't know of any concrete examples of "tritwise tricks" (this is not my area of expertise). But as ternary logic is a superset of boolean logic there are more possibilities available (3*3=9 different state transitions compared to 2*2=4) and some of them are bound to be useful. For example it should be possible to represent combinations of two bitwise operations as an equivalent tritwise operation (eg. x XOR A OR B where A and B are constants in binary could become x "XOROR" C in ternary) – but that feels like an example constructed by someone who is still thinking in binary. I'm certain that someone much smarter than me could come up with ternary-native data types and algorithms.
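
To make that concrete, here is a sketch over balanced-ternary digits {-1, 0, +1}, using min/max/negation as the usual three-valued analogues of AND/OR/NOT (the "fused table" bit is my own illustration of the superset point, not an established trick):

    def tand(a, b): return min(a, b)   # three-valued AND
    def tor(a, b):  return max(a, b)   # three-valued OR
    def tnot(a):    return -a          # three-valued NOT

    TRITS = (-1, 0, 1)

    # Any two-trit operation is just a 3x3 table, so a composition of
    # operations collapses into a single table, like the "XOROR" idea:
    fused = {(a, b): tnot(tor(a, b)) for a in TRITS for b in TRITS}
    print(fused[(1, -1)])   # -1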

If ternary logic has not been widely studied I assume there is a lot to be discovered still.


Boolean logic is somewhat unintuitive already, I mean we have whole college courses about it.

> At best it would be a waste of an extra bit (or trit), and would also introduce ambiguity and complexity at the lowest levels of the machine, where simplicity is paramount.

This seems backwards to me. It isn’t a “waste” of a bit, because it doesn’t use bits, it is the addition of a third state. It isn’t ambiguous, it is just a new convention. If you look at it through the lens of binary computing it seems more confusing than if you start from scratch, I think.

It might be more complex, hardware-wise though.


> I mean we have whole college courses about it.

Doesn't this have more to do with the fact that it's not part of the standard math curriculum taught at the high school level? I'm no math wiz and discrete math was basically a free A when I took it in college. The most difficult part for me was memorizing the Latin (modus ponens, modus tollens - both of which I still had to lookup because I forgot them beyond mp, mt).

Being a college course doesn't imply that it's hard, just that it's requisite knowledge that a student is not expected to have upon entering university.


I agree with you - I found the course to be a snooze fest but I had delayed taking it til my ~3rd year of school.


It's been a while since I read some of this book (and enjoyed it), and I remember it referring to ternary computing in the Soviet era, but it might be right up your street: https://mitpress.mit.edu/9780262534666/how-not-to-network-a-...


I would think there wouldn't be much of a difference because the smallest unit you can really work with on modern computers is the byte. And whether you use 8 bits to encode a byte (with 256 possible values) or 5 trits (with 243 possible values) shouldn't really matter?


3 fewer lanes for the same computation. FWIW 8 bits is the addressable unit. Computers work with 64 bits today; they actually mask off computation to work with 8 bits. A ternary computer equivalent would have 31trits (the difference is exponential - many more bits only add a few trits). That means 31 conductors for the signal and 31 adders in the ALU rather than 64. The whole CPU could be smaller with everything packed closer together enabling lower power and faster clock rates in general. Of course ternary computers have more states and the voltage differences between highest and lowest has to be higher to allow differentiation and then this causes more leakage which is terrible. But the actual bits vs trits itself really does matter.


> A ternary computer equivalent would have 31trits

I think you mean 41, not 31 (3^31 is about a factor of 30000 away from 2^64).

The difference in the number of trits/bits is not exponential, it's linear with a factor of log(2)/log(3) (so about 0.63 trits for every bit, or conversely 1.58 bits for every trit).
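
The arithmetic, as a quick check (trivial sketch):

    import math
    print(math.ceil(64 * math.log(2) / math.log(3)))  # 41 trits to cover 2^64 values
    print(math.log(3, 2))                             # ~1.585 bits of information per trit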

> ternary computers have more states and the voltage differences between highest and lowest has to be higher to allow differentiation and then this causes more leakage which is terrible

Yes -- everything else being equal, with 3 states you'd need double the voltage between the lowest and highest states when compared to 2 states.

Also, we spent the last 50 years or so optimizing building MOSFETs (and their derivatives) for 2 states. Adding a new constraint of having a separate stable state *between* two voltage levels is another ball game entirely, especially at GHz frequencies.


> because the smallest unit you can really work with on modern computers is the byte

Absolutely not, look e.g. at all the SIMD programming where bit manipulation is paramount.


> Surprisingly, if you allow a base to be any real number, and not just an integer, then the most efficient computational base is the irrational number e.

Now I'm left with an even more interesting question. Why e? The wikipedia page has some further discussion, hinting that the relative efficiency of different bases is a function of the ratio of their natural logarithms.


The "area" that you want to minimize is the number of digits, d, times the base, b.

    A = d * b
d is roughly equal to the log of the number represented, N, base b.

    d ~= ln(N)/ln(b)
Substituting,

    A ~= b * ln(N) / ln(b)
Take the derivative of the area with respect to b and find where the derivative is zero to find the minimum. Using the quotient rule,

    dA/db = ln(N) * (ln(b)*1 - b/b) / ln(b)^2

    0 = ln(N) * (ln(b) - 1) / ln(b)^2

    0 = ln(b) - 1

    ln(b) = 1

    b = e
I hope I got that right. Doing math on the internet is always dangerous.
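
A quick numerical sanity check of the result (sketch):

    import math
    N = 10**6
    for b in (2.0, math.e, 3.0, 4.0, 10.0):
        print(b, b * math.log(N) / math.log(b))
    # the "area" b * log_b(N) is smallest at b = e, with 3 the closest integer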


> Doing math on the internet is always dangerous.

Only if you see correctness as a thing to be avoided. In my experience, being Wrong On The Internet is the fastest way to get something proofread.


Although the cost function multiplies the base by the floor of the log of the value with respect to that base, plus one, area is a misleading analogy to describe the applicability, as any geometric dimensional value has to be taken with respect to a basis. For a visual, (directional) linear scaling is more in line, so to say.


A related point is comparing x^y vs y^x, for 1 < x < y.

It can be easily shown that the "radix economy" described in the article is identical to this formulation by simply taking the logarithm of both expressions (base 10 as described in the article, but it doesn't really matter, as it's just a scaling factor; this doesn't change the inequality since log x is monotonically increasing for x > 0): y log x vs x log y. Or, if you want to rearrange the terms slightly to group the variables, y / log y vs x / log x. (This doesn't change the direction of the inequality, as when restricted to x > 1, log x is always positive.) If you minimize x / log x for x > 1, then you find that this minimum value (i.e. best value per digit) is achieved at x=e.

(Choosing the base = e for calculation purposes: take a derivative and set to zero -- you get (ln x - 1) / (ln x)^2 = 0 => ln x - 1 = 0 => ln x = 1 => x = e.)

For some intuition:

For small x and y, you have that x^y > y^x (consider, for instance, x=1.1 and y=2 -- 1.1^2 = 1.21, vs 2^1.1 is about 2.14). But when x and y get large enough, you find the exact opposite (3^4 = 81 is larger than 4^3 = 64).

You might notice that this gets really close for x=2 and y=3 -- 2^3 = 8, which is just barely smaller than 3^2 = 9. And you get equality in some weird cases (x=2, y=4 -- 2^4 = 4^2 = 16 is the only one that looks nice; if you consider 3, its pairing is roughly 2.47805).

It turns out that what really matters is proximity to e (in a weird sense that's related to the Lambert W function). You can try comparing e^x to x^e, or if you want, just graph e^x - x^e and observe that it's greater than 0 for x != e.

https://www.wolframalpha.com/input?i=min+e%5Ex-x%5Ee


> To see why, consider an important metric that tallies up how much room a system will need to store data. You start with the base of the number system, which is called the radix, and multiply it by the number of digits needed to represent some large number in that radix. For example, the number 100,000 in base 10 requires six digits. Its “radix economy” is therefore 10 × 6 = 60. In base 2, the same number requires 17 digits, so its radix economy is 2 × 17 = 34. And in base 3, it requires 11 digits, so its radix economy is 3 × 11 = 33. For large numbers, base 3 has a lower radix economy than any other integer base.

I thought that was interesting so I made (well, Claude 3.5 Sonnet made) a little visualization, plotting the radix efficiency of different bases against a range of numbers:

https://paulsmith.github.io/radix-efficiency/radix_effciency...
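
And for anyone who wants the raw numbers without the chart, the metric from the quote is a few lines (a sketch that counts digits by integer division to avoid floating-point surprises):

    def radix_economy(n, b):
        d = 0
        while n:
            n //= b
            d += 1
        return b * d

    print([(b, radix_economy(100_000, b)) for b in (2, 3, 4, 10)])
    # [(2, 34), (3, 33), (4, 36), (10, 60)]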


Base 4 is surprisingly competitive, but of course never better than base 2. Base 5 is the highest base that could stand at the Pareto frontier, but just once and then never more.


I am feeling extremely uncomfortable seeing people in this thread be absolutely unfamiliar with basic electronics and basic CS fundamentals.

A ternary system has a very limited energy-efficiency benefit compared to binary - roughly 1.5x more efficient - and it is a lot more difficult to transmit over differential lines. Today the latter is a big concern.


I would like to become more familiar with such things, but my CS education was lacking in this regard. It was almost entirely geared towards programming, and none of these things come up in my career.

I suspect this is widespread.


> A system using ternary logic can give one of three answers. Because of this, it requires only one query: “Is x less than, equal to, or greater than y?”

So what does the answer to that query look like? I get a trit which is -1 for x < y, 0 for x == y, and 1 for x > y? It would make more sense for the query to be "Is x less than or equal to y?" so that I can get back a true/false value and jump to the next instruction as appropriate.

This of course raises a lot of questions as to how to program a ternary computer.


The 'if' construct (or equivalently, conditional jump) is inherently tied to binary. On a ternary computer, the natural conditional would have three branches, not two.


In some ways this seems like it might be very useful. I can easily see an if-statement with true/false/error cases, or less-than/equal-to/greater-than cases.
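
For illustration, a sketch of how a single three-valued comparison result could feed a three-way branch, written in ordinary Python (3.10+ for match) since that's what we have; the function names are just for the example:

    def compare(x, y):
        return (x > y) - (x < y)   # -1, 0, or +1: one "trit" of information

    def classify(x, y):
        match compare(x, y):
            case -1: return "less"
            case 0:  return "equal"
            case 1:  return "greater"

    print(classify(3, 5), classify(4, 4), classify(7, 2))  # less equal greater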


What percentage of real-world if cases would benefit from it, in the sense that the three branches are actually different? I would say it's a small number. Yes, there are range checks which could benefit, but more often than not, the two extreme ranges are treated the same, as an error condition.

My guess is that most hot conditional jumps are loop returns. That's certainly binary.


This article reads as something trying to make the case for ternary without knowing anything about it.

Being able to store 0-8 numbers in 2 trits instead of 0-7 numbers in 3 bits is not a value added.

The comparison (</=/>) is the only real advantage they mentioned, but the way that might provide an advantage is if it's a single trit register within a primarily binary system. I.e. your (binary) comparison instruction (comparing two binary numbers) drops the result in the single-trit register, and then your (binary) jump instructions jump to a (binary) location based on that trit register. Point is, there's no benefit to having the entire system be ternary just for one ternary value. This is a semantic benefit, because the operation really does have 3 possible outcomes, but there are potential energy efficiency downsides to this.

A bigger benefit might be representing unknowns or don't-cares--notably numbers in this system are still binary, but some bits are unknown or don't matter. In this case, you can actually make some energy efficiency gains, especially in the context of don't cares, because you can simply jam whatever voltage is most convenient into that "bit"--it doesn't even have to match a voltage range. But I'm not entirely sure that it's accurate to call those systems "ternary".


If you want more insight into what happens these days, see also crazy modulation schemes like XPIC (cross-polarization interference cancelling), which is something like polarization-division multiplexing, and countless phase and amplitude transmission modulation schemes (MLT-3, up to 1024-QAM). Yes, "computation" is not done in that medium (except, really, for the complex demodulation process), but it is still in that field. Everyone is pushing hard to use what physical substrate and ingenuity they have on hand.

Binary computing if anything has a massive advantage in the accumulation of technology underlying it. Hard to beat. But it's still worth considering now and then as it might bring you an advantage in one function or another. Currently within a chip, obviously storage density and transmission density - which doesn't mean you shouldn't keep it in mind for other functions.


The benefits of ternary computing are best demonstrated by example. A simple one:

A waiter walks up to a table of two and asks “is everybody ready to order?”. One of them responds, “I’m not sure”. Then the other says, “Now we are”.

(Edit: I didn’t really ask a question, so it may not seem like a riddle. To some people, when they imagine it, this scene makes perfect sense, so it’s not much of a riddle to them. To others, it’s sort of bizarre, and so the “question” - how is this possible, or what happened - is obvious. Then you can consider the answer. In any case, this is a very simple demonstration of ternary logic, and much more complicated riddles exist that all more or less build off of the same mechanism.)


> One of them responds, “I’m not sure”.

In practice this answer is hardly different from saying Yes, since it's understood that the person saying Yes is just speaking for themselves and the waiter will wait for the other person to confirm.


Isn’t that the entire point, that it means “I’m ready but I don’t know about the other person”? If the first person was not ready they would say “no” because they know that they can’t possibly both be ready. Since the first person says “I’m not sure” and not “no” the second person can infer that the first person is ready, and since the second person is ready they can answer yes for both of them.


This point is brought up a lot, but it doesn’t account for all the details. The second person says “now we are”, immediately after the first said “I don’t know”. The “in practice” example explains the first statement, but not both of them.


Base e is optimal under a certain metric, and 3 is closest to e.

https://en.m.wikipedia.org/wiki/Non-integer_base_of_numerati...


Thought I would link to the Wikipedia page for ternary computers[0].

[0]: https://en.wikipedia.org/wiki/Ternary_computer


Binary is a local maximum in terms of entropy vs complexity. Going from one state per element to two states per element is an increase from 0 to 1 bits: an ∞% increase. Going from two to three states increases it to 1.585 bits per element, or a 59% increase.

Cheeky math aside, to get signalling logic to work with ternary, especially when clock rates go so high that the signal starts to look less like square wave pulses and more like a modulated sine (due to slew), you need much more sophisticated decoders.


Yes and / but you now have "long distance" cross-chip busses where driving and receiving from that internal network is part of the cost of doing business.


Base-4 (in the form of PAM-4) is already used for high speed data transmission. But IMO given how "analog" digital signals become at high speed, using anything but binary for compute logic seems like a fruitless battle.

https://blog.samtec.com/post/understanding-nrz-and-pam4-sign...


No discussion of tristate logic?

Tristate logic is how computer busses have been made for about the last half century.

I'd hardly call USB unpopular ...


Good point.

Claim: Three-state logic can be advantageous in a digital system with a shared bus. The third state, Hi-Z, effectively disconnects a device from the bus. It allows multiple devices to share a single line efficiently.

Does the above claim overlook anything fundamental about where tristate logic shines?

There are alternative ways to design buses, but my EE-fu is not particularly current (pun intended). If others want to weigh in, please go ahead... e.g. open drain, multiplexing, TDMA, etc.


I2C remains open collector today.

The issue is that open collector is slow. It's hard to make a bus function above 1MHz and it won't go much beyond that. Sufficient for your typical temperature/ADC/whatever peripheral even today, but 1MHz is not very fast.

10kOhm pull-up will pull the bus back up to 3.3V or whatever your I2C operates at.

Let's say each device on the bus adds 15pF of capacitance.

At 6 devices, we are at 10kOhm * 6 * 15pF * 2.2 rise time == 2,000,000 ps rise time, aka 2us rise time aka 500kHz or somewhere in that neighborhood. (2.2 * RC constant gives the 80% rise time of a resistor-capacitor system. Capacitances in parallel add together. One 10kOhm pull-up resistor).
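
The same back-of-the-envelope number as a snippet, for anyone checking along (sketch):

    R = 10e3            # 10 kOhm pull-up
    C = 6 * 15e-12      # six devices at ~15 pF each
    print(2.2 * R * C)  # ~2e-06 s, i.e. ~2 us, so only a few hundred kHz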

-------

TriState buffers fix that by allowing a far greater drive of current into the bus.

Current is the easiest way to fight back against fan-out * capacitance. Every chip will add parasitic capacitance and it actually adds up to real issues in practice.


The third state there is indeterminate and a read during that period is equally likely to produce a 0 or a 1.

Logic /levels/ are what matters, not bus state.

You can even see this in how it's implemented. You typically have an OUT line, and an OUT_ENABLE line. This is really just 2 bits.
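
To make the OUT / OUT_ENABLE point concrete, here is a toy bus-resolution model (a sketch, not how any particular simulator or standard does it):

    # Each driver is an (out, out_enable) pair; the bus resolves to
    # '0', '1', 'Z' (nobody driving) or 'X' (contention -- the magic-smoke case).
    def resolve(drivers):
        active = [out for out, enable in drivers if enable]
        if not active:
            return "Z"
        if all(v == active[0] for v in active):
            return str(active[0])
        return "X"

    print(resolve([(1, True), (0, False)]))   # '1'  -- one driver active
    print(resolve([(1, False), (0, False)]))  # 'Z'  -- bus released
    print(resolve([(1, True), (0, True)]))    # 'X'  -- two drivers fighting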


Electricity has no 'Logic Levels' to it. It's all voltages and currents.

If DeviceA tries to set a line to 5V while DeviceB tries to set the same line to 0V, it doesn't matter what you think is going on in abstract computer land. There is likely some magic smoke getting let out somewhere.

------

EEs paper over these details by defining lines to be OpenCollector (ex: 8051 chips) where the high voltage 5V is driven high by a 10kOhm pull-up resistor but otherwise no one is trying to push current (the 8051 has only uA of current push supported).

But pull-down is strong. 8051 had IIRC 2mA of pulldown, sufficient to 'Beat' the 10kOhm resistor (which is pushing maybe 20uA).

--------

TriState logic allows for greater currents to push onto the line. You can now push 2mA or 10mA of current to drive the line to 5V which leads to faster communication.

Just with TriState, because of the greater currents involved, you must be aware of the fire hazard and try not to have a 5V drive at the same time as someone else's 0V.

------

EEs will pretend that these voltages and currents are 0 or 1 for the computer programmers. But there is actually a lot going on here.

This is why TTL vs CMOS vs RTL vs Diode Logic vs whatever is a thing. EEs are calculating the currents and voltages necessary to build up the 0 vs 1 abstraction. But it only works within the specs of the electricity / logic chosen. TTL has different specs than RTL, which has different specs than CMOS.


> Electricity has no 'Logic Levels' to it.

Yes, but transceivers do.

> This is why TTL vs CMOS vs RTL vs Diode Logic vs whatever is a thing.

Yes, and they all have /different/ logic levels.

> the specs of electricity / logic chosen

Yes, what one might call the "logic levels."


The most important difference between TTL and CMOS is the currents involved.

I don't exactly remember the specification of TTL, but let's say it was around 1mA or at least in that magnitude.

If you had a Billion transistors (such as I dunno... A 1-Gbit chip), then that's a billion * 1mA or about a Mega-Amp of current.

It's physically impossible to run TTL current specs to build a computer today. That's why we switched to CMOS which has ~femtoamps of current instead.

--------------

When every chip draws 1mA or so just to figure out on vs off, it's very difficult to build an efficient system.

---------

5V isn't sufficient to define on vs off in TTL logic. Only if that 5V signal also had enough current to turn on a BJT do you actually get an 'On' value.

And once we start thinking currents AND Voltages, life gets a bit more complex.


Both TTL and CMOS logic define voltage levels, not currents. That is enough, btw, because if you can't drive the current you need at 5V, you will not be at 5V....

You can build TTL compatible logic with NPN/PNP -- that's where the name originally comes from. But it's hard to build CMOS compatible logic with NPN/PNP; typically the high output isn't high enough for CMOS spec. It is not hard to build TTL compatible inputs with CMOS technology, though. For example the 74HCT series, which has TTL compatible inputs but is CMOS technology, compared to the 74HC series. CMOS outputs are always compatible with TTL inputs (assuming the same supply voltage).


NPN and PNP transistors only turn on with a reasonably sized current. It's not like MOSFETs, which can turn on even with microamps.

Any NPN or PNP logic (such as TTL) inherently has current requirements.


> The most important difference between TTL and CMOS is the currents involved.

I've never heard that before. It's always been volts, it's defined in volts, every chart you find is going to specify them in volts.

> specification of TTL, but let's say it was around 1mA

Different families of TTL have different "drive strengths" and different "pin current capacity." Drive parameters are not universal to a family.

> It's physically impossible to run TTL current specs to build a computer today. That's why we switched to CMOS which has ~femtoamps of current instead.

CMOS uses complementary switching. It's in the name. Once the pair is stable nothing other than leakage current flows. You can create complementary arrangements of transistors from any given family.

If you want to then express the output of CMOS over a large line with a lot of capacitance, you'll probably just use a buffer driver sized for that line, so the actual current in your CMOS arrangement is entirely irrelevant.

> Only if that 5V signal also had enough current to turn on a BJT do you actually get an 'On' value.

Yea, then if you strap a Schottky diode into that BJT, you get something which needs far less current yet still works on the same voltage levels. So we tend to think of the voltage as the logic level and the drive current as an incidental property of the construction technique.

> And once we start thinking currents AND Voltages, life gets a bit more complex.

Yea, this is why the logic levels are an acceptable /range/ of voltages, so that we don't actually have to address this complexity. The whole point is to create a digital circuit anyways.

In any case, tri-stating a bus does not imply you have a third logic level, whether or not you want to consider that a potential or a current. It implies that you are neither pulling up nor pulling down your line. So if you read that line the value you get will be /indeterminate/; if you read it successively, the value may change apparently at random.

This has been a very long and mostly unnecessary detour of misunderstandings around the original point.


> In any case, tri-stating a bus does not imply you have a third logic level, whether or not you want to consider that a potential or a current.

The Z state of tristate logic is literally implemented as zero current (or practically zero: nanoamps or something).

To understand why zero current is important in TriState logic, you'll need to think more carefully about the interaction of voltages AND currents in circuits.

I've tried to give you some hints already. But seriously, think about what currents mean, and why a BJT requires at least some current to operate, and the implications in how circuits work.

You can go look at any 7400 TTL chip and you will ABSOLUTELY see current specifications. If you never noticed that's somewhat your fault for not seeing it. But currents AND Voltages make electricity. If you are electrically engineering something, you MUST think of both.

-------

In any case, the point is moot, as proper TTL circuits have been made completely obsolete by CMOS today.

But go through any proper BJT based SN7400 or whatever chip documentation and think why all those current values are specified everywhere.


Soon scientists will decide that base 10 beats base 3 and those ancient Egyptians were right the whole time.


So it’s memory efficient but you’d need a new type of “transistor” to support this


I have heard somewhere it is theoretically better because it is much closer to e (2.718... natural logarithm base). Anyone have an explanation that includes that as to why it is better?


Wikipedia has a nice article on this: https://en.wikipedia.org/wiki/Non-integer_base_of_numeration

Edit: layer8 below has a much better link


This one is more to the point I think: https://en.wikipedia.org/wiki/Optimal_radix_choice


You are absolutely right, your link is the way to go.



That one's good, the cost metric for figure 2 I think is what I had seen.


No, it's better because it's closer to pi (3.14...). The reason for this is that pi can be used to measure the area of a circle and circles are the most divine shape.


Has there ever been any real hardware that was ternary based?


There were some good attempts at it back in the day:

https://en.wikipedia.org/wiki/Setun


Not in the article but (pulling this out of my ass here) I wonder if ternary logic is ever so slightly more quantum resistant since modeling ternary systems using a "natural" spin-1 (-1, 0, 1) system is way less accessible vs spin-1/2 systems which "naturally" model binary operations


> Why didn’t ternary computing catch on? The primary reason was convention .. No

This article is honestly worthless; it doesn't address the actual challenges & strengths of moving to ternary or a different base system. Binary is superior mostly because the components are way more reliable and therefore can be made much, much smaller for the same price.

Trust me, if Nvidia or Snapdragon could make a chip or even a whole prebuilt computer working in base 3 that was 5% more efficient for the same price, they ABSOLUTELY would, and would get rich doing it.


except we have no space efficient transistor equivalent. what would that even look like?


I've always wondered if our own human intelligence is limited by the language we speak and the base number system we use.

E.g.

Tonal languages allows individuals to express way more than Latin based languages.

Sumerians used a Base-60 numbering system, and were exceedingly advanced in mathematics.

EDIT:

Why the downvotes?


> Tonal languages allows individuals to express way more than Latin based languages.

Not true. There was a study that showed that information density is pretty consistent across all languages, regardless of the average number of phonemes used in that language or the strategies that are employed to encode structural information like syntax. I can only assume you are referring to the density with your statement, based on the subject matter in that article as well as the fact that, given enough time and words, any language can represent anything any other language can represent.

I apologise if my terms are not exact, it's been years since I've studied linguistics. In addition, since I'm not going to dig up a reference to that paper, my claim here is just hearsay. However, the finding lines up with pretty much all the linguistic theory I learned regarding language acquisition and production, as well as theory on how language production and cognition are linked, so I was confident in the paper's findings even though a lot of the theory went over my head.


Language density is one thing but what about legibility?

How achievable is literacy?


> Tonal languages allows individuals to express way more than Latin based languages.

Do you have any evidence of this? I've never heard this claim before.


Do you speak a tonal language?

Vietnamese is one example. Having tones attached to syllables means that words and sentences are shorter. In fact, the grammar is very logical and efficient compared to baroque European ones, especially of the slavic/baltic flavour.

But. The same mechanism in Indo-European languages is used for intonation. We express sarcasm, irony, etc. this way by essentially using a single sentence-wide tone.

I have some good Vietnamese friends who explain how hard it is for them to use (and hear) a sentence-wide tone. So, say, some of the Vietnamese who speak Russian fluently always sound like they are sarcastic.

Otoh, I always had problems ordering rice with pork in Hanoi...


> Vietnamese is one example. Having tones attached to syllables means that words and sentences are shorter.

This does not entail that more can be expressed than other languages. Please see my other reply which goes into (admittedly only slightly) more detail.


Yes, sure, "expressability" is something that is hard to quantify.

Otoh, there should be some connection between grammar complexity and written culture. It is my hypothesis, but, say, a culture with a rich novel-writing tradition leads to a complication of the language's grammar. A 3-page-long sentence, anyone? One can see how Middle Egyptian literature made the underlying language more complex.


> Sumerians used a Base-60 numbering system, and were exceedingly advanced in mathematics.

I've sort of thought 60 might be a nice base to work in. Some mathematicians prefer base-12 because it works with a wide number of factors and base-60 is just adding a factor of 5 to base-12. The result would be a base with an even wider variety of factors and, ideally, fewer instances of fractional amounts.


You are not the first; here is one effort to free us with a new language.

http://www.loglan.org/

Downvotes happen; no reason needed.


There's always Ithkuil - though I hear it's a bugger to learn: https://en.wikipedia.org/wiki/Ithkuil


Its characters are beautiful. I would like to see a kanban integration of Ithkuil.

I bet that could be expressive if not legible.


Like DNA?

(CTG)


*in theory


A basic exercise in mathematical logic involves looking for minimal sets of connectives. You can base logic on just NAND. But you get NOT by joining two inputs together, so you might refuse to count that and say it is NOT and AND. Another possibility for binary logic is implication, =>, and NOT. Because (NOT A) => B is equivalent to A OR B and obviously (OR and NOT) work just as well as (AND and NOT). What though are minimal sets for ternary?

The familiar 74181 has control inputs to let the circuit summon any of the 16 operations on two bits. (A two bit operation has four possible inputs, and two possible outputs for each input so 2 ^ 4 = 16 possible operations). A similar ALU part for ternary would have two trits as input so nine possible inputs. Each would have three possible values, so 3 ^ 9 = 19683. It would need nine control inputs, each a trit. But 19683 is so much more than 16. How many gates do you need to be able to combine them to produce all possible (trit,trit)->trit gates?

Here are three gates that get you a long way mathematically. First a swap gate

-1 -> 0

0 -> -1

1 -> 1

Then a permute gate

-1 -> 0

0 -> 1

1 -> -1

This lets you generate all permutations

Then you need a "spot" gate that spots one input, say (0 0), and outputs 1, and 0 otherwise. Since you have all permutations you have all spot gates. And inverse spot gates. Does that solve the problem? It is late and I'm tired. And I can see that the mathematically minimal set is impractical.

https://commons.wikimedia.org/wiki/File:Balanced_ternary_ope...

has six operations. There are three for doing arithmetic, the units from multiplication, the units from addition, and the carry from addition (it is part of the attraction of balanced ternary that there is no carry out from multiplying digits). There are also three for doing logic, based on -1 is false, +1 is true, and basically degenerating into binary logic.

Is that how this is done? The Instruction Set Architecture (ISA) has two kinds of words, numbers in balanced ternary, and bit-rows in degenerate binary. The hardware is built with a pragmatic mixture of binary and ternary logic gates, with the ternary gates heavily used in the Arithmetic Unit, barely at all in the Logic unit, and only opportunistically in the rest of the hardware, which is mostly binary gates as usual?

I have a bizarre mad science interest in this because I see how to build AND gates with mechanical linkages, http://alan.sdf-eu.org/linkage-logic/now-with-labels.html and wonder if the same technique works for ternary gates. But what are the operation tables for ternary gates? I should attempt the specialized ones for arithmetic first.
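
As a starting point for those arithmetic tables, here is a sketch of the balanced-ternary full-adder digit/carry function over {-1, 0, +1} (my own quick derivation, so double-check it before building linkages around it):

    # Single-digit sum and carry for balanced ternary. The single-digit
    # product needs no carry table, since a*b is itself always a digit.
    def bt_full_adder(a, b, cin):
        total = a + b + cin            # ranges over -3 .. +3
        if total > 1:
            return total - 3, 1        # (sum digit, carry)
        if total < -1:
            return total + 3, -1
        return total, 0

    for a in (-1, 0, 1):
        for b in (-1, 0, 1):
            print(a, b, bt_full_adder(a, b, 0))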


The 74181 has four signals to select the operation (S0 to S3) but also two modifiers. M=1 selects logic functions and M=0 selects math functions (which will be slightly different depending on the value of carry in). So there are 3x16 = 48 possible operations, though they are not all distinct.

https://en.wikipedia.org/wiki/74181



