Why binary and not ternary computing? [closed]
Solution 1:
It is much harder to build components that use more than two states/levels. For example, the transistors used in logic are driven either fully off (conducting nothing) or fully on; holding them half open would demand much more precision and consume extra power. That said, more than two levels are sometimes used to pack more data, but only rarely (e.g. multi-level cells in modern NAND flash memory, or modulation schemes in modems).
If you use more than two states you need to be compatible with binary, because the rest of the world uses it. Three is out because the conversion to binary would require expensive multiplication, or division with remainder. Instead you go directly to four or a higher power of two.
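The conversion cost mentioned above can be made concrete with a minimal sketch (the function names here are illustrative, not from any particular library): a base-4 digit maps to a fixed two-bit pattern with no arithmetic at all, while base-3 digits force a multiply-accumulate over every digit.

```python
# Sketch: why a power-of-two base converts to binary cheaply while base 3 does not.

def base4_to_bits(digits):
    """Each base-4 digit maps directly to a 2-bit pattern -- no arithmetic."""
    lookup = {0: "00", 1: "01", 2: "10", 3: "11"}
    return "".join(lookup[d] for d in digits)

def base3_to_int(digits):
    """Base 3 needs a multiply-accumulate over every digit (Horner's rule);
    going the other way needs repeated division with remainder."""
    value = 0
    for d in digits:
        value = value * 3 + d
    return value

print(base4_to_bits([2, 3, 1]))      # '101101'
print(base3_to_int([2, 1, 0]))       # 21
```

The base-4 path is a pure table lookup per digit, which is why hardware that wants more than two levels jumps straight to a power of two.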
These are practical reasons why it is not done, but mathematically it is perfectly possible to build a computer on ternary logic.
Solution 2:
Lots of misinformation here. Binary uses a simple on/off switch. Ternary (also called trinary) comes in one of two modes: balanced (-1, 0, +1) or unbalanced (0, 1, 2). Either way it is not simply on or off; more precisely, it has two "on" states.
With the expansion of fiber optics and ever-cheaper hardware, ternary could actually take us to a faster, more capable state at a much lower cost. Existing binary code could still run alongside newer ternary code, at least initially (much as 32-bit software still runs on 64-bit hardware); the early hardware would just need to detect whether each incoming unit is a bit or a trit, or the software would announce it ahead of time. Each symbol could then carry one of three values instead of the current two, for the same or less power.
With fiber-optic hardware, instead of the current on/off binary process, the three states could be encoded as 0 = off, with the other two as orthogonal polarizations of light. As for security, this could be made massively more secure for the individual: each PC, or even each user, could be assigned specific polarization settings used only between that user and the destination. The same would apply to the gates in other hardware; they would not need to be bigger, just support three possibilities instead of two.
There have even been theories, and possibly the beginnings of tests, on using the Josephson effect for ternary memory cells: a circulating superconducting current that is either clockwise, counterclockwise, or off.
When compared directly, ternary is the integer base with the best radix economy (the lowest digit cost per range of values represented), followed closely by binary and quaternary.

Even some modern systems use a kind of ternary logic: SQL, for example, implements three-valued logic as a means of handling NULL field content. SQL uses NULL to represent missing data in a database. If a field contains no defined value, SQL assumes this means that an actual value exists, but that the value is not currently recorded in the database. Note that a missing value is not the same as either a numeric value of zero or a string value of zero length. Comparing anything to NULL, even another NULL, results in an UNKNOWN truth state. For example, the SQL expression City = 'Paris' resolves to FALSE for a record with "Chicago" in the City field, but it resolves to UNKNOWN for a record with a NULL City field. In other words, to SQL an undefined field could hold any possible value: a missing city might or might not be Paris. This is where ternary logic is already used within modern binary systems, albeit crudely.
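The SQL behavior described above can be sketched as three-valued (Kleene) logic, here with Python's `None` standing in for UNKNOWN; the helper names are made up for illustration.

```python
# Sketch: SQL-style three-valued logic. None plays the role of UNKNOWN.

def tv_and(a, b):
    """FALSE dominates AND; UNKNOWN propagates otherwise."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def tv_or(a, b):
    """TRUE dominates OR; UNKNOWN propagates otherwise."""
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def equals(field, value):
    """Comparing anything to NULL yields UNKNOWN, as in SQL."""
    if field is None or value is None:
        return None
    return field == value

print(equals("Chicago", "Paris"))  # False
print(equals(None, "Paris"))       # None, i.e. UNKNOWN
print(tv_or(equals(None, "Paris"), True))  # True: OR can still resolve
```

Note how `tv_or(UNKNOWN, TRUE)` still resolves to TRUE: the third value does not poison every expression, only the ones whose outcome genuinely depends on the missing data.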
Solution 3:
Of course we'd be able to hold more data per digit, just as our decimal number system can hold far more data in a single digit.
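A quick sketch makes the "more data per digit" point concrete: counting digits by repeated division (exact, no floating point) shows how many digits of each base the same number needs.

```python
# Sketch: digits required to write the same number in different bases.

def digits_needed(n, base):
    """Count the base-`base` digits of n (n >= 1) by repeated division."""
    count = 0
    while n > 0:
        n //= base
        count += 1
    return count

n = 10**6
for base in (2, 3, 10):
    print(base, digits_needed(n, base))
# 2 -> 20 digits, 3 -> 13 digits, 10 -> 7 digits
```

A larger base always needs fewer digits, which is exactly the capacity gain the answer concedes before weighing it against the added per-digit complexity.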
But that also increases complexity. Binary behaves very nicely in many cases, making it remarkably simple to manipulate. The logic for a binary adder is far simpler than one for ternary numbers (or for that matter, decimal ones).
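The adder comparison can be sketched as follows: a one-bit binary full adder reduces to XOR for the sum and a majority function for the carry, while a ternary digit adder has no such simple gate-level shortcut and amounts to genuine mod-3 arithmetic (modeled here with Python integers for illustration).

```python
# Sketch: binary full adder vs. an unbalanced-ternary digit adder.

def full_adder(a, b, cin):
    """Binary: sum is XOR of the inputs, carry-out is the majority function."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ternary_adder(a, b, cin):
    """Unbalanced ternary (digits 0..2): no cheap bitwise identity exists,
    so we fall back to real mod-3 arithmetic."""
    total = a + b + cin
    return total % 3, total // 3

print(full_adder(1, 1, 0))     # (0, 1): 1 + 1 = 0 carry 1
print(ternary_adder(2, 2, 1))  # (2, 1): 2 + 2 + 1 = 5 = 1*3 + 2
```

The binary version is three gate types wired together; the ternary version hides a modulo and an integer divide, which is the extra logic complexity the answer is pointing at.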
You wouldn't magically be able to store or process more information. The hardware would have to be so much bigger and more complex that it'd more than offset the larger capacity.
Solution 4:
A lot of it has to do with the fact that, ultimately, bits are represented as electrical impulses, and it's easier to build hardware that simply differentiates between "charged" and "no charge" and detects transitions between those states. A system using three states has to be more exact, differentiating between "charged", "partly charged", and "no charge". Besides that, the "charged" state is not constant in electronics: the energy eventually "bleeds" away, so the actual level of a "charged" state varies. In a three-state system, this would have to be taken into account too.
Solution 5:
Well, for one thing, there is no smaller unit of information than a bit. Operating on bits is the most fundamental way of treating information.
Perhaps a stronger reason is that it's much easier to make electrical components with two stable states than with three.
Aside: your math is a bit off. There are approximately 101.4 binary digits in a 64-digit ternary number. Explanation: the largest 64-digit ternary number is 3^64 − 1 = 3433683820292512484657849089280. Representing it in binary requires 102 bits: 101011010101101101010010101111100011110111100100110010001001111000110001111001011111101011110100000000
This is easy to understand: log2(3^64) = 64·log2(3) ≈ 101.4376.
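The aside's figures are easy to check with exact integer arithmetic, e.g.:

```python
import math

# Verify the aside: largest 64-digit ternary number and the bits it needs.
n = 3**64 - 1                 # 3433683820292512484657849089280
print(n)
print(n.bit_length())         # 102 bits are required
print(64 * math.log2(3))      # ~101.4376 binary digits per 64 trits
```

`int.bit_length()` sidesteps any floating-point rounding in the logarithm, so the 102-bit figure is exact.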