Why don't we have 33-bit CPUs? [closed]

I've seen 12, 14, 15, 17, 18, 20, 24, and 48-bit CPUs. But with modern VLSI technology (or is it ULSI by now?), adding more bits to the data path is not that expensive. Chip developers cram as much width onto the chip as possible, as that increases throughput with relatively little additional cost and with only a slight cycle time penalty.

Achieving more speed/throughput with a narrow data path and faster cycle time is much harder.


Unlike some quantities in a computer, such as the address width, where adding one bit doubles the amount of addressable memory (which is why memory sizes are so often powers of 2), the word length of a CPU can be any convenient value.
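
As a quick sketch of that addressing point (a toy example in C; the particular address widths in the loop are just illustrative, not drawn from any specific machine):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Each extra address bit doubles the number of addressable locations:
       n address bits -> 2^n byte addresses. */
    for (unsigned bits = 16; bits <= 20; bits++) {
        uint64_t locations = (uint64_t)1 << bits;
        printf("%u address bits -> %llu addressable bytes\n",
               bits, (unsigned long long)locations);
    }
    return 0;
}
```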

The common word lengths for processors (16, 32, and 64 bits) actually came about as multiples of 8 rather than as powers of 2 (although these particular multiples of 8 also happen to be powers of 2), 8 bits being the minimum size for a single char, itself the smallest commonly used primitive data type.

Since 8 bits offers too little precision to be very useful for numeric values (or even for extended character sets such as UTF-16), words wider than 8 bits allow much greater efficiency when working with values that need more precision. Multiples of 8 bits (the size of that smallest commonly used data type) remain the natural choice, because they let a word hold a whole number of chars (e.g. 2, 4, or 8) without leaving wasted, unused bits.
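
A small sketch of that packing argument (purely illustrative; the 33-bit entry is just the hypothetical width from the question):

```c
#include <stdio.h>

int main(void) {
    /* Word widths that are multiples of 8 hold a whole number of 8-bit
       chars with nothing left over; an oddball width such as 33 bits
       would leave bits unused. */
    unsigned widths[] = { 16, 32, 33, 64 };
    for (unsigned i = 0; i < sizeof widths / sizeof widths[0]; i++) {
        unsigned w = widths[i];
        printf("%2u-bit word: %u whole chars, %u wasted bits\n",
               w, w / 8, w % 8);
    }
    return 0;
}
```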

The Wikipedia article on words has a section, "Word size choice", with slightly more detail.