What is the difference between signed and unsigned variables?

I have seen these mentioned in the context of C and C++, but what is the difference between signed and unsigned variables?


Solution 1:

Signed variables, such as signed integers, allow you to represent numbers in both the positive and negative ranges.

Unsigned variables, such as unsigned integers, only allow you to represent non-negative numbers (zero and positive).

Unsigned and signed variables of the same type (such as a 16-bit int or an 8-bit byte) can represent the same number of distinct values (65,536 and 256, respectively), but an unsigned variable can represent a larger maximum value than the corresponding signed one.

For example, an unsigned byte can represent values from 0 to 255, while a signed byte can represent -128 to 127.
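
If you want to check these ranges on the machine you're compiling for, a minimal C sketch using the standard <limits.h> macros prints them directly (the exact widths of the types are implementation-defined):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* These macros report the limits for the current platform. */
        printf("signed char:   %d to %d\n", SCHAR_MIN, SCHAR_MAX);
        printf("unsigned char: 0 to %u\n", (unsigned)UCHAR_MAX);
        printf("signed int:    %d to %d\n", INT_MIN, INT_MAX);
        printf("unsigned int:  0 to %u\n", UINT_MAX);
        return 0;
    }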

The Wikipedia page on Signed number representations explains the difference in the representation at the bit level, and the Integer (computer science) page provides a table of ranges for each signed/unsigned integer type.

Solution 2:

Although the top bit is commonly referred to as a 'sign bit', the binary representation we usually use does not have a true sign bit.

Most computers use two's-complement arithmetic. Negative numbers are created by taking the one's complement (flipping all the bits) and adding one:

      5 (decimal)  ->  00000101 (binary)
      1's complement:  11111010
      add 1:           11111011  ('FB' in hex; this is -5)

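As a rough illustration in C (assuming 8-bit bytes), the same flip-the-bits-and-add-one steps can be reproduced with the bitwise NOT operator:

    #include <stdio.h>

    int main(void) {
        unsigned char x = 5;                       /* 00000101 */
        unsigned char flipped = (unsigned char)~x; /* 11111010 (one's complement) */
        unsigned char negated = flipped + 1;       /* 11111011 = 0xFB */
        printf("~5 + 1 = %02X\n", negated);        /* prints FB */
        return 0;
    }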

This is why a signed byte holds values from -128 to +127 instead of -127 to +127:

      1 0 0 0 0 0 0 0 = -128
      1 0 0 0 0 0 0 1 = -127
          - - -
      1 1 1 1 1 1 1 0 = -2
      1 1 1 1 1 1 1 1 = -1
      0 0 0 0 0 0 0 0 = 0
      0 0 0 0 0 0 0 1 = 1
      0 0 0 0 0 0 1 0 = 2
          - - -
      0 1 1 1 1 1 1 0 = 126
      0 1 1 1 1 1 1 1 = 127
      (add 1 to 127 gives:)
      1 0 0 0 0 0 0 0   which we see at the top of this chart is -128.

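To see the same wrap-around from C, you can reinterpret the bit patterns in the chart as a signed char. Note that converting an out-of-range value to a signed type is implementation-defined before C23, but on an ordinary two's-complement machine it matches the chart:

    #include <stdio.h>

    int main(void) {
        /* Bit patterns from the chart, reinterpreted as signed char.
           The out-of-range conversion is implementation-defined before C23,
           but two's-complement machines give the values shown above. */
        unsigned char patterns[] = { 0x80, 0x81, 0xFE, 0xFF, 0x00, 0x01, 0x7E, 0x7F };
        for (int i = 0; i < 8; ++i) {
            printf("%02X -> %4d\n", patterns[i], (signed char)patterns[i]);
        }
        return 0;
    }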

If we had a true sign bit, the positive and negative ranges would be the same size (e.g., -127 to +127), because one bit is reserved just for the sign. If the most significant bit were the sign bit, we'd have:

      5 (decimal) -> 00000101 (binary)
      -5 (decimal) -> 10000101 (binary)

The interesting thing in this case is that we get both a positive zero and a negative zero:

      0 (decimal) -> 00000000 (binary)
      -0 (decimal) -> 10000000 (binary)

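For comparison, here's a hand-rolled decoder for that hypothetical sign-magnitude byte (the sign_magnitude helper is purely an illustration, not anything C provides); note that both zero patterns decode to the same mathematical value:

    #include <stdio.h>

    /* Interpret a raw byte as sign-magnitude: top bit is the sign,
       the low 7 bits are the magnitude. */
    int sign_magnitude(unsigned char b) {
        int magnitude = b & 0x7F;
        return (b & 0x80) ? -magnitude : magnitude;
    }

    int main(void) {
        printf("00000101 -> %d\n", sign_magnitude(0x05)); /*  5 */
        printf("10000101 -> %d\n", sign_magnitude(0x85)); /* -5 */
        printf("00000000 -> %d\n", sign_magnitude(0x00)); /*  0 (positive zero) */
        printf("10000000 -> %d\n", sign_magnitude(0x80)); /*  0 (negative zero) */
        return 0;
    }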

We don't have a -0 in two's complement; the bit pattern that would be -0 is instead -128 (or, more generally, a negative value whose magnitude is one more than the largest positive value). We do have one in one's complement, though: the all-ones bit pattern is negative zero.

Mathematically, -0 equals 0. I vaguely remember a computer where -0 < 0, but I can't find any reference to it now.

Solution 3:

Signed variables use one bit to flag whether the value is positive or negative. Unsigned variables don't have this bit, so they can store larger numbers in the same space, but only non-negative numbers, i.e. 0 and higher.

For more: Unsigned and Signed Integers
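
A quick way to see the "same bits, different meaning" effect is to store one bit pattern and read it back as both an unsigned and a signed type. The signed result is implementation-defined before C23, but on a two's-complement machine it comes out as shown in the comments:

    #include <stdio.h>

    int main(void) {
        unsigned char u = 0xC8;          /* bit pattern 11001000 */
        signed char   s = (signed char)u;
        printf("as unsigned: %u\n", u);  /* 200 */
        printf("as signed:   %d\n", s);  /* -56 on two's-complement machines */
        return 0;
    }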

Solution 4:

Unsigned variables can only hold non-negative numbers, because they lack the ability to indicate that a value is negative.

This ability is provided by what is called the 'sign bit'.

A side effect is that, without a sign bit, they have one more bit available to represent the magnitude, roughly doubling the maximum value they can represent.
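
A minimal sketch of that doubling for plain int, using the <limits.h> macros (the exact values are platform-dependent; on typical machines UINT_MAX is 2 * INT_MAX + 1):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* On typical platforms the unsigned maximum is 2 * INT_MAX + 1,
           because the sign bit is reused as an extra value bit. */
        printf("INT_MAX:  %d\n", INT_MAX);
        printf("UINT_MAX: %u\n", UINT_MAX);
        return 0;
    }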