Why prefer signed over unsigned in C++?
I'd like to understand better why one would choose `int` over `unsigned`.
Personally, I've never liked signed values unless there is a valid reason for them. A count of items in an array, the length of a string, the size of a memory block, and so on cannot possibly be negative; a negative value there has no meaning. Why prefer `int` when it is misleading in all such cases?
I ask this because both Bjarne Stroustrup and Chandler Carruth gave the advice to prefer `int` over `unsigned` here (at approx. 12:30).
I can see the argument for using `int` over `short` or `long`: `int` is the "most natural" data width for the target machine architecture.
But preferring signed over unsigned has always annoyed me. Are signed values genuinely faster on typical modern CPU architectures? What makes them better?
As per requests in the comments: I prefer `int` instead of `unsigned` because...
- it's shorter (I'm serious!)
- it's more generic and more intuitive, i.e. I like to be able to assume that `1 - 2` is -1 and not some obscure huge number (see the sketch below)
- what if I want to signal an error by returning an out-of-range value?
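To make those last two bullets concrete, here is a minimal sketch of my own (not part of the original answer; `find_index` is a hypothetical helper, assuming the element count fits in an `int`):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical helper: returns the index of `value` in `v`, or -1 if absent.
// A signed return type makes -1 a natural "not found" sentinel; with an
// unsigned return type it would silently become a huge positive number.
int find_index(const std::vector<std::string>& v, const std::string& value) {
    for (int i = 0; i < static_cast<int>(v.size()); ++i) {
        if (v[i] == value) return i;
    }
    return -1;  // out-of-range value signals an error ("not found")
}

int main() {
    int a = 1 - 2;         // -1, as intuition suggests
    unsigned b = 1u - 2u;  // wraps: 4294967295 on a platform with 32-bit unsigned
    std::printf("%d %u\n", a, b);

    std::vector<std::string> names{"alice", "bob"};
    if (find_index(names, "carol") < 0) {
        std::puts("not found");  // works precisely because the result is signed
    }
}
```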
Of course there are counter-arguments, but these are the principal reasons I like to declare my integers as `int` instead of `unsigned`. This is not always true, of course; in other cases, an `unsigned` is just a better tool for the task. I am only answering the "why would anyone prefer defaulting to signed" question specifically.
Let me paraphrase the video, as the experts said it succinctly.
Andrei Alexandrescu:
- No simple guideline.
- In systems programming, we need integers of different sizes and signedness.
- Many conversions and arcane rules govern arithmetic (like for `auto`), so we need to be careful.
Chandler Carruth:
- Here are some simple guidelines:
- Use signed integers unless you need two's complement arithmetic or a bit pattern.
- Use the smallest integer that will suffice.
- Otherwise, use `int` if you think you could count the items, and a 64-bit integer if it's even more than you would want to count.
- Stop worrying and use tools to tell you when you need a different type or size.
Bjarne Stroustrup:
- Use `int` until you have a reason not to.
- Use unsigned only for bit patterns (see the sketch after this list).
- Never mix signed and unsigned.
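To illustrate the "unsigned only for bit patterns" guideline, here is a small sketch of my own (the flag names are hypothetical, not from the talk):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical permission flags: each bit is a flag, so a fixed-width
// unsigned type with well-defined shifts and wraparound is the right tool.
constexpr std::uint32_t kRead    = 1u << 0;
constexpr std::uint32_t kWrite   = 1u << 1;
constexpr std::uint32_t kExecute = 1u << 2;

int main() {
    std::uint32_t perms = kRead | kWrite;   // combine bit patterns
    perms &= ~kWrite;                       // clear a bit
    bool can_read = (perms & kRead) != 0;   // test a bit

    // A count of items, by contrast, is a quantity rather than a bit
    // pattern, so per the guideline it stays signed.
    int item_count = 42;

    std::printf("can_read=%d item_count=%d\n", can_read ? 1 : 0, item_count);
}
```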
Wariness about signedness rules aside, my one-sentence takeaway from the experts:
Use the appropriate type, and when you don't know, use an `int` until you do know.
Several reasons:
- Arithmetic on `unsigned` always yields an unsigned result, which can be a problem when subtracting integer quantities that can reasonably produce a negative value: think subtracting money quantities to yield a balance, or array indices to yield the distance between elements. If the operands are unsigned, you get a perfectly defined, but almost certainly meaningless, result, and a `result < 0` comparison will always be false (of which modern compilers will fortunately warn you).
- `unsigned` has the nasty property of contaminating arithmetic where it gets mixed with signed integers. So, if you add a signed and an unsigned value and ask whether the result is greater than zero, you can get bitten, especially when the unsigned integral type is hidden behind a `typedef` (both pitfalls are illustrated in the sketch below).
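A small sketch of my own to make both pitfalls concrete (`account_id` is a hypothetical typedef, and the printed values assume a typical platform with 32-bit `unsigned int`):

```cpp
#include <cstdio>

// Hypothetical typedef that hides the unsignedness, as warned about above.
typedef unsigned int account_id;

int main() {
    // 1) Subtracting unsigned quantities: perfectly defined, but the result
    //    wraps around instead of going negative.
    unsigned int balance = 100;
    unsigned int withdrawal = 150;
    unsigned int result = balance - withdrawal;  // 4294967246, not -50
    if (result < 0) {                // always false for an unsigned result;
        std::puts("overdrawn");      // most compilers warn about this test
    }

    // 2) Mixing signed and unsigned: the signed operand is converted to
    //    unsigned before the arithmetic, with surprising results.
    int offset = -2;
    account_id id = 1;
    if (offset + static_cast<int>(id) > 0) {
        std::puts("pure signed arithmetic: -2 + 1 is -1, so this never prints");
    }
    if (offset + id > 0) {
        std::puts("mixed arithmetic: -2 converts to a huge unsigned value, so this prints");
    }
}
```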