In C, the sizeof operator returns 8 bytes when passed 2500000000 but 4 bytes when passed 1250000000 * 2
I do not understand why the sizeof operator is producing the following results:
sizeof( 2500000000 ) // => 8 (8 bytes).
... it returns 8, and when I do the following:
sizeof( 1250000000 * 2 ) // => 4 (4 bytes).
... it returns 4, rather than 8 (which is what I expected). Can someone clarify how sizeof determines the size of an expression (or data type), and why in my specific case this is occurring?
My best guess is that the sizeof operator is a compile-time operator.
Bounty Question: Is there a run-time operator that can evaluate these expressions and produce my expected output (without casting)?
Solution 1:
2500000000 doesn't fit in an int, so the compiler correctly interprets it as a long (or long long, or whatever type it first fits in). 1250000000 does fit, and so does 2. The operand of sizeof isn't evaluated; only its type matters, and the type of int * int is int regardless of whether the mathematical result would fit, so sizeof returns the size of an int.
Also, even if the operand were evaluated, the multiplication would overflow (undefined behavior), but the expression's type would still be int, so the result would still be 4.
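To see both results side by side, here's a minimal C sketch (assuming a platform with a 4-byte int and an 8-byte long or long long, which is what the question's output suggests):

#include <stdio.h>

int main(void)
{
    /* 2500000000 doesn't fit in int, so the literal itself has a wider type */
    printf("%zu\n", sizeof(2500000000));     /* typically 8 */
    /* both operands are int, so the product's type is int; the
       operand is never evaluated, so there is no overflow here */
    printf("%zu\n", sizeof(1250000000 * 2)); /* typically 4 */
    return 0;
}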
Here's what happens when the same multiplication actually is evaluated:
#include <iostream>

int main()
{
    // both operands are int, so the multiplication overflows in int
    // before the result is ever widened to long long
    long long x = 1250000000 * 2;
    std::cout << x;
}
can you guess the output? If you think it's 2500000000, you'd be wrong. The type of the expression 1250000000 * 2 is int, because the operands are int and int, and multiplication isn't automagically promoted to a larger data type if the result doesn't fit.
http://ideone.com/4Adf97
So here, gcc says it's -1794967296, but since the overflow is undefined behavior, that could be any number. This number does fit into an int.
In addition, if you cast one of the operands to the expected type (much like you cast integers when dividing if you're looking for a non-integer result), you'll see this working:
#include <iostream>

int main()
{
    // the cast widens the left operand first, so the
    // multiplication itself is carried out in long long
    long long x = (long long)1250000000 * 2;
    std::cout << x;
}
yields the correct 2500000000.
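Since the bounty asks for a version without a cast: a type suffix on the literal does the same job, because 1250000000LL has type long long before the multiplication ever happens. A minimal C sketch:

#include <stdio.h>

int main(void)
{
    /* the LL suffix gives the left operand type long long, so the
       multiplication is carried out in long long and nothing overflows */
    long long x = 1250000000LL * 2;
    printf("%lld\n", x); /* prints 2500000000 */
    return 0;
}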
Solution 2:
[Edit: I did not notice, initially, that this was posted as both C and C++. I'm answering only with respect to C.]
Answering your follow-up question, "Is there any way to determine the amount of memory allocated to an expression or variable at run time?": well, not exactly. The problem is that this is not a very well-formed question.
"Expressions", in C-the-language (as opposed to some specific implementation), don't actually use any memory. (Specific implementations need some code and/or data memory to hold calculations, depending on how many results will fit into CPU registers and so on.) If an expression result is not stashed away in a variable, it simply vanishes (and the compiler can often omit the run-time code to calculate the never-saved result). The language doesn't give you a way to ask about something it doesn't assume exists, i.e., storage space for expressions.
Variables, on the other hand, do occupy storage (memory). The declaration for a variable tells the compiler how much storage to set aside. Except for C99's Variable Length Arrays, though, the storage required is determined purely at compile time, not at run time. This is why sizeof x is generally a constant expression: the compiler can (and in fact must) determine the value of sizeof x at compile time.
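A small sketch of what that compile-time nature buys you (the _Static_assert form is C11):

#include <stdio.h>

int main(void)
{
    int x = 0;
    /* sizeof x is an integer constant expression, so it can size a
       non-VLA array and appear in a compile-time assertion */
    char pad[sizeof x];
    _Static_assert(sizeof pad == sizeof x, "pad mirrors x");
    printf("%zu\n", sizeof pad); /* same as sizeof(int), e.g. 4 */
    return 0;
}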
C99's VLAs are a special exception to the rule:
void f(int n) {
    char buf[n];
    /* ... */
}
The storage required for buf is not (in general) something the compiler can find at compile time, so sizeof buf is not a compile-time constant. In this case, buf actually is allocated at run time and its size is only determined then. So sizeof buf is a runtime-computed expression.
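A runnable sketch of that run-time behavior:

#include <stdio.h>

void f(int n)
{
    char buf[n]; /* VLA: its size is fixed only when f is called */
    /* with a VLA operand, sizeof is computed at run time */
    printf("sizeof buf = %zu\n", sizeof buf);
}

int main(void)
{
    f(10);   /* prints: sizeof buf = 10 */
    f(1000); /* prints: sizeof buf = 1000 */
    return 0;
}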
For most cases, though, everything is sized up front, at compile time, and if an expression overflows at run time, the behavior is undefined, implementation-defined, or well-defined depending on the type. Signed integer overflow, as in 1.25 billion multiplied by 2 when INT_MAX is just a little over 2.1 billion, results in "undefined behavior". Unsigned integers do modular arithmetic and thus allow you to calculate modulo 2^k.
If you want to make sure some calculation cannot overflow, that's something you have to calculate yourself, at run time. This is a big part of what makes multiprecision libraries (like gmp) hard to write in C—it's usually a lot easier, as well as faster, to code big parts of that in assembly and take advantage of known properties of the CPU (like overflow flags, or double-wide result-register-pairs).
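For instance, here's a minimal sketch of such a run-time check for int multiplication (mul_would_overflow is an illustrative helper, not a standard function):

#include <limits.h>
#include <stdio.h>

/* returns 1 if a * b would overflow int, checked before multiplying */
int mul_would_overflow(int a, int b)
{
    if (a == 0 || b == 0)
        return 0;
    if (a > 0 && b > 0)
        return a > INT_MAX / b;
    if (a < 0 && b < 0)
        return a < INT_MAX / b; /* product is positive */
    /* mixed signs: product is negative */
    return (a < 0) ? a < INT_MIN / b : b < INT_MIN / a;
}

int main(void)
{
    printf("%d\n", mul_would_overflow(1250000000, 2)); /* 1 */
    printf("%d\n", mul_would_overflow(1000, 2));       /* 0 */
    return 0;
}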
Solution 3:
Luchian answered it already. Just to complete it: the C11 standard states (the C++ standard has similar wording) that the type of an integer literal with no suffix designating the type is determined as follows.
From 6.4.4 Constants (C11 draft):
Semantics
4 The value of a decimal constant is computed base 10; that of an octal constant, base 8; that of a hexadecimal constant, base 16. The lexically first digit is the most significant.
5 The type of an integer constant is the first of the corresponding list in which its value can be represented.
And the table is as follows:

Decimal Constant:
    int
    long int
    long long int

Octal or Hexadecimal Constant:
    int
    unsigned int
    long int
    unsigned long int
    long long int
    unsigned long long int
For octal and hexadecimal constants, even unsigned types are possible. So, depending on your platform, whichever type in the above list fits first (in that order) will be the type of the integer literal.
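A small C sketch that makes the decimal-versus-hex difference visible (assuming a 32-bit int and a 64-bit long, as on typical 64-bit Linux):

#include <stdio.h>

int main(void)
{
    /* 2147483648 doesn't fit in int, and a decimal constant never
       becomes unsigned, so it climbs to long int (8 bytes here) */
    printf("%zu\n", sizeof(2147483648));
    /* 0x80000000 is the same value, but a hex constant may become
       unsigned, so it fits in unsigned int (4 bytes here) */
    printf("%zu\n", sizeof(0x80000000));
    return 0;
}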