Using %f to print an integer variable

Solution 1:

From the latest C11 draft:

§7.16.1.1/2

...if type is not compatible with the type of the actual next argument 
(as promoted according to the default argument promotions), the behavior 
is undefined, except for the following cases:

— one type is a signed integer type, the other type is the corresponding 
unsigned integer type, and the value is representable in both types;
— one type is pointer to void and the other is a pointer to a character type.

Solution 2:

The most important thing to remember is that, as chris points out, the behavior is undefined. If this were in a real program, the only sensible thing to do would be to fix the code.

On the other hand, looking at the behavior of code whose behavior is not defined by the language standard can be instructive (as long as you're careful not to generalize the behavior too much).

printf's "%f" format expects an argument of type double, and prints it in decimal form with no exponent. Very small values will be printed as 0.000000.

When you do this:

int x=10;
printf("%f", x);

we can explain the visible behavior given a few assumptions about the platform you're on:

  • int is 4 bytes
  • double is 8 bytes
  • int and double arguments are passed to printf using the same mechanism, probably on the stack

So the call will (plausibly) push the int value 10 onto the stack as a 4-byte quantity, and printf will grab 8 bytes of data off the stack and treat it as the representation of a double. 4 bytes will be the representation of 10 (in hex, 0x0000000a); the other 4 bytes will be garbage, quite likely zero. The garbage could be either the high-order or low-order 4 bytes of the 8-byte quantity. (Or anything else; remember that the behavior is undefined.)

Here's a demo program I just threw together. Rather than abusing printf, it copies the representation of an int object into a double object using memcpy().

#include <stdio.h>
#include <string.h>

void print_hex(const char *name, const void *addr, size_t size) {
    const unsigned char *buf = addr;
    printf("%s = ", name);
    for (size_t i = 0; i < size; i++) {
        printf("%02x", buf[i]);
    }
    putchar('\n');
}

int main(void) {
    int i = 10;
    double x = 0.0;
    print_hex("i (set to 10)", &i, sizeof i);
    print_hex("x (set to 0.0)", &x, sizeof x);

    memcpy(&x, &i, sizeof (int));
    print_hex("x (copied from i)", &x, sizeof x);
    printf("x (%%f format) = %f\n", x);
    printf("x (%%g format) = %g\n", x);

    return 0;
}

The output on my x86 system is:

i (set to 10) = 0a000000
x (set to 0.0) = 0000000000000000
x (copied from i) = 0a00000000000000
x (%f format) = 0.000000
x (%g format) = 4.94066e-323

As you can see, the value of the double is very small (you can consult a reference on the IEEE floating-point format for the details), close enough to zero that "%f" prints it as 0.000000.

Let me emphasize once again that the behavior is undefined, which means specifically that the language standard "imposes no requirements" on the program's behavior. Variations in byte order, in floating-point representation, and in argument-passing conventions can dramatically change the results. Even compiler optimization can affect it; compilers are permitted to assume that a program's behavior is well defined, and to perform transformations based on that assumption.

So please feel free to ignore everything I've written here (other than the first and last paragraphs).

Solution 3:

Because an integer 10 in binary looks like this:

00000000 00000000 00000000 00001010

All printf does is take the in-memory representation and try to present it as an IEEE 754 floating-point number. Note that "%f" actually expects a double, not a float.

There are three parts to a double-precision floating-point number (from MSB to LSB):

The sign: 1 bit
The exponent: 11 bits
The mantissa: 52 bits

Since the integer 10 contributes just the bit pattern 1010 to the low mantissa bits, and the exponent field is zero (assuming the remaining bytes happen to be zero), the result is a subnormal: a very tiny number, far smaller than the default precision of printf's floating-point format can show.

Solution 4:

The result is not defined.

I am just asking this from a theoretical point of view.

To complete chris's excellent answer:

What happens in your printf is undefined, but it could be quite similar to the code below (it depends on the actual implementation of the varargs, IIRC).

Disclaimer: The following is more "as-if-it-worked-that-way" explanation of what could happen in an undefined behaviour case on one platform than a true/valid description that always happens on all platforms.

Define "undefined"?

Imagine the following code:

int main()
{
    int i       = 10 ;
    void * pi   = &i ;
    double * pf = (double *) pi ; /* oranges are apples ! */
    double f    = *pf ;

    /* what is the value inside f ? */

    return 0;
}

Here, as your pointer to double (i.e. pf) points to an address hosting an integer value (i.e. i), what you'll get is undefined, and most probably garbage.

I want to see what's inside that memory !

If you really want to see what's possibly behind that garbage (when debugging on some platforms), try the following code, where we use a union to simulate a piece of memory in which we can write either double or int data:

typedef union
{
   char c[8] ; /* char is expected to be 1 byte wide      */
   double f ;  /* double is expected to be 8 bytes wide   */
   int i ;     /* int is expected to be 4 bytes wide      */
} MyUnion ;

The f and i fields are used to set the value, and the c field is used to look at (or modify) the memory, byte by byte.

void printMyUnion(MyUnion * p)
{
   printf("[%i %i %i %i %i %i %i %i]\n"
      , p->c[0], p->c[1], p->c[2], p->c[3], p->c[4], p->c[5], p->c[6], p->c[7]) ;
}

The function above prints the memory layout, byte by byte.

The main() function below prints the memory layout of different types of values:

int main()
{
   MyUnion myUnion ;

   /* this will zero all the fields in the union */
   memset(myUnion.c, 0, 8 * sizeof(char)) ;
   printMyUnion(&myUnion) ; /* this should print only zeroes */
                            /* eg. [0 0 0 0 0 0 0 0] */

   memset(myUnion.c, 0, 8 * sizeof(char)) ;
   myUnion.i = 10 ;
   printMyUnion(&myUnion) ; /* the representation of the int 10 in the union */
                            /* eg. [10 0 0 0 0 0 0 0] */

   memset(myUnion.c, 0, 8 * sizeof(char)) ;
   myUnion.f = 10 ;
   printMyUnion(&myUnion) ; /* the representation of the double 10 in the union */
                            /* eg. [0 0 0 0 0 0 36 64] */

   memset(myUnion.c, 0, 8 * sizeof(char)) ;
   myUnion.f = 3.1415 ;
   printMyUnion(&myUnion) ; /* the representation of the double 3.1415 in the union */
                            /* eg. [111 18 -125 -64 -54 33 9 64] */

   return 0 ;
}

Note: This code was tested on Visual C++ 2010.

It doesn't mean it will work that way (or at all) on your platform, but usually, you should get results similar to what happens above.

In the end, the garbage is just the raw data sitting in the memory you're looking at, seen as some other type.

As most types have different memory representations of their data, looking at the data as any type other than the original is bound to give garbage (or not-so-garbage) results.

Your printf could well behave like that, and thus, try to interpret a raw piece of memory as a double when it was initially set as an int.

P.S.: Note that as int and double have different sizes in bytes, the garbage gets even more complicated, but it is mostly what I described above.

But I want to print an int as a double!

Seriously?

Helios proposed a solution.

int main()
{
   int x=10;
   printf("%f",(double)(x));
   return 0;
}

Let's look at the pseudo code to see what's being fed to the printf:

   /* printf("...", [[10 0 0 0]]) ; */
   printf("%i",x);

   /* printf("...", [[10 0 0 0 ?? ?? ?? ??]]) ; */
   printf("%f",x);

   /* printf("...", [[0 0 0 0 0 0 36 64]]) ; */
   printf("%f",(double)(x));

The cast produces a different memory layout, effectively converting the integer 10 into the double 10.0.

Thus, when using "%i", it will expect something like [[?? ?? ?? ??]], and for the first printf, receive [[10 0 0 0]] and interpret it correctly as an integer.

When using "%f", it will expect something like [[?? ?? ?? ?? ?? ?? ?? ??]], but receive on the second printf something like [[10 0 0 0]], 4 bytes short. So the last 4 bytes will be random data (probably the bytes "after" the [[10 0 0 0]], that is, something like [[10 0 0 0 ?? ?? ?? ??]]).

In the last printf, the cast changed the type, and thus the memory representation into [[0 0 0 0 0 0 36 64]] and the printf will interpret it correctly as a double.