C# 'unsafe' function — *(float*)(&result) vs. (float)(result)

Can anyone explain in a simple way the code below:

public unsafe static float sample(){    
      int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);

      return *(float*)(&result); //don't know what for... please explain
}

Note: the above code uses an unsafe function

For the above code, I'm having a hard time because I don't understand the difference between its return value and the return value of the line below:

return (float)(result);

Is it necessary to use an unsafe function if you're returning *(float*)(&result)?


On .NET a float is represented using an IEEE binary32 single precision floating-point number stored in 32 bits. The code constructs this number by assembling the bits into an int and then reinterpreting them as a float using an unsafe cast. The cast is what in C++ terms is called a reinterpret_cast: no conversion is done when the cast is performed; the bits are just reinterpreted as a new type.
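To make the "assembling the bits" step concrete, each term in the sum fills one byte of the int (a sketch; the hexadecimal comments are mine):

int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);
// 154        = 0x0000009A   (lowest byte)
// 153 << 8   = 0x00009900
// 25  << 16  = 0x00190000
// 64  << 24  = 0x40000000   (highest byte)
// total      = 0x4019999A
Console.WriteLine(result.ToString("X8")); // prints 4019999A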

IEEE single precision floating-point number

The number assembled is 4019999A in hexadecimal or 01000000 00011001 10011001 10011010 in binary:

  • The sign bit is 0 (it is a positive number).
  • The exponent bits are 10000000 (or 128) resulting in the exponent 128 - 127 = 1 (the fraction is multiplied by 2^1 = 2).
  • The fraction bits are 00110011001100110011010 which, if nothing else, almost have a recognizable pattern of zeros and ones. With the implicit leading 1 they represent approximately 1.2.

The float returned has the exact same bits as 2.4 converted to floating point and the entire function can simply be replaced by the literal 2.4f.
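As a rough check, the same decoding can be done in code (a sketch; BitConverter is used here only to do the reinterpretation without unsafe):

int bits = 0x4019999A;
int sign = (bits >> 31) & 1;                            // 0 -> positive
int exponent = (bits >> 23) & 0xFF;                     // 128 -> multiply by 2^(128 - 127) = 2
int fraction = bits & 0x7FFFFF;                         // 0x19999A
double mantissa = 1.0 + fraction / (double)(1 << 23);   // ~1.2 (implicit leading 1)
Console.WriteLine(mantissa * Math.Pow(2, exponent - 127));                // ~2.4
Console.WriteLine(BitConverter.ToSingle(BitConverter.GetBytes(bits), 0)); // 2.4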

The final bits that sort of "break the bit pattern" of the fraction are the result of rounding: 2.4 cannot be represented exactly in binary, so the infinitely repeating pattern ...00110011... is rounded to the nearest 23-bit value, turning the trailing 001 into 010.


So what is the difference between a regular cast and this weird "unsafe cast"?

Assume the following code:

int result = 0x4019999A; // 1075419546
float normalCast = (float) result;
float unsafeCast = *(float*) &result; // Only possible in an unsafe context

The first cast takes the integer 1075419546 and converts it to its floating point representation, i.e. approximately 1075419546f (the nearest value a float can represent, since a float has only 24 bits of precision). This involves computing the sign, exponent and fraction bits required to represent the original integer as a floating point number. This is a non-trivial computation that has to be done.

The second cast is more sinister (and can only be performed in an unsafe context). The &result takes the address of result, returning a pointer to the location where the integer 1075419546 is stored. The pointer dereferencing operator * can then be used to retrieve the value pointed to by the pointer. Using *&result would simply retrieve the integer stored at that location; however, by first casting the pointer to a float* (a pointer to a float), a float is instead read from the memory location, resulting in the float 2.4f being assigned to unsafeCast. So the narrative of *(float*) &result is: give me a pointer to result, assume the pointer is a pointer to a float, and retrieve the value pointed to by the pointer.
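Broken into the individual steps that narrative describes (a sketch; the variable names are mine):

unsafe
{
    int* pInt = &result;           // "give me a pointer to result"
    float* pFloat = (float*)pInt;  // "assume the pointer is a pointer to a float"
    float value = *pFloat;         // "retrieve the value pointed to" -> 2.4f
}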

As opposed to the first cast, the second cast doesn't require any computation. It just shoves the 32 bits stored in result into unsafeCast (which, fortunately, is also 32 bits).
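Printing both values shows the difference directly (the exact digits of normalCast depend on how your runtime formats floats):

Console.WriteLine(normalCast); // about 1.0754195E+09: the numeric value converted to float
Console.WriteLine(unsafeCast); // 2.4: the raw bits reinterpreted as a float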

In general, performing a cast like that can fail in many ways, but by using unsafe you are telling the compiler that you know what you are doing.


If I'm interpreting what the method is doing correctly, this is a safe equivalent:

public static float sample() {    
   int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);

   // Copy the int's raw bytes, then read those same bytes back as a float
   byte[] data = BitConverter.GetBytes(result);
   return BitConverter.ToSingle(data, 0);
}

As has been said already, it is re-interpreting the int value as a float.
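On newer .NET versions (if I remember correctly, .NET Core 2.0 and later) BitConverter can also do the reinterpretation in a single call, without the intermediate byte array:

public static float sample() {
   int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);

   // Reinterprets the int's bits as a float (no conversion, no allocation)
   return BitConverter.Int32BitsToSingle(result);
}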