Solution 1:

CGRect frame = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);

uses float constants. (In Objective-C, as in C, the constant 0.0 is a double; putting an f on the end, as in 0.0f, makes the constant a 32-bit float.)

CGRect frame = CGRectMake(0, 0, 320, 50);

uses int constants, which the compiler implicitly converts to the floating-point parameter type (CGFloat).

In this case, there's no (practical) difference between the two.
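
To see this concretely, here is a minimal sketch (assuming a Foundation/CoreGraphics command-line program; the CGRectEqualToRect check is just illustrative):

#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

int main(void) {
  CGRect a = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
  CGRect b = CGRectMake(0, 0, 320, 50);
  /* The int constants are implicitly converted to CGFloat,
     so both rects hold exactly the same values. */
  NSLog(@"equal: %d", CGRectEqualToRect(a, b)); /* prints equal: 1 */
  return 0;
}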

Solution 2:

When in doubt, check the assembler output. For instance, write a small, minimal snippet like this:

#import <Cocoa/Cocoa.h>

void test() {
  CGRect r = CGRectMake(0.0f, 0.0f, 320.0f, 50.0f);
  NSLog(@"%f", r.size.width);
}

Then compile it to assembler with the -S option.

gcc -S test.m

Save the assembler output in test.s, remove the .0f suffixes from the constants, and repeat the compile command. Then diff the new test.s against the previous one; that should show whether there are any real differences. Too many people have a mental model of what they think the compiler does, but at the end of the day one should know how to verify such theories.
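
Assuming the snippet above is saved as test.m, the workflow looks roughly like this (the output file names are illustrative):

gcc -S test.m -o with_f.s
# edit test.m, remove the .0f suffixes, then:
gcc -S test.m -o without_f.s
diff with_f.s without_f.s

No output from diff means the generated code is identical.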

Solution 3:

Sometimes there is a difference.

#include <assert.h>

int main(void) {
  float f = 0.3;     /* OK, throws away bits to convert 0.3 from double to float */
  assert(f == 0.3);  /* fails: f is converted back to double, which does not equal
                        the double 0.3 whose extra bits were thrown away */
  assert(f == 0.3f); /* OK, comparing two floats, although == is finicky */
}
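
To see why f == 0.3 is false, print both values at full precision. A small sketch, assuming IEEE 754 single- and double-precision types (the exact digits may vary on other platforms):

#include <stdio.h>

int main(void) {
  float f = 0.3;
  /* The float holds roughly 0.30000001192092896, while the
     double literal 0.3 is roughly 0.29999999999999999. */
  printf("%.17g\n%.17g\n", (double)f, 0.3);
  return 0;
}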

Solution 4:

It tells the compiler that the constant is a float (I assume you are talking about C/C++ here). If there is no f after the number, it is treated as a double or an integer, depending on whether there is a decimal point.

3.0f -> float
3.0 -> double
3 -> integer
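
A quick way to check this is sizeof; a minimal sketch (4, 8, and 4 bytes are the usual sizes on mainstream platforms, not something the standard guarantees):

#include <stdio.h>

int main(void) {
  /* Typically prints 4 8 4: float, double, int. */
  printf("%zu %zu %zu\n", sizeof(3.0f), sizeof(3.0), sizeof(3));
  return 0;
}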

Solution 5:

The f that you are talking about is probably meant to tell the compiler that it's working with a float. When you omit the f, the constant is treated as a double.

Both are floating-point types, but a float uses fewer bits (and is thus smaller and less precise) than a double.
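
The precision difference is visible via the limits in <float.h>; a minimal sketch (6 and 15 decimal digits are the values on IEEE 754 platforms):

#include <float.h>
#include <stdio.h>

int main(void) {
  /* Decimal digits each type can reliably round-trip. */
  printf("float: %d digits, double: %d digits\n", FLT_DIG, DBL_DIG);
  return 0;
}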