I have a small problem and I can't find a solution!

My code is (this is only sample code, but my original code does something like this):

float x = [@"2.45" floatValue];


for(int i=0; i<100; i++)
    x += 0.22;

NSLog(@"%f", x);

The output is 24.450001 and not 24.450000!

I don't know why this happens!

Thanks for any help!

~SOLVED~

Thanks to everybody! Yes, I solved it by using the double type!
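For reference, a minimal sketch of that fix (a double carries roughly 15 to 16 significant decimal digits versus about 7 for a float, so the accumulated error drops below what %f prints):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Accumulate in a double instead of a float.
        double x = [@"2.45" doubleValue];
        for (int i = 0; i < 100; i++)
            x += 0.22;
        // Prints 24.450000: the rounding error is still there,
        // just far below %f's six printed decimal places.
        NSLog(@"%f", x);
    }
    return 0;
}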


Solution 1:

Floats represent numbers with only a certain, finite precision. Not every value can be represented exactly in this format.

You can easily see why this must be the case: there are infinitely many real numbers in the interval (0..1) alone, but a float has only a limited number of bits with which to represent all the numbers in (-MAXFLOAT..MAXFLOAT).

More aptly put: a 32-bit integer representation covers a countable, finite set of integers, but the real numbers are uncountable, so no limited representation of 32 or 64 bits can capture them all. There is therefore a limit not only on the largest and smallest representable real value, but also on the accuracy.
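To get a concrete feel for those limits, you can print the constants the C library exposes in <float.h> (a small sketch, not from the answer itself):

#import <Foundation/Foundation.h>
#include <float.h>

int main(void) {
    @autoreleasepool {
        NSLog(@"FLT_MAX     = %e", FLT_MAX);     // largest finite float
        NSLog(@"FLT_EPSILON = %e", FLT_EPSILON); // gap between 1.0f and the next float
        NSLog(@"FLT_DIG     = %d", FLT_DIG);     // only about 6 reliable decimal digits
    }
    return 0;
}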

So why is a number with only a few digits after the decimal point affected? Because the representation is based on a binary rather than a decimal system, a different set of numbers is exactly representable: 0.22, for example, has no finite binary expansion even though its decimal one is short.
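You can see this directly by printing the question's values with more digits than %f shows (a sketch; the extra digits reveal the nearest binary values actually stored):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // The nearest float to 2.45 is roughly 2.4500000477,
        // and the nearest double to 0.22 is not exactly 0.22 either.
        NSLog(@"%.10f", 2.45f);
        NSLog(@"%.20f", 0.22);
    }
    return 0;
}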

Solution 2:

See http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

Solution 3:

Some decimal numbers cannot be represented exactly in binary floating point. This leads to inaccuracy in some digits.

It's like me asking you what 1/3 is in decimal. No matter how hard you try, you're not going to be able to tell me, because decimal can't exactly represent that number.

In just the same way, floats can't exactly represent some decimal numbers.
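A practical corollary: never test floats for exact equality; check that they agree within a small tolerance instead. A minimal sketch, with nearlyEqual as a hypothetical helper:

#import <Foundation/Foundation.h>
#include <math.h>

// Hypothetical helper: treat two floats as equal when they
// differ by less than a caller-chosen tolerance.
static BOOL nearlyEqual(float a, float b, float tolerance) {
    return fabsf(a - b) < tolerance;
}

int main(void) {
    @autoreleasepool {
        float x = [@"2.45" floatValue];
        for (int i = 0; i < 100; i++)
            x += 0.22;
        NSLog(@"exact:  %d", x == 24.45f);                   // likely 0 (false)
        NSLog(@"approx: %d", nearlyEqual(x, 24.45f, 1e-4f)); // 1 (true)
    }
    return 0;
}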