How to add two NSNumber objects?

There is not really a better way, but you should avoid doing this if you can. NSNumber exists as a wrapper around scalar numbers so that you can store them in collections and pass them polymorphically alongside other NSObjects. It is not really meant for doing math: arithmetic on NSNumbers is much slower than performing the same operation on the underlying scalars, which is probably why there are no convenience methods for it.

For example:

NSNumber *sum = [NSNumber numberWithFloat:([one floatValue] + [two floatValue])];

This burns, at a minimum, 21 instructions on message dispatches, plus however much code the methods need to unbox and rebox the values (probably a few hundred instructions), all to do one instruction's worth of math.

So if you need to store numbers in dictionaries, use an NSNumber; if you need to pass something that might be either a number or a string into a function, use an NSNumber; but if you just want to do math, stick with scalar C types.
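
For instance, a minimal sketch of that advice (the values and names here are just illustrative): do the arithmetic on plain C scalars and box the result only at the point where an object is actually required:

float a = 1.5f, b = 2.5f;
float sum = a + b;                          // the math itself stays scalar
NSNumber *boxed = @(sum);                   // box once, only to store it
NSDictionary *entry = @{ @"sum" : boxed };  // e.g. for use in a collection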


NSDecimalNumber (a subclass of NSNumber) has all the goodies you are looking for:

– decimalNumberByAdding:
– decimalNumberBySubtracting:
– decimalNumberByMultiplyingBy:
– decimalNumberByDividingBy:
– decimalNumberByRaisingToPower:

...
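
For example, a short sketch of the addition method in use (the literal values are just for illustration):

NSDecimalNumber *price = [NSDecimalNumber decimalNumberWithString:@"19.99"];
NSDecimalNumber *tax = [NSDecimalNumber decimalNumberWithString:@"1.60"];
NSDecimalNumber *total = [price decimalNumberByAdding:tax]; // 21.59, exact base-10 arithmetic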

If computing performance is of interest, then convert to a C++ std::vector or the like.

I never use C arrays anymore; it is too easy to crash by using a wrong index or pointer, and it is very tedious to pair every new[] with a delete[].
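
As a rough sketch of that idea (this is Objective-C++, so the file would need a .mm extension, and the helper name is made up for the example): unbox each NSNumber once into a std::vector, then do all the math on plain doubles:

#import <Foundation/Foundation.h>
#include <numeric>
#include <vector>

// Hypothetical helper: sums an array of NSNumbers via a scalar vector.
static double sumOfNumbers(NSArray<NSNumber *> *numbers) {
    std::vector<double> values;
    values.reserve(numbers.count);
    for (NSNumber *n in numbers) {
        values.push_back(n.doubleValue); // unbox exactly once per element
    }
    // All remaining arithmetic runs on plain doubles, no message dispatch.
    return std::accumulate(values.begin(), values.end(), 0.0);
}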


You can use

NSNumber *sum = @([first integerValue] + [second integerValue]);

Edit: As observed by ohho, this example adds two NSNumber instances that hold integer values. If you want to add two NSNumbers that hold floating-point values, you should do the following:

NSNumber *sum = @([first floatValue] + [second floatValue]);

The current top-voted answer is going to lead to hard-to-diagnose bugs and loss of precision due to its use of floats. If you're doing numeric operations on NSNumber values, you should convert them to NSDecimalNumber first and perform the operations on those objects instead.
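
To see the kind of precision loss being described, consider this small illustration (the values are arbitrary, and the exact digits printed may vary by platform):

NSNumber *first = @(0.1);
NSNumber *second = @(0.2);
NSLog(@"%.17f", [first floatValue] + [second floatValue]);
// prints something like 0.30000001192092896 rather than exactly 0.3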

From the documentation:

NSDecimalNumber, an immutable subclass of NSNumber, provides an object-oriented wrapper for doing base-10 arithmetic. An instance can represent any number that can be expressed as mantissa × 10^exponent, where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from –128 through 127.

Therefore, you should convert your NSNumber instances to NSDecimalNumbers by way of the decimalValue property, perform whatever arithmetic you need, then assign the result back to an NSNumber when you're done.

In Objective-C:

NSDecimalNumber *a = [NSDecimalNumber decimalNumberWithDecimal:one.decimalValue];
NSDecimalNumber *b = [NSDecimalNumber decimalNumberWithDecimal:two.decimalValue];
NSNumber *result = [a decimalNumberByAdding:b];

In Swift 3:

let a = NSDecimalNumber(decimal: one.decimalValue)
let b = NSDecimalNumber(decimal: two.decimalValue)
let result: NSNumber = a.adding(b)