How to represent currency or money in C
TL;DR¹: What is an accurate and maintainable approach for representing currency or money in C?
Background to the question:
This has been answered for a number of other languages, but I could not find a solid answer for the C language.
- C#: What data type should I use to represent money in C#?
- Java: Why not use Double or Float to represent currency?
- Objective-C: How to represent money in Objective-C / iOS?
Note: There are plenty more similar questions for other languages; I just pulled a few for representative purposes.
All of those questions can be distilled down to "use a `decimal` data type", where the specific type may vary based upon the language.
There is a related question that ends up suggesting using a "fixed point" approach, but none of the answers address using a specific data type in C.
Likewise, I have looked at arbitrary precision libraries such as GMP, but it's not clear to me if this is the best approach to use or not.
Simplifying Assumptions:
Presume an x86 or x64 based architecture, but please call out any assumptions that would impact a RISC based architecture such as a Power chip or an Arm chip.
Accuracy in calculations is the primary requirement. Ease of maintenance would be the next requirement. Speed of calculations is important, but is tertiary to the other requirements.
Calculations need to be able to safely support operations accurate to the mill (one tenth of a cent) as well as supporting values ranging up to the trillions (10^12).
Differences from other questions:
As noted above, this type of question has been asked before for multiple other languages. This question is different from the other questions for a couple of reasons.
Using the accepted answer from: Why not use Double or Float to represent currency?, let's highlight the differences.
(Solution 1) A solution that works in just about any language is to use integers instead, and count cents. For instance, 1025 would be $10.25. (Solution 2) Several languages also have built-in types to deal with money. Among others, Java has the BigDecimal class, and C# has the decimal type.
Emphasis added to highlight the two suggested solutions
The first solution is essentially a variant of the "fixed point" approach. There is a problem with this solution in that the suggested granularity (tracking cents) is insufficient for mill-based calculations, and significant information will be lost to rounding.
The other solution is to use a native `decimal` class, which is not available within C.
Likewise, the answer doesn't consider other options, such as creating a struct for handling these calculations or using an arbitrary precision library. Those are understandable differences, as Java doesn't have structs, and why consider a third-party library when there is native support within the language?
This question is different from that question and other related questions because C doesn't have the same level of native type support and has language features that the other languages don't. And I haven't seen any of the other questions address the multiple ways that this could be approached in C.
The Question:
From my research, it seems that `float` is not an appropriate data type to use to represent currency within a C program due to floating point error.
What should I use to represent money in C, and why is that approach better than other approaches?
¹This question started in a shorter form, but feedback received indicated the need to clarify the question.
Never use floating point for storing currency. Binary floating point numbers cannot represent tenths or hundredths, only dyadic rationals, i.e. numbers of the form p/q where p and q are integers and q is a power of 2. Thus, any attempt to represent a cent amount other than 0, 25, 50, or 75 cents will require an approximation, and these approximations translate into vulnerabilities that can be exploited to make you lose money.
Instead, store integer values in cents (or whatever the smallest division of the currency is). When reading values formatted with a radix point, simply read the whole currency units and cents as separate fields, then multiply by 100 (or the appropriate power of 10) and add.
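A minimal sketch of that read-and-combine approach (`parse_cents` and its validation rules are my own illustration, not part of the answer):

```c
#include <stdio.h>

/* Parse a string like "10.25" into a count of cents. A real
   implementation needs stricter validation: overflow checks, the
   sign of "-0.xx" inputs, and one-digit cent fields like "10.5". */
long long parse_cents(const char *s) {
    long long dollars = 0;
    int cents = 0;
    if (sscanf(s, "%lld.%2d", &dollars, &cents) < 1)
        return 0; /* parse error; real code should report it */
    return dollars * 100 + (dollars < 0 ? -cents : cents);
}
```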
The best money/currency representation is to use a sufficiently precise floating point type like `double` on a platform that has `FLT_RADIX == 10`. Such platforms/compilers are rare¹, as the vast majority of systems have `FLT_RADIX == 2`.
Four alternatives: integers, binary (non-decimal) floating point, special decimal floating point, and a user-defined structure.
Integers: A common solution uses an integer count of the smallest denomination in the currency of choice, for example counting US cents instead of dollars. The range of integers needs to be reasonably wide: something like `long long` instead of `int`, as `int` is only guaranteed 16 bits and so may only handle about ±$327 in cents. This works fine for simple accounting tasks involving add/subtract/multiply, but begins to crack with division and the complex functions used in interest calculations, such as the monthly payment formula. Signed integer math has no overflow protection, and care needs to be applied when rounding division results: `q = (a + b/2)/b` is not good enough, as it rounds the wrong way for negative `a` (see the sketch below).
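For instance, here is a sketch of a division helper that rounds to nearest while handling negative numerators (`div_round_nearest` is an illustrative name, and it assumes `b > 0`):

```c
/* Round-to-nearest integer division, valid for negative a as well
   (assumes b > 0). Halves round away from zero; accounting rules
   may instead call for banker's rounding. */
long long div_round_nearest(long long a, long long b) {
    long long half = b / 2;
    return (a >= 0) ? (a + half) / b : (a - half) / b;
}
```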
Binary floating point: There are two common pitfalls: 1) using `float`, which is so often of insufficient precision, and 2) incorrect rounding. Using `double` addresses problem #1 well for many accounting limits. Yet code still often needs to round to the desired minimum currency unit for satisfactory results.
```c
// Sample - does not properly meet nuanced corner cases.
#include <math.h>

double RoundToNearestCents(double dollar) {
    return round(dollar * 100.0) / 100.0;
}
```
A variation on `double` is to store a `double` count of the smallest unit (0.01 or 0.001). An important advantage is the ability to round simply by using the `round()` function, which by itself meets the corner cases.
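A sketch of that variation (the price and the 8.25% tax rate are illustrative assumptions):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double price_cents = 1999.0;            /* $19.99 held as cents */
    double with_tax = price_cents * 1.0825; /* inexact intermediate */
    double exact_cents = round(with_tax);   /* snap to whole cents */
    printf("Total: $%.2f\n", exact_cents / 100.0); /* Total: $21.64 */
    return 0;
}
```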
Special decimal floating point: Some systems provide a "decimal" type other than `double` that meets decimal64 or something similar. Although this handles most of the above issues, portability is sacrificed.¹
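As an example of such a type, here is a sketch assuming GCC's `_Decimal64` extension (per ISO/IEC TS 18661-2); it will not compile where decimal floating point is unsupported:

```c
#include <stdio.h>

int main(void) {
    _Decimal64 a = 0.10DD;   /* exactly 0.10 in decimal floating point */
    _Decimal64 b = 0.20DD;
    _Decimal64 sum = a + b;  /* exactly 0.30, unlike binary doubles */
    /* printf has no portable conversion for _Decimal64, so cast for
       display (the cast may reintroduce binary rounding): */
    printf("%.2f\n", (double)sum);
    return 0;
}
```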
User-defined structure: A fixed-point struct, for example, can of course solve everything, except that it is error prone to code that much yourself, and it is work. The result may function perfectly yet lack performance.
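A minimal sketch of such a structure, holding mills (thousandths of a dollar); `money_t` and its helpers are my own illustration, and a real library would add overflow checks, division with explicit rounding rules, parsing, and formatting:

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int64_t mills; /* $1.00 == 1000 mills */
} money_t;

static money_t money_add(money_t a, money_t b) {
    return (money_t){ a.mills + b.mills }; /* no overflow check here */
}

static void money_print(money_t m) {
    /* assumes a non-negative amount for simplicity */
    printf("$%lld.%03lld\n",
           (long long)(m.mills / 1000), (long long)(m.mills % 1000));
}

int main(void) {
    money_t a = { 10250 }; /* $10.250 */
    money_t b = { 1999 };  /* $1.999  */
    money_print(money_add(a, b)); /* prints $12.249 */
    return 0;
}
```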
Conclusion: This is a deep subject and each approach deserves a more expansive discussion. The general answer is: there is no general solution, as all approaches have significant weaknesses. So it depends on the specifics of the application.
[Edit]
Given OP's additional edits, I recommend using a `double` count of the smallest unit of currency (example: $0.01 --> `double money = 1.0;`). At various points in code, whenever an exact value is required, use `round()`.
```c
// Monthly_payment() here is the answer's illustrative, undefined helper.
double interest_in_cents = round(
    Monthly_payment(0.07/12 /* monthly rate */, N_payments, principal_in_cents));
```
[Edit 2021]
¹C23 is targeted to provide decimal-based floating point. Wait until then?
My crystal ball says by 2022 the U.S. will drop the $0.01 and the smallest unit will be $0.05. I would use the approach that can best handle that shift.
Either use an integer data type (`long long`, `long`, `int`) or a BCD (binary-coded decimal) arithmetic library. You should store tenths or hundredths of the smallest amount you will display. That is, if you are using US dollars and presenting cents (hundredths of a dollar), your numeric values should be integers representing mills or millrays (tenths or hundredths of a cent). The extra significant figures will help ensure your interest and similar calculations round consistently.
If you use an integer type, make sure that its range is great enough to handle the amounts of concern.
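As a quick sanity check of that range (the constants here are just illustrative arithmetic): the question's trillions of dollars, held as mills, still fit comfortably in a 64-bit integer.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* $10^12 (one trillion) expressed in mills is 10^15, far below
       INT64_MAX (about 9.2 * 10^18). */
    int64_t trillion_dollars_in_mills = INT64_C(1000000000000) * 1000;
    printf("1 trillion dollars = %" PRId64 " mills (INT64_MAX = %" PRId64 ")\n",
           trillion_dollars_in_mills, INT64_MAX);
    return 0;
}
```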