PostgreSQL: Which Datatype should be used for Currency?
Seems like the `money` type is discouraged, as described here.
My application needs to store currency amounts. Which datatype should I use: `numeric`, `money`, or `float`?
Your source is in no way official. It dates to 2011 and I don't even recognize the authors. If the money type was officially "discouraged" PostgreSQL would say so in the manual - which it doesn't.
For a more official source, read this thread in pgsql-general (from just this week!), with statements from core developers including D'Arcy J.M. Cain (original author of the money type) and Tom Lane:
Related answer (and comments!) about improvements in recent releases:
- Jasper Report: unable to get value for field 'x' of class 'org.postgresql.util.PGmoney'
Basically, `money` has its (very limited) uses. The Postgres Wiki suggests to largely avoid it, except for those narrowly defined cases. The advantage over `numeric` is performance.
`decimal` is just an alias for `numeric` in Postgres, and widely used for monetary data, being an "arbitrary precision" type. The manual:
> The type `numeric` can store numbers with a very large number of digits. It is especially recommended for storing monetary amounts and other quantities where exactness is required.
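If you want to check the alias claim yourself, a quick query (my addition, not from the thread) shows Postgres resolving `decimal` to `numeric`:

```sql
-- decimal is only an alias: Postgres reports the underlying type as numeric.
SELECT pg_typeof(1.23::decimal) AS resolved_type;  -- numeric
```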
Personally, I like to store currency as `integer` representing cents if fractional cents never occur (basically where `money` makes sense). That's more efficient than any of the other options mentioned.
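As a rough sketch of that approach (the table and column names are made up for illustration): amounts are kept as whole cents and only formatted at the edges of the application.

```sql
-- Hypothetical products table: prices stored as whole cents in a bigint.
CREATE TABLE product (
    id          bigserial PRIMARY KEY,
    name        text   NOT NULL,
    price_cents bigint NOT NULL CHECK (price_cents >= 0)
);

-- Convert to dollars only for display:
SELECT name, (price_cents / 100.0)::numeric(12,2) AS price_dollars
FROM   product;
```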
Use `numeric` with a forced precision of 2 decimal places. Never use `float` or any float-like datatype to represent currency; if you do, people are going to be unhappy when the financial report's bottom-line figure is off by plus or minus a few dollars.
The money type is just left in for historical reasons as far as I can tell.
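To see the kind of drift this warns about, here is a small comparison you can run yourself (not from the original answer); the exact float result varies, but it will not be exactly 1000:

```sql
-- Summing 0.1 ten thousand times: float accumulates binary rounding error, numeric does not.
SELECT sum(0.1::float8)  AS float_sum,    -- something close to, but not exactly, 1000
       sum(0.1::numeric) AS numeric_sum   -- exactly 1000.0
FROM   generate_series(1, 10000);
```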
Take this as an example: 1 Iranian Rial equals 0.000030 United States Dollars. If you use fewer than 5 fractional digits then 1 IRR will be rounded to 0 USD after conversion. I know we're splitting rials here, but I think that when dealing with money you can never be too safe.
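A minimal illustration of why two decimal places are not enough for such conversions (the cast targets are my own choice, not from the answer):

```sql
-- 1 IRR converted to USD survives with six fractional digits but vanishes at cent precision.
SELECT (1 * 0.000030)::numeric(20,6) AS one_irr_in_usd,    -- 0.000030
       (1 * 0.000030)::numeric(20,2) AS rounded_to_cents;  -- 0.00
```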
Your choices are:

- `bigint`: store the amount in cents. This is what EFTPOS transactions use.
- `decimal(12,2)`: store the amount with exactly two decimal places. This is what most general-ledger software uses.
- `float`: terrible idea; inadequate accuracy. This is what naive developers use.
Option 2 is the most common and easiest to work with. Make the precision (12 in my example, meaning 12 digits in all) as large or small as works best for you.
Note that if you are aggregating multiple transactions that were the result of a calculation (e.g. involving an exchange rate) into a single value that has business meaning, the precision should be higher to give an accurate macro value. Consider using something like `decimal(18,8)` so the sum is accurate, and round the individual values to cent precision for display.
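A sketch of how option 2 plus the higher-precision aggregation column might look; the table and column names here are invented for illustration:

```sql
-- Hypothetical ledger table: entered amounts exact to the cent,
-- calculated FX amounts kept at higher precision for accurate sums.
CREATE TABLE ledger_entry (
    id        bigserial     PRIMARY KEY,
    amount    decimal(12,2) NOT NULL,   -- what the user entered
    fx_rate   decimal(18,8),            -- exchange rate used, if any
    fx_amount decimal(18,8)             -- amount * fx_rate, kept at full precision
);

-- Aggregate at full precision, round only for the report:
SELECT round(sum(fx_amount), 2) AS report_total
FROM   ledger_entry;
```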
Use a 64-bit integer stored as `bigint`.
I recommend using micro-dollars (or the equivalent for your major currency). Micro means one millionth, so 1 micro-dollar = $0.000001.
- Simple to use and compatible with every language.
- Enough precision to handle fractions of a cent.
- Works for very small per-unit pricing (like ad impressions or API charges).
- Smaller data size for storage than strings or numerics.
- Easy to maintain accuracy through calculations and apply rounding at the final output.
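A minimal sketch of the micro-dollar idea, assuming 1,000,000 micro-dollars per dollar (the table and column names are hypothetical):

```sql
-- Charges stored as micro-dollars: 1,250,000 micros = $1.25.
CREATE TABLE charge (
    id            bigserial PRIMARY KEY,
    amount_micros bigint    NOT NULL
);

-- Round to cents only at the final output:
SELECT round(amount_micros / 1000000.0, 2) AS amount_dollars
FROM   charge;
```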
I keep all of my monetary fields as `numeric(15,6)`.
It seems excessive to have that many decimal places, but if there's even the slightest chance you will have to deal with multiple currencies, you'll need that much precision for converting. No matter what I'm presenting to a user, I always store in US Dollars. That way I can readily convert to any other currency, given the conversion rate for the day involved.
If you never do anything but one currency, the worst thing here is that you wasted a bit of space to store some zeroes.
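As an illustration of that setup (the rates table and all names are assumptions, not part of the answer): everything is stored in USD at `numeric(15,6)` and converted on the way out, rounding only for display.

```sql
-- All amounts stored in USD with six fractional digits.
CREATE TABLE payment (
    id         bigserial     PRIMARY KEY,
    amount_usd numeric(15,6) NOT NULL,
    paid_on    date          NOT NULL
);

-- Daily conversion rates: units of the target currency per 1 USD.
CREATE TABLE fx_rate (
    currency  char(3),
    rate_date date,
    per_usd   numeric(15,6),
    PRIMARY KEY (currency, rate_date)
);

-- Convert for presentation, rounding only at the end:
SELECT p.id, round(p.amount_usd * r.per_usd, 2) AS amount_eur
FROM   payment p
JOIN   fx_rate r ON r.currency = 'EUR' AND r.rate_date = p.paid_on;
```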