C# Decimal datatype performance

You can use the long datatype. Sure, you won't be able to store fractions in it, but if you code your app to store pennies instead of pounds, you'll be fine. Accuracy is 100% for integer types, and unless you're working with truly vast sums, a 64-bit long gives you plenty of range.

If you can't mandate storing pennies, then wrap an integer in a class and use that.
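
As a rough sketch of what that looks like (the names and values are purely illustrative):

// Store whole pennies in a long; integer arithmetic is exact.
long itemPricePence = 1999;                   // £19.99
long quantity = 3;
long totalPence = itemPricePence * quantity;  // 5997, no rounding drift

// Convert to pounds only at the display/API boundary.
decimal totalPounds = totalPence / 100m;      // 59.97m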


You say it needs to be fast, but do you have concrete speed requirements? If not, you may well optimise past the point of sanity :)

As a friend sitting next to me has just suggested, can you upgrade your hardware instead? That's likely to be cheaper than rewriting code.

The most obvious option is to use integers instead of decimals - where one "unit" is something like "a thousandth of a cent" (or whatever you want - you get the idea). Whether that's feasible or not will depend on the operations you're performing on the decimal values to start with. You'll need to be very careful when handling this - it's easy to make mistakes (at least if you're like me).

Did the profiler show particular hotspots in your application that you could optimise individually? For instance, if you need to do a lot of calculations in one small area of code, you could convert from decimal to an integer format, do the calculations and then convert back. That could keep the API in terms of decimals for the bulk of the code, which may well make it easier to maintain. However, if you don't have pronounced hotspots, that may not be feasible.
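
To make the hotspot idea concrete, here's a small sketch (the scale factor and method name are mine, just for illustration): convert the decimals to scaled longs once, do the hot work on longs, then convert back.

static decimal SumFast(decimal[] values)
{
    const long Scale = 100;  // work in hundredths; pick whatever resolution you need

    long total = 0;
    foreach (decimal v in values)
        total += (long)(v * Scale);  // assumes at most 2 decimal places per value

    return (decimal)total / Scale;   // convert back once, outside the hot loop
}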

+1 for profiling and telling us that speed is a definite requirement, btw :)


The question has been well discussed, but since I dug into this problem for a while, I would like to share some of my results.

Problem definition: Decimals are known to be much slower than doubles, but financial applications cannot tolerate any artefacts that arise when calculations are performed on doubles.

Research

My aim was to measure different approaches to storing floating-point numbers and to conclude which one should be used for our application.

It was acceptable for us to use Int64 to store floating-point numbers with fixed precision. A multiplier of 10^6 gave us both enough digits to store fractions and still a big enough range to store large amounts. Of course, you have to be careful with this approach (multiplication and division operations might become tricky), but we were prepared for that and wanted to measure this approach as well. One thing you have to keep in mind, besides possible calculation errors and overflows, is that you usually cannot expose those long numbers in a public API. So all internal calculations could be performed with longs, but before sending the numbers to the user they should be converted to something more friendly.

I've implemented a simple prototype that wraps a long value in a decimal-like structure (I called it Money) and added it to the measurements.

using System;
using System.Globalization;

public struct Money : IComparable
{
    private readonly long _value;

    public const long Multiplier = 1000000;
    private const decimal ReverseMultiplier = 0.000001m;

    public Money(long value)
    {
        _value = value;
    }

    public static explicit operator Money(decimal d)
    {
        return new Money(Decimal.ToInt64(d * Multiplier));
    }

    public static implicit operator decimal (Money m)
    {
        return m._value * ReverseMultiplier;
    }

    public static explicit operator Money(double d)
    {
        return new Money(Convert.ToInt64(d * Multiplier));
    }

    public static explicit operator double (Money m)
    {
        return Convert.ToDouble(m._value * ReverseMultiplier);
    }

    public static bool operator ==(Money m1, Money m2)
    {
        return m1._value == m2._value;
    }

    public static bool operator !=(Money m1, Money m2)
    {
        return m1._value != m2._value;
    }

    public static Money operator +(Money d1, Money d2)
    {
        return new Money(d1._value + d2._value);
    }

    public static Money operator -(Money d1, Money d2)
    {
        return new Money(d1._value - d2._value);
    }

    public static Money operator *(Money d1, Money d2)
    {
        return new Money(d1._value * d2._value / Multiplier);
    }

    public static Money operator /(Money d1, Money d2)
    {
        // Multiply first so integer division does not truncate the fractional part
        // (note: the intermediate d1._value * Multiplier can overflow for large values).
        return new Money(d1._value * Multiplier / d2._value);
    }

    public static bool operator <(Money d1, Money d2)
    {
        return d1._value < d2._value;
    }

    public static bool operator <=(Money d1, Money d2)
    {
        return d1._value <= d2._value;
    }

    public static bool operator >(Money d1, Money d2)
    {
        return d1._value > d2._value;
    }

    public static bool operator >=(Money d1, Money d2)
    {
        return d1._value >= d2._value;
    }

    public override bool Equals(object o)
    {
        if (!(o is Money))
            return false;

        return this == (Money)o;
    }

    public override int GetHashCode()
    {
        return _value.GetHashCode();
    }

    public int CompareTo(object obj)
    {
        if (obj == null)
            return 1;

        if (!(obj is Money))
            throw new ArgumentException("Cannot compare money.");

        Money other = (Money)obj;
        return _value.CompareTo(other._value);
    }

    public override string ToString()
    {
        return ((decimal) this).ToString(CultureInfo.InvariantCulture);
    }
}
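
Usage looks like this (the values are just an example):

Money price = (Money)19.99m;   // explicit conversion from decimal
Money total = price * (Money)3m;
decimal result = total;        // implicit conversion back to decimal
Console.WriteLine(result);     // prints "59.970000" (the 10^6 scale shows up as trailing zeros)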

Experiment

I measured the following operations: addition, subtraction, multiplication, division, equality comparison and relational (greater/less) comparison. I measured them on the following types: double, long, decimal and Money. Each operation was performed 1,000,000 times. All numbers were pre-allocated in arrays, so calling custom code in the constructors of decimal and Money should not affect the results.
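
Each measurement was a simple loop of roughly this shape (a simplified sketch of the addition case; the other operations followed the same pattern):

// Pre-allocate the operands so allocation and conversion do not pollute the timing.
const int N = 1000000;
var a = new decimal[N];
var b = new decimal[N];
var r = new decimal[N];
for (int i = 0; i < N; i++)
{
    a[i] = i + 0.5m;
    b[i] = i + 0.25m;
}

// Time just the operation under test.
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < N; i++)
    r[i] = a[i] + b[i];
sw.Stop();
Console.WriteLine($"Added decimals in {sw.Elapsed.TotalMilliseconds} ms");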

Added moneys in 5.445 ms
Added decimals in 26.23 ms
Added doubles in 2.3925 ms
Added longs in 1.6494 ms

Subtracted moneys in 5.6425 ms
Subtracted decimals in 31.5431 ms
Subtracted doubles in 1.7022 ms
Subtracted longs in 1.7008 ms

Multiplied moneys in 20.4474 ms
Multiplied decimals in 24.9457 ms
Multiplied doubles in 1.6997 ms
Multiplied longs in 1.699 ms

Divided moneys in 15.2841 ms
Divided decimals in 229.7391 ms
Divided doubles in 7.2264 ms
Divided longs in 8.6903 ms

Equality compared moneys in 5.3652 ms
Equality compared decimals in 29.003 ms
Equality compared doubles in 1.727 ms
Equality compared longs in 1.7547 ms

Relationally compared moneys in 9.0285 ms
Relationally compared decimals in 29.2716 ms
Relationally compared doubles in 1.7186 ms
Relationally compared longs in 1.7321 ms

Conclusions

  1. Addition, subtraction, multiplication and comparison operations on decimal are ~15 times slower than operations on long or double; division is ~30 times slower.
  2. Performance of the Decimal-like wrapper is better than that of Decimal, but still significantly worse than double and long because of the lack of CLR support.
  3. In absolute terms, calculations on Decimal are still quite fast: roughly 40,000,000 operations per second.

Advice

  1. Unless you have a very calculation-heavy case, use decimals. In relative terms they are slower than longs and doubles, but the absolute numbers look good.
  2. There is not much point in re-implementing Decimal with your own structure because of the absence of CLR support. You might make it faster than Decimal, but it will never be as fast as double.
  3. If the performance of Decimal is not enough for your application, then you might want to consider switching your calculations to long with fixed precision. Before returning the result to the client, it should be converted to Decimal.

The problem is basically that double/float are supported in hardware, while Decimal and the like are not. That is, you have to choose between speed with limited precision, and greater precision with poorer performance.