Is a double really unsuitable for money?

C#, Language Agnostic, Decimal, Currency

C# Problem Overview


I always say that in C# a variable of type double is not suitable for money; all sorts of weird things can happen. But I can't seem to create an example to demonstrate some of these issues. Can anyone provide such an example?

(Edit: this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal.)

(Edit 2: I was specifically asking for some C# code, so I don't think this is language-agnostic only.)

C# Solutions


Solution 1 - C#

Very, very unsuitable. Use decimal.

double x = 3.65, y = 0.05, z = 3.7;
Console.WriteLine((x + y) == z); // false

(example from Jon's page here - recommended reading ;-p)
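
For contrast, here is the same check with decimal - a quick sketch that is not part of the original answer, but shows the comparison succeeding:

decimal x = 3.65m, y = 0.05m, z = 3.7m;
Console.WriteLine((x + y) == z); // True - decimal compares by value, so 3.70m == 3.7m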

Solution 2 - C#

You will get odd errors effectively caused by rounding. In addition, comparisons with exact values are extremely tricky - you usually need to apply some sort of epsilon to check for the actual value being "near" a particular one.

Here's a concrete example:

using System;

class Test
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(y == 0.3); // Prints False
    }
}
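
The "epsilon" check mentioned above might look something like this - a rough sketch where the tolerance value (1e-9) is an arbitrary choice for illustration, not something from the answer:

using System;

class EpsilonDemo
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;

        // The exact comparison fails because of representation error...
        Console.WriteLine(y == 0.3);                     // False

        // ...so compare against a small tolerance instead.
        const double Epsilon = 1e-9;                     // arbitrary tolerance, for illustration only
        Console.WriteLine(Math.Abs(y - 0.3) < Epsilon);  // True
    }
}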

Solution 3 - C#

Yes, it's unsuitable.

If I remember correctly, a double has about 17 significant digits, so normally rounding errors take place far to the right of the decimal point. Most financial software uses 4 digits after the decimal point, which leaves 13 digits to work with, so the maximum number you can handle in single operations is still very much higher than the USA's national debt. But rounding errors will add up over time. If your software runs for a long time, you'll eventually start losing cents. Certain operations make this worse. For example, adding large amounts to small amounts causes a significant loss of precision.
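
A sketch of how that accumulation shows up in practice (the figures and loop count are made up for illustration; the double total drifts, the decimal total does not):

using System;

class AccumulationDemo
{
    static void Main()
    {
        double doubleTotal = 0;
        decimal decimalTotal = 0;

        // Post 10,000 one-cent transactions.
        for (int i = 0; i < 10000; i++)
        {
            doubleTotal += 0.01;
            decimalTotal += 0.01m;
        }

        Console.WriteLine(decimalTotal);          // 100.00 exactly
        Console.WriteLine(doubleTotal);           // close to 100, but not exact
        Console.WriteLine(doubleTotal == 100.0);  // False - the total has drifted slightly
    }
}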

You need fixed-point datatypes for money operations. Most people don't mind if you lose a cent here and there, but accountants aren't like most people.

Edit: according to this page http://msdn.microsoft.com/en-us/library/678hzkk9.aspx, doubles actually have 15 to 16 significant digits instead of 17.

@Jon Skeet: decimal is more suitable than double because of its higher precision of 28 or 29 significant digits. That means less chance of accumulated rounding errors becoming significant. Fixed-point datatypes (i.e. integers that represent cents or 100ths of a cent, as I've seen used), like Boojum mentions, are actually better suited.

Solution 4 - C#

Since decimal uses a power-of-10 scaling factor, numbers like 0.1 can be represented exactly. In essence, the decimal type represents this as 1 / 10^1, whereas a double would represent it as something like 104857 / 2^20 (the double closest to 0.1 is actually 3602879701896397 / 2^55).

A decimal can exactly represent any base 10 value with up to 28/29 significant digits (like 0.1). A double can't.
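
You can see this structure from C# with decimal.GetBits - a small sketch (the interpretation of the flags word follows the documented decimal layout):

using System;

class DecimalBitsDemo
{
    static void Main()
    {
        // decimal stores a 96-bit integer plus a power-of-ten scale,
        // so 0.1m is literally the integer 1 with a scale of 1, i.e. 1 / 10^1.
        int[] bits = decimal.GetBits(0.1m);
        int scale = (bits[3] >> 16) & 0xFF;   // the scale lives in bits 16-23 of the flags word

        Console.WriteLine(bits[0]);  // 1  (low 32 bits of the integer part)
        Console.WriteLine(scale);    // 1  (divide by 10^1)
    }
}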

Solution 5 - C#

My understanding is that most financial systems express currency using integers -- i.e., counting everything in cents.

IEEE double precision actually can represent all integers exactly in the range -2^53 through +2^53. (Hacker's Delight, pg. 262) If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division or more complex operations, however.
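
A short sketch of that 2^53 boundary (the values are chosen to sit exactly at the limit):

using System;

class IntegerLimitDemo
{
    static void Main()
    {
        double limit = 9007199254740992.0;   // 2^53

        // Integers up to 2^53 are represented exactly...
        Console.WriteLine(limit - 1.0 == 9007199254740991.0);  // True

        // ...but 2^53 + 1 cannot be distinguished from 2^53.
        Console.WriteLine(limit + 1.0 == limit);               // True
    }
}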

Solution 6 - C#

Using double when you don't know what you are doing is unsuitable.

"double" can represent an amount of a trillion dollars with an error of 1/90th of a cent. So you will get highly precise results. Want to calculate how much it costs to put a man on Mars and get him back alive? double will do just fine.

But with money there are often very specific rules saying that a certain calculation must give a certain result and no other. If you calculate an amount that is very very very close to $98.135 then there will often be a rule that determines whether the result should be $98.14 or $98.13 and you must follow that rule and get the result that is required.
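
With decimal you can at least state the rule explicitly via MidpointRounding. A small sketch (98.125 is substituted here because it is an exact midpoint where the two common rules disagree):

using System;

class RoundingRuleDemo
{
    static void Main()
    {
        decimal amount = 98.125m;  // exactly halfway between 98.12 and 98.13

        // Banker's rounding (round half to even) - the .NET default.
        Console.WriteLine(Math.Round(amount, 2, MidpointRounding.ToEven));        // 98.12

        // Round half away from zero - what many commercial rules require.
        Console.WriteLine(Math.Round(amount, 2, MidpointRounding.AwayFromZero));  // 98.13
    }
}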

Depending on where you live, using 64 bit integers to represent cents or pennies or kopeks or whatever is the smallest unit in your country will usually work just fine. For example, 64 bit signed integers representing cents can represent values up to about 92,233 trillion dollars. 32 bit integers are usually unsuitable.
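
A minimal sketch of that integer-cents approach (the names and amounts are made up for illustration):

using System;

class CentsDemo
{
    static void Main()
    {
        // Keep every amount as a whole number of cents in a long.
        long priceInCents = 1999;   // $19.99
        long quantity = 3;
        long totalInCents = priceInCents * quantity;  // exact: 5997 cents

        // Convert to decimal only at the edges, e.g. for display.
        Console.WriteLine(totalInCents / 100m);       // 59.97
    }
}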

Solution 7 - C#

No. A double will always have rounding errors; use decimal if you're on .NET...

Solution 8 - C#

Actually, a floating-point double is perfectly well suited to representing amounts of money as long as you pick a suitable unit.

See http://www.idinews.com/moneyRep.html

So is fixed-point long. Either consumes 8 bytes, surely preferable to the 16 consumed by a decimal item.

Whether or not something works (i.e. yields the expected and correct result) is not a matter of either voting or individual preference. A technique either works or it doesn't.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type    | Original Author | Original Content on Stackoverflow
Question        | doekman         | View Question on Stackoverflow
Solution 1 - C# | Marc Gravell    | View Answer on Stackoverflow
Solution 2 - C# | Jon Skeet       | View Answer on Stackoverflow
Solution 3 - C# | Mendelt         | View Answer on Stackoverflow
Solution 4 - C# | Richard Poole   | View Answer on Stackoverflow
Solution 5 - C# | Boojum          | View Answer on Stackoverflow
Solution 6 - C# | gnasher729      | View Answer on Stackoverflow
Solution 7 - C# | Thomas Hansen   | View Answer on Stackoverflow
Solution 8 - C# | Conrad Weisert  | View Answer on Stackoverflow