Adjusting decimal precision, .net


.Net Problem Overview


These lines in C#

decimal a = 2m;
decimal b = 2.0m;
decimal c = 2.00000000m;
decimal d = 2.000000000000000000000000000m;

Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
Console.WriteLine(d);

Generates this output:

2
2.0
2.00000000
2.000000000000000000000000000

So I can see that creating a decimal variable from a literal allows me to control the precision.

  • Can I adjust the precision of decimal variables without using literals?
  • How can I create b from a? How can I create b from c?

.Net Solutions


Solution 1 - .Net

Preserving trailing zeroes like this was introduced in .NET 1.1 for stricter conformance with the ECMA CLI specification.

There is some info on this on MSDN, e.g. here: http://msdn.microsoft.com/en-us/library/aa289527(VS.71).aspx

You can adjust the precision as follows:

  • Math.Round (or Math.Ceiling, Math.Floor, etc.) to decrease precision (b from c)

  • Multiply by 1.000... (with the number of decimals you want) to increase precision - e.g. multiply by 1.0M to get b from a.
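Both techniques can be sketched in a few lines (reusing the a and c variables from the question):

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        decimal a = 2m;           // scale 0
        decimal c = 2.00000000m;  // scale 8

        // Decrease precision: round c to one decimal place (b from c)
        decimal fromC = Math.Round(c, 1);
        Console.WriteLine(fromC); // 2.0

        // Increase precision: multiplying by 1.0M adds one decimal place (b from a)
        decimal fromA = a * 1.0M;
        Console.WriteLine(fromA); // 2.0
    }
}
```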

Solution 2 - .Net

You are just seeing different representations of the exact same data. The scale of a decimal grows to be as large as it needs to be (within reason).

From System.Decimal:

> A decimal number is a floating-point value that consists of a sign, a numeric value where each digit in the value ranges from 0 to 9, and a scaling factor that indicates the position of a floating decimal point that separates the integral and fractional parts of the numeric value.
>
> The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96-1) is equal to MinValue, and 2^96-1 is equal to MaxValue.
>
> The scaling factor also preserves any trailing zeroes in a Decimal number. Trailing zeroes do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeroes can be revealed by the ToString method if an appropriate format string is applied.
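The scaling factor described in the quote can be inspected directly with decimal.GetBits: the exponent is stored in bits 16-23 of the fourth element. A small sketch:

```csharp
using System;

class ScaleDemo
{
    // Extract the scaling factor (power of ten) from a decimal's binary layout.
    static int GetScale(decimal d)
    {
        int[] bits = decimal.GetBits(d);
        return (bits[3] >> 16) & 0xFF; // exponent lives in bits 16-23 of the flags word
    }

    static void Main()
    {
        Console.WriteLine(GetScale(2m));          // 0
        Console.WriteLine(GetScale(2.0m));        // 1
        Console.WriteLine(GetScale(2.00000000m)); // 8
    }
}
```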

Solution 3 - .Net

Solution 4 - .Net

I found that I could "tamper" with the scale by multiplying or dividing by a fancy 1.

decimal a = 2m;
decimal c = 2.00000000m;
decimal PreciseOne = 1.000000000000000000000000000000m;
  //add maximum trailing zeros to a
decimal x = a * PreciseOne;
  //remove all trailing zeros from c
decimal y = c / PreciseOne;

I can fabricate a sufficiently precise 1 to change scale factors by known sizes.

decimal scaleFactorBase = 1.0m; // each factor of 1.0m adds one decimal place
decimal scaleFactor = 1m;
int scaleFactorSize = 3;

// build 1.000m (three decimal places) by repeated multiplication
for (int i = 0; i < scaleFactorSize; i++)
{
  scaleFactor *= scaleFactorBase;
}

decimal z = a * scaleFactor; // 2.000

Solution 5 - .Net

It's tempting to confuse decimal in SQL Server with decimal in .NET; they are quite different.

A SQL Server decimal is a fixed-point number whose precision and scale are fixed when the column or variable is defined.

A .NET decimal is a floating-point number like float and double (the difference being that decimal accurately preserves decimal digits whereas float and double accurately preserve binary digits). Attempting to control the precision of a .NET decimal is pointless, since all calculations will yield the same results regardless of the presence or absence of padding zeros.
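The claim about arithmetic and comparison can be checked directly; the padding zeros change only the displayed text, never the value or the result of a calculation:

```csharp
using System;

class EqualityDemo
{
    static void Main()
    {
        decimal a = 2m;
        decimal b = 2.000m;

        Console.WriteLine(a == b);           // True: same value
        Console.WriteLine(a + 1m == b + 1m); // True: same arithmetic result
        Console.WriteLine(a);                // 2     - only the text differs
        Console.WriteLine(b);                // 2.000
    }
}
```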

Solution 6 - .Net

This will remove all the trailing zeros from the decimal and then you can just use ToString().

public static class DecimalExtensions
{
    public static Decimal Normalize(this Decimal value)
    {
        return value / 1.000000000000000000000000000000000m;
    }
}

Or alternatively, if you want an exact number of trailing zeros, say 5, first Normalize() and then multiply by 1.00000m.
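A usage sketch combining the two steps (the five-zero multiplier is this answer's suggestion; note that multiplication adds the multiplier's scale to the value's current scale, so pad a value whose trailing zeros have already been removed):

```csharp
using System;

public static class DecimalExtensions
{
    // Dividing by a maximally precise 1 lets the runtime drop trailing zeros.
    public static decimal Normalize(this decimal value)
    {
        return value / 1.000000000000000000000000000000000m;
    }
}

class NormalizeDemo
{
    static void Main()
    {
        Console.WriteLine(2.500m.Normalize());                   // 2.5
        Console.WriteLine(2.00000000m.Normalize() * 1.00000m);   // 2.00000
    }
}
```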

Solution 7 - .Net

The question is: do you really need the precision stored in the decimal, or do you just need to display the decimal to the required precision? Most applications know internally how precise they want to be and display to that level of precision. For example, even if a user enters an invoice for 100 in an accounts package, it still prints out as 100.00 using something like val.ToString("n2").
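Formatting at display time makes the stored scale irrelevant, as a short sketch shows (InvariantCulture is used here so the separators are predictable; "n2" formats with two decimal places):

```csharp
using System;
using System.Globalization;

class DisplayDemo
{
    static void Main()
    {
        decimal val = 100m; // stored with no fractional digits
        // Apply the precision at the point of display instead.
        Console.WriteLine(val.ToString("n2", CultureInfo.InvariantCulture)); // 100.00
    }
}
```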

> How can I create b from a? How can I create b from c?

c to b is possible.

Console.WriteLine(Math.Round(2.00000000m, 1));

produces 2.0

a to b is tricky as the concept of introducing precision is a little alien to mathematics.

I guess a horrible hack could be a round trip.

decimal b = Decimal.Parse(a.ToString("#.0"));
Console.WriteLine(b);

produces 2.0

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

  • Question: Amy B (View Question on Stackoverflow)
  • Solution 1 - .Net: Joe (View Answer on Stackoverflow)
  • Solution 2 - .Net: Andrew Hare (View Answer on Stackoverflow)
  • Solution 3 - .Net: Tadas Šukys (View Answer on Stackoverflow)
  • Solution 4 - .Net: Amy B (View Answer on Stackoverflow)
  • Solution 5 - .Net: Christian Hayter (View Answer on Stackoverflow)
  • Solution 6 - .Net: shamp00 (View Answer on Stackoverflow)
  • Solution 7 - .Net: sgmoore (View Answer on Stackoverflow)