The Difference Between Decimal, Double, and Float (Examples in C#)

The decimal, double, and float types are similar in that they all hold fractional values (the separator is technically called the radix point), but they differ in how they store those values, i.e. in how many bits are reserved for the mantissa and the exponent, and each should be used in specific situations rather than interchangeably. The main difference is that float (System.Single) and double are binary floating-point types, 32-bit and 64-bit respectively, represented internally in base 2. A decimal is a decimal floating-point type: a 128-bit type represented internally in base 10 instead of base 2. A decimal is more precise because it can store more significant digits than a double, which is half its size, or a float, which is a quarter of its size. Note that a double has a far larger range than a decimal, but gives up precision in exchange.
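
To see the size difference yourself, C#'s sizeof operator reports the storage of each of these built-in types in bytes. A quick illustration:

// sizeof works on the built-in numeric types without an unsafe context
Console.WriteLine(sizeof(float));   // 4 bytes  (32 bits)
Console.WriteLine(sizeof(double));  // 8 bytes  (64 bits)
Console.WriteLine(sizeof(decimal)); // 16 bytes (128 bits)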

Properties At A Glance

| Type | Size (bits) | Category | Type Suffix | Precision | Approximate range | .NET Framework type |
|---|---|---|---|---|---|---|
| float | 32 | Numeric, floating-point | F or f | ~7 digits | ±3.4 × 10^38 | System.Single |
| double | 64 | Numeric, floating-point | D or d | 15-16 digits | ±5.0 × 10^−324 to ±1.7 × 10^308 | System.Double |
| decimal | 128 | Numeric, decimal | M or m | 28-29 significant digits | ±7.9 × 10^28 / (10^0 to 10^28) | System.Decimal |
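
You can verify these ranges at run time with the MinValue and MaxValue constants each type exposes (the exact formatting of the output varies by runtime version):

Console.WriteLine(float.MaxValue);   // 3.4028235E+38
Console.WriteLine(double.MaxValue);  // 1.7976931348623157E+308
Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335, about 7.9 × 10^28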


Examples Of Use In C#

Decimal

// Decimal literals require the m (or M) suffix
decimal exDecimal = 25.5m;

Note: Because a decimal stores its value differently, as mentioned above, there is no implicit conversion between decimal and the binary floating-point types; you must cast explicitly. This is not the case when converting a float to a double, which happens implicitly.
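
Here is a brief sketch of how those conversion rules play out:

decimal price = 25.5m;
// double broken = price;    // compile-time error: no implicit decimal-to-double conversion
double d = (double)price;    // an explicit cast is required in either direction
float f = 25.5f;
double widened = f;          // fine: a float widens to a double implicitly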

Double

// The d suffix is optional here; a literal with a decimal point is already a double
double exDouble = 25.5d;

Float

// Float literals require the f (or F) suffix
float exFloat = 25.5f;
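
The suffix rules matter because an unsuffixed literal such as 25.5 is a double. A rough sketch of what the compiler accepts and rejects:

// float bad1 = 25.5;    // error: a double literal cannot be implicitly converted to float
// decimal bad2 = 25.5;  // error: a double literal cannot be implicitly converted to decimal
double fine = 25.5;      // compiles: a literal with a decimal point is a double by default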

When To Use Which Type

So now that you know the differences and similarities, how do you know which type to use in a given situation? The short version: use whichever type provides enough precision for what you need. But let's dive a little deeper.

Decimal

Any time you need a high level of accuracy and must avoid rounding errors, go with decimal. Financial applications use decimal for exactly this reason: base-10 amounts such as 0.1 are stored exactly rather than approximated in binary.
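
A classic demonstration of the difference (the exact value printed for the double can vary by runtime version):

double binarySum = 0.1d + 0.2d;
decimal decimalSum = 0.1m + 0.2m;
Console.WriteLine(binarySum == 0.3d);   // False: the double result is 0.30000000000000004...
Console.WriteLine(decimalSum == 0.3m);  // True: decimal stores 0.1 and 0.2 exactly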

Double

Doubles are used in most situations that don't involve money. Most modern CPUs handle doubles about as quickly as floats, so for everyday applications there is really no need to use a float unless you are targeting older hardware.
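
Double is also the natural fit for general calculation because the .NET Math class works in doubles. A minimal example:

double radius = 2.5d;
double area = Math.PI * radius * radius;  // Math.PI is a double constant
double diagonal = Math.Sqrt(2.0);         // Math.Sqrt accepts and returns double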

Float

Use float for data that can tolerate rounding errors and has very high demands on processing power or memory, such as graphics.
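
Float's roughly seven significant digits run out quickly. A quick sketch of that precision loss (the printed output may vary slightly by runtime version):

float measurement = 123456.789f;
Console.WriteLine(measurement);  // prints roughly 123456.79 — only about 7 significant digits survive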

Well, I hope this helps clear up any confusion you may have had. Happy coding!

Jacob Saylor

Software developer in Kentucky
