The Single type contains four bytes of data, and its value can range anywhere from 1.401298E-45 to 3.402823E38 for positive values and from -3.402823E38 to -1.401298E-45 for negative values.
It can seem strange that a value stored in four bytes (the same size as the Integer type) can hold a number larger than even the Long type. This is possible because of the way real numbers are stored: a real number can be stored with different levels of precision. Note that there are six digits after the decimal point in the definition of the Single type. When a real number gets very large or very small, the stored value is limited by its number of significant digits.
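As a quick illustration (a hypothetical snippet, not part of the book's sample project), assigning a nine-digit whole number to a Single silently rounds it to the nearest value the type can represent:

    ' Hypothetical illustration: a Single keeps only about seven
    ' significant digits.
    Dim big As Single = 123456789.0F
    ' The nearest representable Single value is 123456792, so the
    ' trailing digits were rounded away on assignment.
    Console.WriteLine(big)

The variable still occupies four bytes; what was sacrificed is the exact value of the least significant digits.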
Because real values carry fewer significant digits than their maximum magnitude would require, it is possible to lose precision when working near the extremes. For example, while a Long can represent the value 9223372036854775807, the Single type rounds this value to 9.223372E18. This seems like a reasonable way to handle the value, but the rounding isn't reversible. The following code demonstrates how this loss of precision and data can result in errors. To run it, add a Sub called Precision to the ProVB_VS2010 project and call it from the Click event handler for the ButtonTest control:
    Private Sub Precision()
        Dim l As Long = Long.MaxValue
        Dim s As Single = Convert.ToSingle(l)
        TextBox1.Text = l & Environment.NewLine
        TextBox1.Text &= s & Environment.NewLine
        s -= 1000000000000
        l = Convert.ToInt64(s)
        TextBox1.Text &= l & Environment.NewLine
    End Sub
The code creates a Long with the maximum value possible and outputs it. Then it converts this value to a Single and outputs it in that format. Next, the value 1000000000000 is subtracted from the Single using the -= syntax, which is shorthand for writing s = s - 1000000000000. Finally, the code assigns the Single value back into the Long and outputs the result. The results, shown in Figure 2-5, probably aren't what you would expect.
The first thing to notice is how the values are represented in the output based on type. The Single value uses an exponential display instead of showing all of its significant digits. More important, as you can see, the value stored in the Single after the math operation does not match what is computed using the Long value. Both the Single and Double types, therefore, have accuracy limitations when you are doing math operations. These accuracy issues result from storage limitations and from how binary numbers represent decimal numbers. To better address these issues for large numbers, .NET provides the Decimal type.
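The binary-representation issue is easy to see with a hypothetical one-line test (names are mine, not the book's): the decimal fraction 0.1 has no exact binary equivalent, so repeated Double arithmetic drifts, while Decimal stores the base-10 digits exactly:

    Dim dbl As Double = 0.0
    Dim dec As Decimal = 0D
    For i As Integer = 1 To 10
        dbl += 0.1    ' binary approximation of 0.1; error accumulates
        dec += 0.1D   ' exact base-10 storage
    Next
    Console.WriteLine(dbl = 1.0)  ' False
    Console.WriteLine(dec = 1D)   ' True

The Double comparison fails because each addition carries a tiny binary rounding error; the Decimal comparison succeeds because no rounding ever occurred.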
The behavior of the previous example changes if you replace the value type of Single with Double. A Double uses eight bytes to store values, and as a result has greater precision and range. The range for a Double is from 4.94065645841247E-324 to 1.79769313486232E308 for positive values and from -1.79769313486232E308 to -4.94065645841247E-324 for negative values. The precision has increased such that a number can contain 15 digits before the rounding begins. This greater level of precision makes the Double value type a much more reliable variable for use in math operations; most operations can be represented with complete accuracy. To test this, change the sample code from the previous section so that instead of declaring the variable s as a Single you declare it as a Double, and rerun the code. Don't forget to also change the conversion line from ToSingle to ToDouble. The resulting code is shown here with the Sub called PrecisionDouble:
    Private Sub PrecisionDouble()
        Dim l As Long = Long.MaxValue
        Dim s As Double = Convert.ToDouble(l)
        TextBox1.Text = l & Environment.NewLine
        TextBox1.Text &= s & Environment.NewLine
        s -= 1000000000000
        l = Convert.ToInt64(s)
        TextBox1.Text &= l & Environment.NewLine
        TextBox1.Text &= Long.MaxValue - 1
    End Sub
The results shown in Figure 2-6 look very similar to those from the Single version, except that they are almost correct: the converted result, as you can see, is off by just 1. The method closes by demonstrating that when a 64-bit value is modified by one directly, the result is accurate. The problem isn't specific to .NET; it can be replicated in every major development language. Whenever you choose to represent very large or very small numbers by discarding the precision of the least significant digits, that precision is lost. To resolve this, .NET introduced the Decimal type, which avoids the issue.
The Decimal type is a hybrid that consists of a 12-byte integer value combined with a fourth 4-byte value that controls the location of the decimal point and the sign of the overall value. A Decimal value consumes 16 bytes in total and can store a maximum value of 79228162514264337593543950335. This value can then be manipulated by adjusting where the decimal place is located. For example, the maximum value while accounting for four decimal places is 7922816251426433759354395.0335. This is because a Decimal isn't stored as a traditional floating-point number, but as a 12-byte integer value, with the location of the decimal point tracked in relation to the available digits. This means that a Decimal does not inherently round numbers the way a Double does.
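You can inspect this layout directly with the framework's Decimal.GetBits method, which returns the integer portion as three Int32 values plus a fourth Int32 whose bits carry the sign and the decimal-point position (the scale). A minimal sketch:

    ' Decimal.GetBits returns four Int32s: three hold the 96-bit
    ' integer; the fourth holds the scale and the sign.
    Dim bits() As Integer = Decimal.GetBits(1.5D)
    ' 1.5D is stored as the integer 15 with a scale of 1.
    Dim scale As Integer = (bits(3) >> 16) And &HFF
    Console.WriteLine(bits(0)) ' 15
    Console.WriteLine(scale)   ' 1 digit to the right of the decimal point

Because only the scale moves, changing the number of decimal places never changes the digits themselves.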
As a result of the way values are stored, the smallest nonzero magnitude that a Decimal supports is 0.0000000000000000000000000001 (1E-28). The location of the decimal point is stored separately, and the Decimal type likewise stores the indicator of whether its value is positive or negative separately from the actual value. This means that the positive and negative ranges are exactly the same, regardless of the number of decimal places.
[Figure 2-6 output: 9223372036854775807, 9.22337203685478E+18, 9223371036854775808, 9223372036854775806]
Thus, the system makes a trade-off: the need to store a larger number of decimal places reduces the maximum value that can be kept at that level of precision. This trade-off makes a lot of sense. After all, it's not often that you need to store a number with 15 digits on both sides of the decimal point, and for those cases you can create a custom class that manages the logic and leverages one or more Decimal values as its properties. If you again modify and rerun the sample code from the last couple of sections, using a Decimal for the interim value and conversion, you'll find that the results are completely accurate.
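Under the same assumptions as the earlier samples (a form containing a TextBox1 control), the Decimal version of the routine might look like the following sketch; the names PrecisionDecimal and d are mine, not the book's:

    Private Sub PrecisionDecimal()
        Dim l As Long = Long.MaxValue
        Dim d As Decimal = Convert.ToDecimal(l)
        TextBox1.Text = l & Environment.NewLine
        TextBox1.Text &= d & Environment.NewLine
        d -= 1000000000000
        ' Decimal holds every digit of the Long, so the conversion
        ' back is exact.
        l = Convert.ToInt64(d)
        TextBox1.Text &= l & Environment.NewLine
    End Sub

Because a Decimal stores all 19 digits of Long.MaxValue exactly, the final line of output is 9223371036854775807, the true result of the subtraction.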