Last week, at the end of the article, I presented a problem on floating point values:

Given the range of non-negative 2-decimal-point values less than 1, what are the values that can be represented exactly?

I was referring to dollars and cents, monetary values. As of this writing, no answer has been submitted. I will assume that you're all very modest, and so will provide my version of the answer. First, we dissect the question. "Non-negative" means it cannot be negative, so zero and positive numbers only. "Less than 1" means, well, less than 1. And the values are restricted to numbers with up to 2 decimal places of accuracy.

So this means the set of values 0.00, 0.01, 0.02 … 0.97, 0.98, and 0.99 (1.00 not counted).

Before writing this article, I already had an answer, intuited without actual calculations. Then I did some research of my own to verify that answer. I was **mortified** to find that my understanding of the IEEE floating point standard was wrong. I dug up my old university programming textbook, which has a section on internal floating point representation, and confirmed that I had indeed misunderstood it.

My error was in how the exponent and mantissa were represented. In the end, I realised I had to convert all 100 values (0.00 to 0.99) into binary representation to check. I didn't relish doing long division by hand for an indefinite number of iterations (up to 32, anyway), for each of 100 values. So I took the easier way out: I wrote a program. *smile*

```csharp
decimal decBinary = 0.5m;
decimal decBuffer = 0;
int i, j;
for (i = 0; i < 100; ++i)
{
    decBuffer = i * 0.01m;
    Console.Write("{0} ", decBuffer.ToString("f2"));
    // Long division in binary: try to subtract 2^-1, 2^-2, ... in turn.
    decBinary = 0.5m;
    for (j = 0; j < 32; ++j)
    {
        if (decBuffer < decBinary)
        {
            Console.Write("0");
        }
        else
        {
            Console.Write("1");
            decBuffer -= decBinary;
        }
        decBinary /= 2;
    }
    // Anything left over means more bits were needed: non-terminating.
    if (decBuffer != 0)
    {
        Console.WriteLine(" non-terminating");
    }
    else
    {
        Console.WriteLine(" terminated");
    }
}
```

It's in C# and there aren't many comments, but you should still be able to follow most of the code even if you aren't familiar with C#. This assumes 32 bits are used to represent a `float`, so I ran the long division iteration up to 32 times, even though in the real representation only 23 bits are used for the mantissa. Here's a Java applet IEEE binary converter you can play with too.

What happens is that if, after 32 bits have been assigned (each a 0 or 1), more bits still need to be assigned (the division hasn't terminated), then the floating point representation isn't exact (the extra information is truncated). You will then note that **only 0.00, 0.25, 0.50 and 0.75 have terminating binary representations**. This means only those 4 values can be represented in a `float` exactly.
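If you'd rather not do the long division at all, you can also check this by comparing each value against what actually gets stored in the floating point variable. Here's a quick sketch — in Java rather than C#, just so it stands alone, since `float` is the same IEEE 754 single-precision format in both — using `BigDecimal` to recover the exact binary value a `float` holds:

```java
import java.math.BigDecimal;

public class ExactCents {
    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            float f = i / 100.0f;                        // nearest float to i/100
            BigDecimal exact = BigDecimal.valueOf(i, 2); // the true decimal value, e.g. 0.37
            BigDecimal stored = new BigDecimal(f);       // the exact binary value the float holds
            if (stored.compareTo(exact) == 0) {
                System.out.println(exact + " is exact");
            }
        }
    }
}
```

Only 0.00, 0.25, 0.50 and 0.75 survive the comparison — the same four values the long division program found.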

**So, what was my original intuitive answer?** Let's look at only 4 binary places in the mantissa. So 1010 represents 1*2^(-1) + 0*2^(-2) + 1*2^(-3) + 0*2^(-4), which is 0.5 + 0.125 = 0.625
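That positional expansion is easy to verify mechanically: the k-th mantissa digit (counting from 1) contributes 2^(-k) when it's a 1. A minimal sketch, again in Java for illustration (the 4-digit bit string is just the example above):

```java
public class MantissaValue {
    public static void main(String[] args) {
        String bits = "1010";
        double value = 0.0;
        for (int k = 0; k < bits.length(); k++) {
            if (bits.charAt(k) == '1') {
                value += Math.pow(2, -(k + 1)); // digit k contributes 2^-(k+1)
            }
        }
        System.out.println(value); // 0.5 + 0.125 = 0.625
    }
}
```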

Look at the value with only the 3rd mantissa digit set to 1: 0010. It represents 0.125. How do you "get rid" of the extra 0.005 to get 0.12, or add 0.005 to get 0.13? You can't. Switching on more mantissa bits only *adds* value, so 0.12 is out of reach, and 0.005 (1/200) has no terminating binary representation itself, so 0.13 is out too. So what does this mean? The moment the 3rd mantissa digit is a 1, the floating point representation can *never* represent any of the 100 2-decimal-point values exactly.

In fact, the moment any mantissa digit from the 3rd onwards (4th, 5th and so on) is a 1, the representation is already inexact. So only the 1st and 2nd mantissa digits are "allowed" to be 1. So 0000 is 0.00, 0100 is 0.25, 1000 is 0.50, and 1100 is 0.75.

Same answer as my laborious calculations gave. Out of the 100 values, only the multiples of a quarter, 25 cents, have an exact floating point representation.

I hope this has been an interesting thought process for you. Has this changed the way you think about storing monetary values?
