Forum Moderators: coopster
Anyone do this differently, and if so, why?
Always looking for a better mousetrap -- coopster
I'm with you on the method for storing monetary values. DECIMAL stores an exact decimal representation of the number, so precision is maintained. Storing numbers in floating point will only give you an approximation -- usually an accurate one, but an approximation nonetheless.
In an 8-bit integer, the first bit stands for 1, the second for 2, then 4, then 8, then 16, then 32, 64, and 128. In a floating point number, the bits on the right side of the binary point work the same way but in reverse: 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/256, etc. (depending on the number of bits).
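A quick sketch of those positional weights (the variable names here are just for illustration):

```python
# Integer bits carry weights 1, 2, 4, ... to the left of the binary
# point; fractional bits carry 1/2, 1/4, 1/8, ... to the right.
int_weights = [2 ** i for i in range(8)]            # 1 .. 128
frac_weights = [2 ** -(i + 1) for i in range(8)]    # 1/2 .. 1/256

print(int_weights)   # [1, 2, 4, 8, 16, 32, 64, 128]
print(frac_weights)  # [0.5, 0.25, ..., 0.00390625]
```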
Storing a value of 0.1 can only be an approximation. With an 8-bit floating point variable, it would look like this in binary:
.00011001
And when converting this literally to decimal, it comes out to be 0.09765625. This is why floating point values are normally stored using 64 bits, not 8. But in almost all cases, floating points are only approximations -- ones that get more and more accurate with more and more bits being used, but approximations nonetheless.
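You can check both claims yourself -- the 8-bit conversion above, and the fact that even a 64-bit double only approximates 0.1:

```python
from decimal import Decimal

# Sum the weighted fractional bits of .00011001
# (the 1/16, 1/32, and 1/256 positions are set):
bits = "00011001"
value = sum(int(b) * Decimal(2) ** -(i + 1) for i, b in enumerate(bits))
print(value)  # 0.09765625

# Passing a float to Decimal reveals the exact binary value a
# 64-bit double actually stores for 0.1 -- close, but not 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```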
For values where accuracy is mandatory, like currency, stick with DECIMAL, which stores an exact decimal value.
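Python's `decimal` module behaves like a DECIMAL column in this respect, so it makes the difference easy to see -- ten additions of 0.10 drift in float but stay exact in decimal:

```python
from decimal import Decimal

# Floats drift: ten additions of 0.10 miss 1.00 by a tiny amount.
total_f = sum([0.10] * 10)
print(total_f == 1.0)  # False on IEEE-754 doubles

# Exact decimal arithmetic, as a DECIMAL column would do it:
total_d = sum([Decimal("0.10")] * 10)
print(total_d)  # 1.00
```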
Note that some currencies have only a major unit, and others have a major and a minor unit. The minor unit may be subdivided in many ways: a tenth, or hundredth, or thousandth, or something else, of the major unit.
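One sketch of handling those per-currency minor units, assuming a lookup table of minor-unit sizes (the three currencies shown are real examples: USD uses 2 decimal places, JPY uses 0, and KWD uses 3):

```python
from decimal import Decimal, ROUND_HALF_UP

# Minor-unit size per currency (illustrative lookup table):
minor_units = {
    "USD": Decimal("0.01"),   # cents
    "JPY": Decimal("1"),      # no minor unit
    "KWD": Decimal("0.001"),  # fils, 1/1000 of a dinar
}

amount = Decimal("1234.5678")
for code, unit in minor_units.items():
    # quantize() rounds to the currency's smallest representable step.
    print(code, amount.quantize(unit, rounding=ROUND_HALF_UP))
# USD 1234.57
# JPY 1235
# KWD 1234.568
```

Rounding mode matters for money; ROUND_HALF_UP is used here, but some jurisdictions require banker's rounding (ROUND_HALF_EVEN).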