If you have ever used Excel, Python or any other software that works with Decimal or Floating Point numbers, you might have noticed something odd happening when doing any form of Arithmetic, such as adding two Decimal or Floating Point values together. In some cases, even a simple Addition results in a long, recurring and inaccurate decimal value, instead of the one or two properly rounded decimal places you expected.
This issue is not the fault of Python, Excel or any other programming language, but rather how Floating Point values work in computer binary, causing them to be inaccurate in some cases.
NOTE: This tutorial requires that Python is already installed, as well as an IDE (Integrated Development Environment), such as PyCharm. It’s also recommended that you have a basic understanding of How to Use Python if this is your first time learning Python, as well as Python Variables and Data Types.
With that said, let’s take a deeper look into Why Decimal Numbers are Different in Python.
Why Adding Two Decimal Values Gives an Incorrect Answer
Let’s take a look at some examples where adding two decimal or Floating Point values in Python results in long, non-precise decimal values.
print(0.1 + 0.2)
#>> 0.30000000000000004
Here we can see the problem: “0.1 + 0.2 = 0.30000000000000004” and not “0.3”, which it should accurately be.
Here is another example:
print(0.3 + 0.6)
#>> 0.8999999999999999
In this example, the equation “0.3 + 0.6 = 0.8999999999999999” results in a long recurring decimal value instead of it being rounded up to “0.9” which it should accurately be.
One last example uses two arbitrary floating point values:
print(4.34 + 3.79)
#>> 8.129999999999999
Here we can again see that the equation “4.34 + 3.79 = 8.129999999999999” is not accurately rounded up to “8.13”.
IEEE-754 Standard
In order to understand why this is happening, we first need to understand how Floating Point numbers are represented in computers.
This representation is defined by IEEE-754, the IEEE (Institute of Electrical and Electronics Engineers) Standard for Floating-Point Arithmetic.
The standard addresses many problems that made earlier floating-point implementations difficult to use reliably, by defining:
- Arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers
- Interchange formats: encodings (bit strings) that can be used when exchanging floating-point data in an efficient and compact form
- Rounding rules: properties that give a satisfactory result when rounding numbers during arithmetic and conversions
- Operations: arithmetic and other operations (such as trigonometric functions) in arithmetic formats
- Exception handling: indications of exceptional conditions, such as when dividing by zero, overflow, etc.
For this tutorial, we are most interested in the first point, the arithmetic formats, and in particular the differences between the Binary and Decimal Systems.
How Integer numbers work in 64-Bit Binary
In many other programming languages, a Floating Point can be 32-bits (Single Precision), whereas in Python a Floating Point is always 64-bits, also known as Double Precision.
The 64 bits are broken down as follows:
- 1st bit = Sign (+ or -)
- 0 is Positive and 1 is Negative.
- In other words, if the Binary number starts with 0 it’s a Positive number and if it starts with 1, it’s a Negative number.
- 2nd to 12th bit = Exponent (11-bits)
- 13th to 64th bit = Mantissa (52-bits)
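We can confirm this layout from within Python itself. As a quick check (using the standard struct and sys modules), a float packs into exactly 8 bytes, and its precision is 53 bits, the 52 stored Mantissa bits plus one implicit leading bit that we will meet shortly:

```python
import struct
import sys

# A Python float packs into exactly 8 bytes, i.e. 64 bits
print(struct.calcsize('d'))     # -> 8

# 53 bits of precision: the 52 stored Mantissa bits plus the implicit leading 1
print(sys.float_info.mant_dig)  # -> 53
```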
To better illustrate this, let’s use an Integer number:
- 0 10000000010 0100000000000000000000000000000000000000000000000000
The 64-bits of this Integer number are divided into these 3 elements:
- 0 = Sign (positive number)
- 10000000010 = Exponent (Binary for 1026)
- 0100000000000000000000000000000000000000000000000000 = Mantissa
To calculate what Integer number these bits represent, we have to follow a few equations for each element.
Sign
As the first bit is a 0, we know that this integer is a positive number.
Exponent
The Exponent is the Binary number 10000000010, which translates to a Decimal number of 1026.
From this Decimal number of 1026, we then subtract 1023:
- 1026 – 1023 = 3
This results in an Exponent value of “3”. The reason we subtract “1023” is that IEEE-754 Double Precision stores the exponent with a fixed bias of 1023. Storing it this way allows negative exponents to be represented, which in turn lets Python work with very small as well as very large numbers. This number will never change and will always be 1023 for Double Precision, so simply commit it to memory, even if you don’t yet understand why.
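As a quick sketch of this step in Python, we can decode an 11-bit exponent field (the bit string below is the exponent field from our example number) and remove the bias:

```python
# Decode the 11-bit exponent field and remove the fixed bias of 1023
exponent_bits = "10000000010"         # exponent field from the example
stored_value = int(exponent_bits, 2)  # the raw stored value, 1026
print(stored_value - 1023)            # -> 3, the actual exponent
```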
Mantissa
Next, we take the base value “2” (as Binary is a base-2 number system) and raise it to the power of the Exponent value of “3”:
- 2^3
On its own, 2^3 would of course equal 8, but we do not stop there, because we are dealing with Binary and not Decimal values.
Instead, we multiply 2^3 by “1.” followed by the Mantissa (the leading “1.” is implicit and is never stored in the 64 bits), which in this case is 0100000000000000000000000000000000000000000000000000, leaving us with the following equation:
- 2^3 * 1.0100000000000000000000000000000000000000000000000000
Again, “1.01” here is a Binary value and not the Decimal 1.01, so the result will NOT be 8 * 1.01 = 8.08.
Binary to Decimal
Instead, the Exponent value of “3” tells us to move the decimal dot, three spaces to the right.
This results in a value of 1010.0000000000000000000000000000000000000000000000000 or “1010” for short.
“1010” is again a Binary number, which translates to a decimal value of “10”.
For this reason, we can conclude that the 64-bit value of:
- 0 10000000010 0100000000000000000000000000000000000000000000000000 = Decimal 10
Please read my tutorial on What Binary is and How to use Python to find any Binary value from a Decimal.
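We can also let Python verify this conclusion by packing the 64 bits back into a float. This is a minimal sketch using the standard struct module, where the bit string is simply the Sign, Exponent and Mantissa from above joined together:

```python
import struct

# Sign (1 bit) + Exponent (11 bits) + Mantissa (52 bits), joined into one string
bits = "0" + "10000000010" + "0100000000000000000000000000000000000000000000000000"

# Interpret the 64 bits as a big-endian Double Precision float
value = struct.unpack('>d', int(bits, 2).to_bytes(8, 'big'))[0]
print(value)  # -> 10.0
```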
64-bit Double Precision Equation Summary
To summarize, the 64-bit (double precision) equation works as follows:
- 0 (positive number)
- 10000000010 (binary) = 1026 (decimal) – 1023 = 3 (exponent)
- 2^3 * 1.010 (1.mantissa) = 1010 (binary)
- 1010 (binary) = 10 (decimal)
More Binary to Decimal Examples
Let’s take a look at another example:
- 0 10000000011 0111000000000000000000000000000000000000000000000000
- Positive -> 1027 – 1023 = 4 -> 2^4 * 1.0111 = 10111 -> 23
And another example:
- 0 10000110011 1000000000000000000000000000000000000000000000000000
- Positive -> 1075 – 1023 = 52 -> 2^52 * 1.1000 = 1.1 with the point moved 52 places to the right -> 6755399441055744
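A convenient way to double-check examples like these is Python’s built-in float.hex() method, which shows the mantissa (in hexadecimal) and the unbiased exponent directly:

```python
# float.hex() shows a value as 0x1.<mantissa in hex>p<unbiased exponent>
# Binary .0111 is 7/16, i.e. hex digit 7; binary .1000 is 8/16, i.e. hex digit 8
print((23.0).hex())                # -> '0x1.7000000000000p+4'
print((6755399441055744.0).hex())  # -> '0x1.8000000000000p+52'
```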
How Floating Point numbers work in 64-Bit Binary
Now that we better understand how 64-bit numbers work with Integers, let’s take a look at how they work with Floating Point values.
- 0 01111111111 1000000000000000000000000000000000000000000000000000
- Positive -> 1023 – 1023 = 0 -> 2^0 * 1.1000 = 1.1 (in binary)
This time the result is 1.1 in Binary, so we still need to convert the Binary value 1.1 into Decimal. To do that, we calculate each side of the point separately.
First we calculate the left side of the point, which is 1. Its place value is 2 raised to the power of 0 (the Exponent we got from our equation of 1023 – 1023):
- 2^0 = 1
Next, we calculate the right side of the point, which is .1. The first place after the point has a place value of 2 raised to the power of -1:
- 2^-1 = 0.5
If we then add these two values together we get 1.5:
- 2^0 + 2^-1 = 1 + 0.5 = 1.5
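The same place-value idea can be written as a small helper that converts any short binary fraction to decimal. Note that the function name binary_fraction_to_decimal is my own, chosen for illustration:

```python
def binary_fraction_to_decimal(bits):
    """Convert a binary string such as '1.1' to its decimal value."""
    integer_part, _, fraction_part = bits.partition(".")
    value = int(integer_part, 2) if integer_part else 0
    # Each bit after the point has place value 2^-1, 2^-2, 2^-3, ...
    for i, bit in enumerate(fraction_part, start=1):
        value += int(bit) * 2 ** -i
    return value

print(binary_fraction_to_decimal("1.1"))   # -> 1.5
print(binary_fraction_to_decimal("1010"))  # -> 10
```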
How to Use Python to Find the 64-bit Double Precision Binary of a Decimal Number
We can use Python to do the reverse of what we just learnt, by inputting a Decimal value and letting Python calculate its 64-bit Double Precision Binary representation.
To do that we need to make use of the “struct” module:
import struct
decimal = 1.5
binary_rep = struct.pack('>d', decimal)
binary_rep = "".join(f"{b:08b}" for b in binary_rep)
binary_rep = binary_rep[0] + " " + binary_rep[1:12] + " " + binary_rep[12:]
print(f"{decimal} is {binary_rep}")
#>> 1.5 is 0 01111111111 1000000000000000000000000000000000000000000000000000
Python outputs Decimal 1.5 as the Double Precision binary of 0 01111111111 1000000000000000000000000000000000000000000000000000, which is identical to our earlier result when calculating it the other way around.
Actual value of Floating Point Numbers in Python
With this knowledge, let’s take a look at what the actual representation of Floating Point numbers is in Python.
First let’s print out three basic Floating point values of 0.1, 0.2 and 0.3 in Python:
print(0.1)
print(0.2)
print(0.3)
#>> 0.1
#>> 0.2
#>> 0.3
While this might look correct, Python is actually not showing us the full stored value; by default it prints the shortest decimal string that rounds back to the same 64-bit float, and therefore these are only representations.
To get the actual 64-bit Floating Point values, we need to make use of a Python f-String with a precision value:
print(f"{0.1:.64f}")
print(f"{0.2:.64f}")
print(f"{0.3:.64f}")
#>> 0.1000000000000000055511151231257827021181583404541015625000000000
#>> 0.2000000000000000111022302462515654042363166809082031250000000000
#>> 0.2999999999999999888977697537484345957636833190917968750000000000
These are the accurate decimal values of these Floating Points, and as we can clearly see, the results are very different to what we saw previously.
In other words:
- When you type “0.1” in Python you are actually working with “0.1000000000000000055511151231257827021181583404541015625” and not 0.1.
- The same applies when typing in “0.2”: you are actually working with “0.200000000000000011102230246251565404236316680908203125”.
- And most importantly, when typing in “0.3”, you are actually working with “0.299999999999999988897769753748434595763683319091796875”, which technically is a value below 0.3.
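Another way to see these exact stored values, without counting precision digits by hand, is to construct a decimal.Decimal directly from the float; the constructor converts the stored binary value exactly:

```python
from decimal import Decimal

# Passing a float (not a string) to Decimal exposes the exact stored value
print(Decimal(0.1))
# -> 0.1000000000000000055511151231257827021181583404541015625
```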
This becomes a bigger issue when using the Addition Arithmetic Operator to add 0.1 and 0.2 together to also represent 0.3:
print(f"{0.1 + 0.2:.64f}")
#>> 0.3000000000000000444089209850062616169452667236328125000000000000
This time we get a value of “0.3000000000000000444089209850062616169452667236328125” instead of “0.3”.
Now, not only is this not actually 0.3 but it’s also different to the 0.299999999999999988897769753748434595763683319091796875 which we saw earlier from print(f”{0.3:.64f}”), which was also supposed to represent 0.3.
So how can both 0.299999999999999988897769753748434595763683319091796875 and 0.3000000000000000444089209850062616169452667236328125 represent 0.3?
Well, they can’t, because Python sees them as different values, which we can check by using the Python equality operator “==”:
print(f"{0.3:.64f}" == f"{0.1 + 0.2:.64f}")
#>> False
Python outputs a “False” result, confirming that it too does not see these two values as equal or the same number.
The result will be the same if we simply type it as a basic Arithmetic equation:
print(0.3 == 0.1 + 0.2)
#>> False
We can therefore definitively confirm that “0.1+0.2” and “0.3” are not the same numbers.
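For this reason, comparing Floating Point values with “==” is generally discouraged in Python. The standard library offers math.isclose() as a tolerance-based alternative:

```python
import math

# Direct comparison fails, but a tolerance-based comparison succeeds
print(0.1 + 0.2 == 0.3)              # -> False
print(math.isclose(0.1 + 0.2, 0.3))  # -> True
```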
Using 64-bit Binary to identify why Floating Point values Differ in Python
Now that we know for certain that “0.1+0.2” and “0.3” are not the same numbers, the question of how to solve this problem remains.
Before we can solve this, let’s first dive even deeper to find the root cause of this.
To do that we are once again going to make use of Python to find the Double Precision Binary of 0.1, 0.2, 0.3 and 0.1+0.2.
import struct

def double_bits(value):
    # Pack the float as a big-endian 64-bit double and show its bits
    bits = "".join(f"{b:08b}" for b in struct.pack('>d', value))
    # Split into Sign (1 bit), Exponent (11 bits) and Mantissa (52 bits)
    return bits[0] + " " + bits[1:12] + " " + bits[12:]

for label, value in (("0.1", 0.1), ("0.2", 0.2), ("0.3", 0.3), ("sum", 0.1 + 0.2)):
    print(f"{label} is {double_bits(value)}")
#>> 0.1 is 0 01111111011 1001100110011001100110011001100110011001100110011010
#>> 0.2 is 0 01111111100 1001100110011001100110011001100110011001100110011010
#>> 0.3 is 0 01111111101 0011001100110011001100110011001100110011001100110011
#>> sum is 0 01111111101 0011001100110011001100110011001100110011001100110100
Now that we have all four of these decimal values under the microscope, broken down into the individual 64-bits, we can better analyze what’s going on under the hood, in order to try to identify the problem.
- We can see that 0.1 and 0.2 have different Exponent values but the same Mantissa value, which is to be expected: 0.2 is exactly double 0.1, so only the Exponent changes.
- We can also see a repeating pattern of “1001” in the Mantissa. However, as the Mantissa only has 52 bits, the pattern has to be cut off and rounded once we run out of bits, which is why the last bits read “1010” instead of continuing the “1001” pattern.
- We can see that 0.3 has a different Exponent value to 0.1 and 0.2, but the same Exponent as “0.1+0.2”, which is correct.
- However, we can also see that 0.3 and the sum “0.1+0.2” have different Mantissa values, and this is where the problem lies.
Because the Mantissas of “0.1” and “0.2” never get to complete their repeating pattern, each is stored as a slightly inaccurate approximation of its true Decimal value. When we add them together, these small errors compound, and the rounded sum lands on a 64-bit value whose final Mantissa bits differ from those stored for “0.3”.
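Because the repeating pattern is cut off and rounded, the value actually stored is the nearest fraction whose denominator is a power of two. The standard fractions module can show this stored fraction exactly:

```python
from fractions import Fraction

# The exact value stored for the literal 0.1: a numerator over 2**55
print(Fraction(0.1))  # -> Fraction(3602879701896397, 36028797018963968)
print(Fraction(0.1).denominator == 2 ** 55)  # -> True
```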
How do I fix Decimal Numbers in Python?
Now that we know what the problem is, how do we solve it?
Well, there are actually two ways to solve this problem: using the round() Function or the decimal.Decimal Module.
How to Use the round() Function to Correct Floating Point Values
An easy way to deal with the long trail of decimal values in a Python Floating Point is to simply remove them by rounding off the number, while specifying the number of decimal places to keep.
print(0.1 + 0.2)
#>> 0.30000000000000004
print(round(0.1 + 0.2, 2))
#>> 0.3
Here we are simply rounding off the original value of “0.30000000000000004” to 2 decimal places, leaving us with a single digit after the decimal point.
We can easily increase the number of decimal places kept by changing the 2 to another number such as 4, but because the extra places are all 0, Python will automatically remove them:
print(0.1 + 0.2)
#>> 0.30000000000000004
print(round(0.1 + 0.2, 4))
#>> 0.3
If however we use different floating point numbers that contain non-zero values after the decimal they will be shown:
print(0.1421235 + 0.2565627)
#>> 0.3986862
print(round(0.1421235 + 0.2565627, 4))
#>> 0.3987
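One caveat worth knowing: round() operates on the stored binary value, so a literal that is itself stored slightly low can still round the “wrong” way. The classic example from the Python documentation:

```python
# 2.675 is stored as slightly less than 2.675,
# so rounding to 2 places gives 2.67 rather than the expected 2.68
print(round(2.675, 2))  # -> 2.67
```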
Finally, let’s check that Python sees the rounded-off result of “0.1+0.2” as 0.3 by again making use of the Python equality operator “==”:
decimal_Sum = 0.1+0.2
result = round(decimal_Sum, 2)
print(result == 0.3)
#>> True
Great, Python confirms that “0.1+0.2 == 0.3” with a “True” output result.
How to Use the decimal.Decimal Module to Correct Floating Point Values
The decimal.Decimal Module is another great way to make sure that accurate Floating Point values are used in Python.
from decimal import Decimal
print(Decimal("0.1"))
print(Decimal("0.2"))
print(Decimal("0.3"))
print(Decimal("0.1") + Decimal("0.2"))
#>> 0.1
#>> 0.2
#>> 0.3
#>> 0.3
Using the decimal.Decimal Module, we again get accurate and consistent decimal results, even when using the Addition operator to calculate “0.1+0.2”. Let’s again confirm that “0.1+0.2” equals “0.3” by making use of the Python equality operator “==”:
from decimal import Decimal
decimal_A = Decimal("0.1")
decimal_B = Decimal("0.2")
decimal_C = Decimal("0.3")
decimal_Sum = Decimal("0.1") + Decimal("0.2")
print(decimal_C == decimal_Sum)
#>> True
Python again confirms that “0.1+0.2 == 0.3” with a “True” output result.
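One caveat with this approach: the values must be constructed from strings (as above), not from float literals; otherwise the binary inaccuracy is carried straight into the Decimal:

```python
from decimal import Decimal

print(Decimal("0.1"))  # constructed from a string: exactly 0.1
print(Decimal(0.1))    # constructed from a float: inherits the binary error
# -> 0.1
# -> 0.1000000000000000055511151231257827021181583404541015625
```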
Conclusion
I hope you enjoyed this tutorial on Why Floating Point Numbers are Different in Python. It should give you the starting knowledge you need to apply these concepts in your next Python code or project.
Binary in Python Summary
Here is a summary of what we learnt about Binary and how to calculate it using Python:
- We looked at an example of how “0.1 + 0.2 = 0.30000000000000004” and not “0.3” in Python as well as a few other examples.
- We covered IEEE-754, the IEEE (Institute of Electrical and Electronics Engineers) Standard for Floating-Point Arithmetic, which defines both binary and decimal floating-point data.
- Floating Point data is either in a 32-bit format (Single Precision) or 64-bit (Double Precision) as in the case of Python.
- The 64-bits of a Double Precision Binary value is divided into three parts, being the Sign (1st bit), the Exponent (next 11-bits) and the Mantissa (last 52-bits), which represent what the final decimal value in Python will be. This works for both Integer and Floating Point values.
- Python code can be used to also reverse this process, allowing us to enter a decimal value into Python and it giving us the 64-bit Binary code breakdown.
- Python f-string can also be used with a precision value to precisely show us exactly what the floating point or decimal value in python is, to as many decimal values as we enter into the precision value.
- These Floating Point inaccuracies can however be worked around using either the round() Function or the decimal.Decimal Module.