.. testsetup::

   import math

.. _tut-fp-issues:

**************************************************
Floating Point Arithmetic:  Issues and Limitations
**************************************************

.. sectionauthor:: Tim Peters <tim_one@users.sourceforge.net>


Floating-point numbers are represented in computer hardware as base 2 (binary)
fractions.  For example, the decimal fraction ::

   0.125

has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction ::

   0.001

has value 0/2 + 0/4 + 1/8.  These two fractions have identical values, the only
real difference being that the first is written in base 10 fractional notation,
and the second in base 2.

Unfortunately, most decimal fractions cannot be represented exactly as binary
fractions.  A consequence is that, in general, the decimal floating-point
numbers you enter are only approximated by the binary floating-point numbers
actually stored in the machine.
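
The classic symptom is easy to reproduce at any Python 3 prompt; a quick
illustration:

```python
# 0.1 and 0.2 are each stored as nearby binary fractions, and their sum
# is the double closest to the sum of those approximations, which is not
# the same double that the literal 0.3 is stored as.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```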

The problem is easier to understand at first in base 10.  Consider the fraction
1/3.  You can approximate that as a base 10 fraction::

   0.3

or, better, ::

   0.33

or, better, ::

   0.333

and so on.  No matter how many digits you're willing to write down, the result
will never be exactly 1/3, but will be an increasingly better approximation of
1/3.

In the same way, no matter how many base 2 digits you're willing to use, the
decimal value 0.1 cannot be represented exactly as a base 2 fraction.  In base
2, 1/10 is the infinitely repeating fraction ::

   0.0001100110011001100110011001100110011001100110011...

Stop at any finite number of bits, and you get an approximation.  On most
machines today, floats are approximated using a binary fraction whose
numerator uses the first 53 bits starting with the most significant bit and
whose denominator is a power of two.  In the case of 1/10, the binary fraction
is ``3602879701896397 / 2 ** 55``, which is close to but not exactly
equal to the true value of 1/10.
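
You can verify that fraction from Python itself; a small sketch using the
:mod:`fractions` module:

```python
from fractions import Fraction

num, den = 3602879701896397, 2 ** 55
# Dividing reproduces the stored double exactly...
print(num / den == 0.1)                       # True
# ...but as an exact rational number it is not 1/10:
print(Fraction(num, den) == Fraction(1, 10))  # False
```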

Many users are not aware of the approximation because of the way values are
displayed.  Python only prints a decimal approximation to the true decimal
value of the binary approximation stored by the machine.  On most machines, if
Python were to print the true decimal value of the binary approximation stored
for 0.1, it would have to display ::

   >>> 0.1
   0.1000000000000000055511151231257827021181583404541015625

That is more digits than most people find useful, so Python keeps the number
of digits manageable by displaying a rounded value instead ::

   >>> 1 / 10
   0.1

Just remember, even though the printed result looks like the exact value
of 1/10, the actual stored value is the nearest representable binary fraction.

Interestingly, there are many different decimal numbers that share the same
nearest approximate binary fraction.  For example, the numbers ``0.1`` and
``0.10000000000000001`` and
``0.1000000000000000055511151231257827021181583404541015625`` are all
approximated by ``3602879701896397 / 2 ** 55``.  Since all of these decimal
values share the same approximation, any one of them could be displayed
while still preserving the invariant ``eval(repr(x)) == x``.
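
That the three literals above really do collapse to one stored value can be
checked directly; a quick sketch:

```python
# All three decimal literals parse to the same IEEE-754 double:
print(0.1 == 0.10000000000000001)                                        # True
print(0.1 == 0.1000000000000000055511151231257827021181583404541015625)  # True

# which is why round-tripping through repr() is lossless:
x = 0.1
print(eval(repr(x)) == x)  # True
```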

Historically, the Python prompt and built-in :func:`repr` function would choose
the one with 17 significant digits, ``0.10000000000000001``.  Starting with
Python 3.1, Python (on most systems) is now able to choose the shortest of
these and simply display ``0.1``.

Note that this is in the very nature of binary floating-point: this is not a bug
in Python, and it is not a bug in your code either.  You'll see the same kind of
thing in all languages that support your hardware's floating-point arithmetic
(although some languages may not *display* the difference by default, or in all
output modes).

For more pleasant output, you may wish to use string formatting to produce a
limited number of significant digits::

   >>> format(math.pi, '.12g')  # give 12 significant digits
   '3.14159265359'

   >>> format(math.pi, '.2f')   # give 2 digits after the point
   '3.14'

   >>> repr(math.pi)
   '3.141592653589793'


It's important to realize that this is, in a real sense, an illusion: you're
simply rounding the *display* of the true machine value.

One illusion may beget another.  For example, since 0.1 is not exactly 1/10,
summing three values of 0.1 may not yield exactly 0.3, either::

   >>> .1 + .1 + .1 == .3
   False

Also, since 0.1 cannot get any closer to the exact value of 1/10 and
0.3 cannot get any closer to the exact value of 3/10, then pre-rounding with
the :func:`round` function cannot help::

   >>> round(.1, 1) + round(.1, 1) + round(.1, 1) == round(.3, 1)
   False

Though the numbers cannot be made closer to their intended exact values,
the :func:`round` function can be useful for post-rounding so that results
with inexact values become comparable to one another::

   >>> round(.1 + .1 + .1, 10) == round(.3, 10)
   True

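On Python 3.5 and later, :func:`math.isclose` offers another way to make such
comparisons, using a tolerance rather than a digit count; a brief sketch:

```python
import math

# isclose() checks that two floats agree to within a relative tolerance
# (1e-09 by default), avoiding an arbitrary choice of round() digits:
print(math.isclose(.1 + .1 + .1, .3))  # True
```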
Binary floating-point arithmetic holds many surprises like this.  The problem
with "0.1" is explained in precise detail below, in the "Representation Error"
section.  See `The Perils of Floating Point <http://www.lahey.com/float.htm>`_
for a more complete account of other common surprises.

As that says near the end, "there are no easy answers."  Still, don't be unduly
wary of floating-point!  The errors in Python float operations are inherited
from the floating-point hardware, and on most machines are on the order of no
more than 1 part in 2\*\*53 per operation.  That's more than adequate for most
tasks, but you do need to keep in mind that it's not decimal arithmetic and
that every float operation can suffer a new rounding error.
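
The machine precision behind that 1-part-in-2\*\*53 figure can be inspected in
:data:`sys.float_info`; a quick look (the values shown assume IEEE-754 doubles,
as on most machines):

```python
import sys

# mant_dig is the number of bits in the significand (53 for doubles);
# epsilon is the spacing between 1.0 and the next larger float (2**-52),
# so a correctly rounded operation errs by at most half that spacing.
print(sys.float_info.mant_dig)             # 53
print(sys.float_info.epsilon == 2 ** -52)  # True
```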

While pathological cases do exist, for most casual use of floating-point
arithmetic you'll see the result you expect in the end if you simply round the
display of your final results to the number of decimal digits you expect.
:func:`str` usually suffices, and for finer control see the :meth:`str.format`
method's format specifiers in :ref:`formatstrings`.

For use cases which require exact decimal representation, try using the
:mod:`decimal` module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.
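
A minimal sketch of what :mod:`decimal` buys you (note that exact values must
be constructed from strings, not from floats):

```python
from decimal import Decimal

# Built from strings, these are exact decimal values, so the sum is exact:
print(Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3'))  # True

# Built from a float, a Decimal captures the binary approximation instead:
print(Decimal(0.1) == Decimal('0.1'))  # False
```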

Another form of exact arithmetic is supported by the :mod:`fractions` module
which implements arithmetic based on rational numbers (so numbers like
1/3 can be represented exactly).
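
For example, a short sketch with :mod:`fractions`:

```python
from fractions import Fraction

# 1/3 is exact as a rational number, so summing three of them gives exactly 1:
print(Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3) == 1)  # True

# Results are kept in lowest terms:
print(Fraction(1, 10) + Fraction(1, 5))  # Fraction(3, 10)
```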

If you are a heavy user of floating-point operations you should take a look
at the NumPy package and many other packages for mathematical and
statistical operations supplied by the SciPy project.  See <https://scipy.org>.

Python provides tools that may help on those rare occasions when you really
*do* want to know the exact value of a float.  The
:meth:`float.as_integer_ratio` method expresses the value of a float as a
fraction::

   >>> x = 3.14159
   >>> x.as_integer_ratio()
   (3537115888337719, 1125899906842624)

Since the ratio is exact, it can be used to losslessly recreate the
original value::

   >>> x == 3537115888337719 / 1125899906842624
   True

The :meth:`float.hex` method expresses a float in hexadecimal (base
16), again giving the exact value stored by your computer::

   >>> x.hex()
   '0x1.921f9f01b866ep+1'

This precise hexadecimal representation can be used to reconstruct
the float value exactly::

   >>> x == float.fromhex('0x1.921f9f01b866ep+1')
   True

Since the representation is exact, it is useful for reliably porting values
across different versions of Python (platform independence) and exchanging
data with other languages that support the same format (such as Java and C99).

Another helpful tool is the :func:`math.fsum` function which helps mitigate
loss-of-precision during summation.  It tracks "lost digits" as values are
added onto a running total.  That can make a difference in overall accuracy
so that the errors do not accumulate to the point where they affect the
final total:

   >>> sum([0.1] * 10) == 1.0
   False
   >>> math.fsum([0.1] * 10) == 1.0
   True

.. _tut-fp-error:

Representation Error
====================

This section explains the "0.1" example in detail, and shows how you can perform
an exact analysis of cases like this yourself.  Basic familiarity with binary
floating-point representation is assumed.

:dfn:`Representation error` refers to the fact that some (most, actually)
decimal fractions cannot be represented exactly as binary (base 2) fractions.
This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many
others) often won't display the exact decimal number you expect.

Why is that?  1/10 is not exactly representable as a binary fraction.  Almost
all machines today use IEEE-754 floating point arithmetic, and almost all
platforms map Python floats to IEEE-754 "double precision".  754 doubles
contain 53 bits of precision, so on input the computer strives to
convert 0.1 to the closest fraction it can of the form *J*/2**\ *N* where *J* is
an integer containing exactly 53 bits.  Rewriting ::

   1 / 10 ~= J / (2**N)

as ::

   J ~= 2**N / 10

and recalling that *J* has exactly 53 bits (is ``>= 2**52`` but ``< 2**53``),
the best value for *N* is 56::

   >>> 2**52 <=  2**56 // 10  < 2**53
   True

That is, 56 is the only value for *N* that leaves *J* with exactly 53 bits.  The
best possible value for *J* is then that quotient rounded::

   >>> q, r = divmod(2**56, 10)
   >>> r
   6

Since the remainder is more than half of 10, the best approximation is obtained
by rounding up::

   >>> q+1
   7205759403792794

Therefore the best possible approximation to 1/10 in 754 double precision is::

   7205759403792794 / 2 ** 56

Dividing both the numerator and denominator by two reduces the fraction to::

   3602879701896397 / 2 ** 55

Note that since we rounded up, this is actually a little bit larger than 1/10;
if we had not rounded up, the quotient would have been a little bit smaller than
1/10.  But in no case can it be *exactly* 1/10!
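
The size of that rounding error can be computed exactly with :mod:`fractions`;
a short sketch:

```python
from fractions import Fraction

stored = Fraction(3602879701896397, 2 ** 55)
print(stored > Fraction(1, 10))  # True, because we rounded up
print(stored - Fraction(1, 10))  # Fraction(1, 180143985094819840)
```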

So the computer never "sees" 1/10:  what it sees is the exact fraction given
above, the best 754 double approximation it can get::

   >>> 0.1 * 2 ** 55
   3602879701896397.0

If we multiply that fraction by 10\*\*55, we can see the value out to
55 decimal digits::

   >>> 3602879701896397 * 10 ** 55 // 2 ** 55
   1000000000000000055511151231257827021181583404541015625

meaning that the exact number stored in the computer is equal to
the decimal value 0.1000000000000000055511151231257827021181583404541015625.
Instead of displaying the full decimal value, many languages (including
older versions of Python), round the result to 17 significant digits::

   >>> format(0.1, '.17f')
   '0.10000000000000001'

The :mod:`fractions` and :mod:`decimal` modules make these calculations
easy::

   >>> from decimal import Decimal
   >>> from fractions import Fraction

   >>> Fraction.from_float(0.1)
   Fraction(3602879701896397, 36028797018963968)

   >>> (0.1).as_integer_ratio()
   (3602879701896397, 36028797018963968)

   >>> Decimal.from_float(0.1)
   Decimal('0.1000000000000000055511151231257827021181583404541015625')

   >>> format(Decimal.from_float(0.1), '.17')
   '0.10000000000000001'