In today's thrilling installment, the origins of one of the more famous snippets of graphics code in recent years is under the microscope: Quake3's Fast InvSqrt(), which has been known to cause strong geeks to go wobbly in the knees while contemplating its simple beauty and power. The heart of the routine is:

```c
i = * ( long * ) &y;
i = 0x5f3759df - ( i >> 1 );
y = * ( float * ) &i;
y = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
// y = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed
```

In order to understand this hack, three things need to be reviewed: the structure of the floating point binary format, the effect of basic operations on numbers and binary patterns, and Newton's method. Shifts in binary work as they do in decimal: shifting right one place divides by the base, so a right shift in binary divides by two.

The exponent trick is based on Blinn's write-up (see references). It works by dividing the exponent by 2: halving the exponent takes the square root, and negating it inverts the result. A bitwise right shift would normally do the halving, but since the exponent field is biased, it takes more than that. The input value has an exponent bias of 127, and while shifting divides the exponent by two (yielding the square root), it leaves a bias of 63. The number 1 has an exponent of 0 and a mantissa of 0; adding 1's exponent field to its own right-shifted value approximates an E value of 190 (with some error in the mantissa). Subtracting the shifted input from that constant removes the extra 63, leaving a bias of 127 and the negated, shifted input exponent. There is now some error in the mantissa from the shift, but the exponent is the original exponent negated and divided by two: the inverse square root, with a bit of error.

Newton's method is an algorithm for finding roots (solutions to equations). That is, given an estimated solution \(x_0\) to a function \(f\), a better solution can be found by following the slope of the tangent (the derivative) back towards the x-axis. The function we wish to solve here is \(f(y) = \frac{1}{y^2} - x\), whose positive root is the inverse square root of \(x\); one Newton step on it gives exactly the \(y = y\left(\frac{3}{2} - \frac{x}{2}y^2\right)\) refinement seen in the code.

Given appropriate guesses \(M_g\) and \(E_g\) for the mantissa and exponent of the constant, the estimates form a piecewise linear function that approximates the inverse square root curve, with the pieces determined by conditions such as whether \(E\) is odd and whether \(M_g < M/2\). For the classic constant value, one of the cases can never occur, resulting in a three-part function. While the classic exponent can be derived, no one knows how the classic mantissa was derived. Lomont approximates it by minimizing the maximum error along the piecewise function; minimizing this error over the function means optimizing nine possible cases. Note that the error is smallest near 0.45. If the E part of 0x5f3759df is set to 127, the result will be \(1.M\); the constant 0x3fb759df does this, which shows the mantissa has a value of 0.432430. Better guesses for the constant have been computed, and they are close to the classic estimate; the actual source or method that generated the classic constant is not known.

Here's a small program for testing the method. It will also time the various inverse square root methods. The number of Newton iterations greatly impacts the performance of the estimate calculations, so experiment with those and decide if the accuracy is worth the cost of the integer hacks. On modern PCs, the standard floating point calculation is probably faster.