The **truncation error** is the error made by ignoring all but a finite number of terms of an infinite series. In numerical analysis and scientific computation, it is the error committed when an infinite sum is approximated by a finite sum. For example, if we approximate the sine function by the first two non-zero terms of its Taylor series, as in sin(x) ≈ x − x³/6 for small x, the resulting error is a truncation error. It is present even with infinite-precision arithmetic, because it is caused by the truncation of the infinite Taylor series to form the algorithm.
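As a concrete illustration (a minimal sketch; the function name and evaluation point are chosen here for the example), the two-term sine approximation can be compared against the library sine. For this alternating series with decreasing terms, the magnitude of the first omitted term, x⁵/120, bounds the truncation error:

```python
import math

def sin_two_terms(x):
    """Approximate sin(x) by the first two non-zero Taylor terms."""
    return x - x**3 / 6.0

x = 0.1
approx = sin_two_terms(x)
exact = math.sin(x)
trunc_error = abs(exact - approx)

# For an alternating series with terms decreasing in magnitude,
# the first omitted term bounds the truncation error.
bound = x**5 / 120.0
print(trunc_error, bound)
```

Note that this error exists regardless of floating-point precision: it comes from dropping the tail of the series, not from rounding.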

Often, the term truncation error also covers the discretization error, which is the error that arises from taking a finite number of steps in a calculation to approximate an infinite process. For example, in numerical methods for ordinary differential equations, the continuously varying solution of the differential equation is approximated by a process that advances step by step, and the error this introduces is a discretization (or truncation) error. More generally, whenever a numerical process is truncated after a finite number of iterations to simplify the calculation, a truncation error occurs. The truncation error can be reduced by using a more accurate numerical model, which usually increases the number of arithmetic operations.

The total numerical error in a process can be estimated as the sum of the rounding errors and truncation errors in the process. Given an infinite series whose terms can be viewed as the equal-width rectangles of a Riemann sum, the truncation error (the omitted tail of the series) can be bounded by comparing the tail with the corresponding integral. Occasionally, by mistake, the rounding error (the consequence of using finite-precision floating-point numbers in computers) is also called a truncation error, especially when a number is rounded by chopping.
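As an illustrative instance of this Riemann-sum comparison (the series Σ 1/n² is our own example, not from the text), the tail left after summing N terms is squeezed between two integral bounds, 1/(N+1) and 1/N:

```python
import math

# Truncate the series sum(1/n^2) = pi^2/6 after N terms.
# Viewing the remaining terms as unit-width rectangles of a Riemann sum
# for the integral of 1/x^2 gives: 1/(N+1) < tail < 1/N.
N = 1000
partial = sum(1.0 / n**2 for n in range(1, N + 1))
exact = math.pi**2 / 6
tail = exact - partial  # the truncation error
print(tail, 1.0 / (N + 1), 1.0 / N)
```

Here the truncation error after 1000 terms is about 10⁻³, far larger than the rounding error of the floating-point summation, which illustrates why the two error sources are distinct even though both contribute to the total numerical error.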
