What is Truncation Error and How to Calculate it?

Truncation error occurs when an infinite process is shortened after a finite number of terms or iterations. Learn how to calculate it and how it affects numerical methods.


Truncation error is a type of error that occurs when an infinite process is cut short after a finite number of terms or iterations, or when a limit is not fully reached. It is a common source of error in numerical methods used to solve continuous problems. In numerical analysis and scientific computing, it is the discrepancy that arises when a process with infinitely many steps is approximated by a finite number of steps. Once the differential equation and the discrete model are specified, the truncation error can in many cases be calculated by hand or with symbolic software.

For one-step methods, the local truncation error measures how far the exact solution of the differential equation is from satisfying the difference equation. Knowing the truncation error or other error measures is important for program verification, since it allows convergence rates to be established empirically. Truncation error generally increases as the step size increases, while rounding error decreases, because fewer finite-precision operations are performed. Occasionally, rounding error (the consequence of using finite-precision floating-point numbers in computers) is also mistakenly called truncation error, especially when a number is rounded by chopping.
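Establishing a convergence rate empirically, as described above, can be sketched as follows. This is a minimal illustration, not a method prescribed by the article: the test problem y' = -2y with y(0) = 1 (exact solution e^(-2t)) and the forward Euler scheme are assumptions chosen for simplicity.

```python
import math

def euler_final(h, t_end=1.0):
    """Integrate y' = -2y, y(0) = 1, with forward Euler; return y(t_end)."""
    y = 1.0
    n = round(t_end / h)       # number of equal steps
    for _ in range(n):
        y += h * (-2.0 * y)    # one Euler step: y_{k+1} = y_k + h*f(y_k)
    return y

exact = math.exp(-2.0)
hs = [0.1, 0.05, 0.025]
errors = [abs(euler_final(h) - exact) for h in hs]

# Observed convergence order between successive step sizes:
#   p = log(e_i / e_{i+1}) / log(h_i / h_{i+1})
rates = [math.log(errors[i] / errors[i + 1]) / math.log(hs[i] / hs[i + 1])
         for i in range(len(hs) - 1)]
print(rates)  # for a first-order method, these should approach 1
```

Halving the step size roughly halves the global error, so the observed order tends to 1, matching the theoretical first-order convergence of forward Euler. A mismatch between the observed and expected rate is a strong signal of a programming error.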

In numerical analysis and scientific computing, truncation error is the error caused by approximating a mathematical process. It is present even with infinite-precision arithmetic, because it comes from truncating an infinite Taylor series to form the algorithm. The term often also covers discretization error, the error that arises from taking a finite number of steps in a calculation to approximate an infinite process. For example, in numerical methods for ordinary differential equations, the continuously varying solution of the differential equation is approximated by a process that advances step by step, and the error this introduces is a discretization or truncation error.
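The Taylor-series origin of the error can be made concrete for a single step. The sketch below, using the illustrative problem y' = y with exact solution e^t (an assumption, not from the text), compares one forward-Euler step with the exact value: dropping all Taylor terms past h*y'(t) leaves a local error whose leading term is (h^2/2)*y''(t).

```python
import math

t, h = 0.0, 0.01
y = math.exp(t)                   # exact solution value at t
euler_step = y + h * y            # one Euler step, since f(t, y) = y
exact_next = math.exp(t + h)      # exact solution value at t + h

local_error = exact_next - euler_step
leading_term = (h ** 2 / 2) * math.exp(t)   # (h^2/2) * y''(t), the first dropped term
print(local_error, leading_term)
```

The computed local error agrees with the leading dropped Taylor term to within a fraction of a percent, confirming that the one-step (local truncation) error of forward Euler is O(h^2).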

Given an infinite series, the truncation error incurred by using only a finite number of terms can be calculated. For example, if we approximate the sine function by the first two non-zero terms of its Taylor series, sin(x) ≈ x − x³/6 for small x, the resulting error is a truncation error.
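The sine example above can be checked numerically. For an alternating series like this one, the magnitude of the first omitted term, x^5/120, bounds the truncation error:

```python
import math

def sin_two_terms(x):
    """Approximate sin(x) by the first two non-zero Taylor terms."""
    return x - x ** 3 / 6

x = 0.1
approx = sin_two_terms(x)
exact = math.sin(x)
trunc_err = exact - approx      # the truncation error of the approximation
bound = x ** 5 / 120            # first omitted Taylor term bounds the error
print(trunc_err, bound)
```

For x = 0.1 the truncation error is on the order of 1e-7, just under the bound x^5/120, so the two-term approximation is already accurate to about seven decimal places near zero.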

Charlotte Wilson
