Error Bounds

How Good is Your Approximation?

Whenever you approximate something, you should be concerned about how good the approximation is. The error, E, of an approximation is defined to be the absolute value of the difference between the actual value and the approximation. If T_n(x) is the Taylor/Maclaurin approximation of degree n for a function f(x), then the error is E=\left| f(x)-T_n(x) \right|. This post will discuss the two most common ways of getting a handle on the size of the error: the alternating series error bound and the Lagrange error bound.

Both methods give you a number B that assures you that the approximation of the function at x=x_0 in the interval of convergence is within B units of the exact value. That is,

f(x_0)-B<T_n(x_0)<f(x_0)+B

or

T_n(x_0)\in \left( f(x_0)-B,\ f(x_0)+B \right).

Stop for a moment and consider what that means: f(x_0)-B and f(x_0)+B are the endpoints of an interval around the actual value, and the approximation will lie in this interval. Ideally, B is a small (positive) number.

Alternating Series

If a series \sum\limits_{n=1}^{\infty } a_n alternates in sign, its terms decrease in absolute value, and \lim\limits_{n\to \infty } \left| a_n \right|=0, then the series converges. The partial sums of the series will jump back and forth around the value to which the series converges. That is, if one partial sum is larger than the value, the next will be smaller, the next larger, and so on. The error is the difference between a partial sum and the limiting value, and adding one more term carries the partial sum past the actual value. Thus, for a convergent alternating series, the error is less than the absolute value of the first omitted term:

\displaystyle E=\left| \sum\limits_{k=1}^{\infty } a_k - \sum\limits_{k=1}^{n} a_k \right|<\left| a_{n+1} \right|.
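The back-and-forth behavior is easy to see numerically. Here is a minimal Python sketch (my own illustration, not an example from the post) using the alternating harmonic series, which converges to ln 2: every partial sum's error is smaller than the first omitted term, 1/(n+1).

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
# converge to ln(2).  Each partial sum's error is less than the
# absolute value of the first omitted term, 1/(n + 1).
total = 0.0
for n in range(1, 11):
    total += (-1) ** (n + 1) / n
    error = abs(math.log(2) - total)
    first_omitted = 1 / (n + 1)
    assert error < first_omitted
    print(f"n={n:2d}  S_n={total:.6f}  error={error:.6f}  bound={first_omitted:.6f}")
```

Notice in the printout that the partial sums alternately overshoot and undershoot ln 2 ≈ 0.693147, exactly as described above.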

Example: \sin(0.2)\approx (0.2)-\frac{(0.2)^3}{3!}=0.1986666667. The absolute value of the first omitted term is \left| \frac{(0.2)^5}{5!} \right|=2.6666\bar{6}\times 10^{-6}. So our estimate should be between \sin(0.2)-2.6666\bar{6}\times 10^{-6} and \sin(0.2)+2.6666\bar{6}\times 10^{-6} (that is, between 0.1986666641 and 0.1986719975), which it is. Of course, working with more complicated series, we usually do not know what the actual value is (or we wouldn't be approximating). So an error bound like 2.6666\bar{6}\times 10^{-6} assures us that our estimate is correct to at least 5 decimal places.
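The same check can be run in a few lines of Python (a sketch of the computation above, not part of the original post):

```python
import math

x = 0.2
estimate = x - x**3 / math.factorial(3)   # the two nonzero Maclaurin terms
bound = x**5 / math.factorial(5)          # first omitted term: 2.666...e-06
actual_error = abs(math.sin(x) - estimate)

assert actual_error < bound               # the estimate is within the bound
print(estimate, bound, actual_error)
```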

The Lagrange Error Bound

Taylor’s Theorem: If f is a function with derivatives through order n + 1 on an interval I containing a, then, for each x in I, there exists a number c between x and a such that

\displaystyle f(x)=\sum\limits_{k=0}^{n}{\frac{f^{(k)}(a)}{k!}(x-a)^k}+\frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}

The number \displaystyle R=\frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} is called the remainder. The equation above says that if you can find the correct c, the function is exactly equal to T_n(x) + R. Notice that the form of the remainder is the same as the other terms, except that the derivative is evaluated at the mysterious c.

The trouble is that you cannot find c without knowing the exact value; if we knew that, there would be no need to approximate.
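In the sin(0.2) toy example we do happen to know the exact value, so purely as an illustration of the theorem we can recover a valid c numerically (a Python sketch of my own, not from the post; it uses n = 4, since the degree-3 and degree-4 Maclaurin polynomials for sine coincide):

```python
import math

x = 0.2
T4 = x - x**3 / 6        # degree-4 Maclaurin polynomial for sin (x^4 term is 0)
R = math.sin(x) - T4     # the exact remainder
# Taylor's Theorem with n = 4:  R = f^(5)(c)/5! * x^5 = cos(c) * x^5/120
# for some c between 0 and x.  Solve for c:
c = math.acos(R * 120 / x**5)
assert 0 < c < x         # c really does lie in the promised interval
print(c)
```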

Corollary – Lagrange Error Bound. 

\displaystyle \left| \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1} \right|\le \left( \text{max}\left| f^{(n+1)}(x) \right| \right)\frac{\left| x-a \right|^{n+1}}{(n+1)!}

The number \displaystyle \left( \text{max}\left| f^{(n+1)}(x) \right| \right)\frac{\left| x-a \right|^{n+1}}{(n+1)!}\ge \left| R \right| is called the Lagrange error bound. The expression \left( \text{max}\left| f^{(n+1)}(x) \right| \right) means the maximum of the absolute value of the (n + 1)st derivative on the interval between a and x. The corollary says that this number is at least as large as the amount we need to add to (or subtract from) our estimate to make it exact; this is the bound on the error. It requires us, in effect, to substitute the maximum value of the (n + 1)st derivative on the interval from a to x for f^{(n+1)}(c). This gives us a number equal to or larger than the remainder, and hence a bound on the error.

Example: Use the same example, \sin(0.2), with two terms. The fifth derivative of \sin(x) is \cos(x), so the Lagrange error bound involves the maximum of \left| \cos(c) \right| for c between 0 and 0.2, but if we could evaluate cosines exactly there would be a lot easier ways to find the sine. This is a common problem, so instead we note that whatever \left| \cos(c) \right| is, it is no more than 1. So the number \left( 1 \right)\frac{\left| (0.2-0)^5 \right|}{5!}=2.6666\bar{6}\times 10^{-6} is at least as large as the Lagrange error bound, and our estimate will be correct to at least 5 decimal places.

This “trick” is fairly common. If we cannot find the number we need, we can use a value that gives us a larger number and still get a good handle on the error in our approximation.
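Here is the same "use a larger, easier number" trick on a second, made-up example of my own (not from the post): approximating e^{0.5} with a degree-3 Maclaurin polynomial. The maximum of the fourth derivative e^c on [0, 0.5] is e^{0.5} itself, which is exactly the number we are trying to find, so we bound it by the easier number 3 (since e < 3):

```python
import math

x, n = 0.5, 3
T3 = sum(x**k / math.factorial(k) for k in range(n + 1))  # 1 + x + x^2/2 + x^3/6
# Every derivative of e^x is e^x; on [0, 0.5] its max is e^0.5 < e < 3,
# so 3 is an easy (if generous) stand-in for the true maximum.
lagrange_bound = 3 * abs(x) ** (n + 1) / math.factorial(n + 1)
actual_error = abs(math.exp(x) - T3)

assert actual_error < lagrange_bound
print(lagrange_bound)   # 0.0078125
print(actual_error)     # about 0.00289
```

The bound is generous (the true error is about a third of it), but it is honest: it guarantees two decimal places without ever evaluating e^{0.5}.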

FYI: the actual error is \left| \sin(0.2)-0.1986666667 \right|\approx 2.6641\times 10^{-6}, comfortably inside the bound.

Corrected: February 3, 2015


3 thoughts on “Error Bounds”

  1. Your notation is a little confusing.

    In the expression

    max |f^(n+1)(x)| |x-a|^(n+1)

    the first occurrence of “x” is bound by the max operator but the second occurrence is free. You don’t indicate the scope of the max operator or the domain of the bound variable. In any event, it is potentially confusing to use the same variable letter in an expression both as a free variable and a bound variable.

    Then later you substitute the constant cos(0.2) into both occurrences of “x”. But the first occurrence is bound. You can’t substitute into a bound occurrence of a variable.



    • Jim

      I made a correction to the post to make clear that \left( \text{max}\left| f^{(n+1)}(x) \right| \right) refers to the maximum of the absolute value of the (n + 1)st derivative.

      As for the (0.2) substitution, it was just for purposes of the example. In this example the maximum value of |cos(x)| on the interval occurs at 0, but it is not necessary to know this, since, as usual, we end up substituting a value at least as large, namely \cos(0)=1.

