# Infinite Sequences and Series – Unit 10

Unit 10 covers sequences and series. These are BC only topics (CED – 2019 p. 177 – 197). These topics account for about 17 – 18% of questions on the BC exam.

### Topics 10.1 – 10.2

Topic 10.1: Defining Convergent and Divergent Series.

Topic 10.2: Working with Geometric Series, including the formula for the sum of a convergent geometric series.

### Topics 10.3 – 10.9 Convergence Tests

The tests listed below are the ones tested on the BC Calculus exam; other methods are not tested. However, teachers may include additional methods.

Topic 10.3: The nth Term Test for Divergence.

Topic 10.4: Integral Test for Convergence. See Good Question 14.

Topic 10.5: Harmonic Series and p-Series. The harmonic series, the alternating harmonic series, and p-series.

Topic 10.6: Comparison Tests for Convergence. The Comparison Test and the Limit Comparison Test.

Topic 10.7: Alternating Series Test for Convergence.

Topic 10.8: Ratio Test for Convergence.

Topic 10.9: Determining Absolute and Conditional Convergence. Absolute convergence implies convergence.

### Topics 10.10 – 10.12 Taylor Series and Error Bounds

Topic 10.10: Alternating Series Error Bound.

Topic 10.11: Finding Taylor Polynomial Approximations of a Function.

Topic 10.12: Lagrange Error Bound.

### Topics 10.13 – 10.15 Power Series

Topic 10.13: Radius and Interval of Convergence of a Power Series. The Ratio Test is used almost exclusively to find the radius of convergence. Term-by-term differentiation and integration of a power series gives a series with the same center and radius of convergence. The interval may be different at the endpoints.

Topic 10.14: Finding the Taylor and Maclaurin Series of a Function. Students should memorize the Maclaurin series for $\displaystyle \frac{1}{{1-x}}$, sin(x), cos(x), and $e^x$.

Topic 10.15: Representing Functions as Power Series. Finding the power series of a function by differentiation, integration, algebraic processes, substitution, or properties of geometric series.

### Timing

The suggested time for Unit 10 is about 17 – 18 BC classes of 40 – 50 minutes; this includes time for testing, etc.

### Previous posts on these topics:

Before sequences

Amortization Using finite series to find your mortgage payment. (Suitable for pre-calculus as well as calculus)

A Lesson on Sequences An investigation, which could be used as early as Algebra 1, showing how irrational numbers are the limit of a sequence of approximations. Also, an introduction to the Completeness Axiom.

Everyday Series

Convergence Tests

Reference Chart

Which Convergence Test Should I Use? Part 1 Pretty much any one you want!

Which Convergence Test Should I Use? Part 2 Specific hints and a discussion of the usefulness of absolute convergence

Good Question 14 on the Integral Test

Sequences and Series

Graphing Taylor Polynomials Graphing calculator hints

Introducing Power Series 1

Introducing Power Series 2

Introducing Power Series 3

New Series from Old 1 substitution (Be sure to look at example 3)

New Series from Old 2 Differentiation

New Series from Old 3 Series for rational functions using long division and geometric series

Geometric Series – Far Out An instructive “mistake.”

A Curiosity An unusual Maclaurin Series

Synthetic Summer Fun Synthetic division and calculus, including finding the (finite) Taylor series of a polynomial.

Error Bounds

Error Bounds Error bounds in general, the Alternating Series error bound, and the Lagrange error bound

The Lagrange Highway The Lagrange error bound.

What’s the “Best” Error Bound?

Review Notes

Type 10: Sequences and Series Questions

# What’s the “Best” Error Bound?

I know a lot of people like mathematics because there is only one answer; everything is exact. Alas, that’s not really the case. Numbers written as non-terminating decimals are not “exact;” they must be rounded or truncated somewhere. Even things like $\sqrt{7}$, $\pi$, and 5/17 may look “exact,” but if you ever had to measure something to those values, you’re back to using decimal approximations.

There are many situations in mathematics where it is necessary to find and use approximations. Two of these that are usually considered in introductory calculus courses are approximating the value of a definite integral using the Trapezoidal Rule or Simpson’s Rule and approximating the value of a function using a Taylor or Maclaurin polynomial.

If you are using an approximation, you need and want to know how good it is; how much it differs from the actual (exact) value. Any good approximation technique comes with a way to do that. The Trapezoidal Rule and Simpson’s Rule both come with expressions for determining how close to the actual value they are. (Trapezoidal approximations, as opposed to the Trapezoidal Rule and Simpson’s Rule per se, are tested on the AP Calculus Exams. The error is not tested.) The error approximation using a Taylor or Maclaurin polynomial is tested on the exams.

The error is defined as the absolute value of the difference between the approximated value and the exact value. Since, if you know the exact value, there is no reason to approximate, finding the exact error is not practical. (And if you could find the exact error, you could use it to find the exact value.) What you can determine is a bound on the error; a way to say that the approximation is at most this far from the actual value. The BC Calculus exams test two ways of doing this, the Alternating Series Error Bound (ASEB) and the Lagrange Error Bound (LEB). These two techniques are discussed in my previous post, Error Bounds. The expressions used below are discussed there.

Examining Some Error Bounds

We will look at an example and the various ways of computing an error bound. The example, which seems to come up this time every year, is to use the third-degree Maclaurin polynomial for sin(x) to approximate sin(0.1).

Using technology, to twelve decimal places, sin(0.1) ≈ 0.099833416647.

The Maclaurin (2n – 1)th-degree polynomial for sin(x) is

$\displaystyle x-\frac{1}{{3!}}{{x}^{3}}+\frac{1}{{5!}}{{x}^{5}}-\cdots +\frac{{{{\left( {-1} \right)}}^{{n+1}}}}{{\left( {2n-1} \right)!}}{{x}^{{2n-1}}}$

So, using the third-degree polynomial, the approximation is

$\sin \left( {0.1} \right)\approx 0.1-\frac{1}{6}{{\left( {0.1} \right)}^{3}}=0.099833333333...$

The error is the difference between the approximation and the twelve-place value:

$\displaystyle \text{Error}=0.00000008331349\approx 8.331349\times {{10}^{{-8}}}$
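This arithmetic is easy to verify numerically; here is a quick check in Python (my sketch, standard library only):

```python
import math

x = 0.1
# Third-degree Maclaurin polynomial for sin(x): x - x^3/3!
approx = x - x**3 / math.factorial(3)

# Actual error: distance from the true value of sin(0.1)
error = abs(math.sin(x) - approx)

print(approx)  # about 0.0998333333
print(error)   # about 8.3313e-08
```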

Using the Alternating Series Error Bound:

Since the series meets the hypotheses for the ASEB (alternating, decreasing in absolute value, and the limit of the nth term is zero), the error is less than the first omitted term. Here that is

$\displaystyle \frac{1}{{5!}}{{\left( {0.1} \right)}^{5}}\approx 0.0000000833333\approx 8.33333\times {{10}^{-8}}={{B}_{1}}$

The actual error is less than B1, as promised.

Using the Lagrange Error Bound:

For the Lagrange Error Bound we must make a few choices. Nevertheless, each choice gives an error bound larger than the actual error, as it should.

For the third-degree Maclaurin polynomial, the LEB is given by

$\displaystyle \left| {\frac{{\max {{f}^{{(4)}}}\left( z \right)}}{{4!}}{{{(0.1)}}^{4}}} \right|$ for some number z between 0 and 0.1.

The fourth derivative of sin(x) is sin(x) and its maximum absolute value between 0 and 0.1 is |sin(0.1)|. So, the error bound is

$\displaystyle \left| {\frac{{\sin (0.1)}}{{4!}}{{{(0.1)}}^{4}}} \right|\approx 4.15973...\times {{10}^{{-7}}}={{B}_{2}}$

However, since we’re approximating sin(0.1), we really shouldn’t use it. In a different example, we probably won’t know it.

What to do?

The answer is to replace it with something larger. One choice is to use 0.1 since 0.1 > sin(0.1). This gives

$\displaystyle \left| {\frac{{0.1}}{{4!}}{{{(0.1)}}^{4}}} \right|\approx 4.166666666\times {{10}^{{-7}}}={{B}_{3}}$

The usual choice for sine and cosine situations is to replace the maximum of the derivative factor with 1, which is the largest value of the sine or cosine.

$\displaystyle \left| {\frac{1}{{4!}}{{{(0.1)}}^{4}}} \right|\approx 4.166666666\times {{10}^{{-6}}}={{B}_{4}}$

Since the 4th-degree term is zero, the third-degree Maclaurin Polynomial is equal to the fourth-degree Maclaurin Polynomial. Therefore, we may use the fifth derivative in the error bound expression, $\displaystyle \left| {\frac{{\max {{f}^{{(5)}}}\left( z \right)}}{{5!}}{{{(0.1)}}^{5}}} \right|$, to calculate the error bound. The 5th derivative of sin(x) is cos(x), and its maximum value on the interval is cos(0) = 1.

$\displaystyle \left| {\frac{{\cos (0)}}{{5!}}{{{(0.1)}}^{5}}} \right|\approx 8.33333333\times {{10}^{{-8}}}={{B}_{5}}$
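All five bounds above, and the fact that each exceeds the actual error, can be checked with a few lines of Python (my sketch, standard library only; the names B1–B5 follow the text):

```python
import math

x = 0.1
approx = x - x**3 / math.factorial(3)             # third-degree Maclaurin polynomial
error = abs(math.sin(x) - approx)                 # actual error, about 8.3313e-08

B1 = x**5 / math.factorial(5)                     # ASEB: first omitted term
B2 = abs(math.sin(x)) * x**4 / math.factorial(4)  # LEB with max |f^(4)| = sin(0.1)
B3 = 0.1 * x**4 / math.factorial(4)               # sin(0.1) replaced by the larger 0.1
B4 = 1 * x**4 / math.factorial(4)                 # maximum of the derivative replaced by 1
B5 = 1 * x**5 / math.factorial(5)                 # fifth-derivative version, max |cos| = 1

# Every bound is larger than the actual error, as promised.
assert all(error < B for B in (B1, B2, B3, B4, B5))
```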

I could go on ….

Since B1, B2, B3, B4, and B5 are all greater than the error, which should we use? Or should we use something else? Which is the “best”?

The error is what the error is. Fooling around with the error bound won’t change that. The error bound only assures you that your approximation is, or is not, good enough for your purposes. If you need more accuracy, you must use more terms, not fiddle with the error bound.

# 2019 CED Unit 10: Infinite Sequences and Series

Unit 10 covers sequences and series. These are BC only topics (CED – 2019 p. 177 – 197). These topics account for about 17 – 18% of questions on the BC exam.

### Timing

The suggested time for Unit 10 is about 17 – 18 BC classes of 40 – 50 minutes; this includes time for testing, etc.

### Previous posts on these topics:

Introducing Power Series 1

# Power Series 2

This is a BC topic.

Good Question 16 (11-30-2018) What you get when you substitute.

Geometric Series – Far Out (2-14-2017) A very interesting and instructive mistake

Synthetic Summer Fun (7-10-2017) Finding the Taylor series coefficients without differentiating

Error Bounds (2-22-2013) The alternating series error bound, and the Lagrange error bound

The Lagrange Highway (5-20-15) A metaphor for the error bound

REVIEW NOTES Type 10: Sequence and Series Questions (4-6-2018) A summary for reviewing sequences and series.

# More on Power Series

Continuing with posts on sequences and series

New Series from Old 1 Rewriting using substitution

New Series from Old 2 Finding series by differentiating and integrating

New Series from Old 3 Rewriting rational expressions as geometric series

Geometric Series – Far Out A look at doing a question the right way and the “wrong” way

Error Bounds The Alternating Series Error Bound and the Lagrange Error Bound

The Lagrange Highway An example explaining error bounds

Synthetic Summer Fun Using synthetic division, the Remainder Theorem, the Factor Theorem and finding the terms of a Taylor Series (Probably more than you want to know, but possibly an enrichment idea.)

# The Lagrange Highway

Recently, there was an interesting discussion on the AP Calculus Community discussion boards about the Lagrange error bound. You may link to it by clicking here. The replies by James L. Hartman and Daniel J. Teague were particularly enlightening and included files that you may download with the proof of Taylor’s Theorem (Hartman) and its geometric interpretation (Teague).

There are also two good Kahn Academy videos on Taylor’s theorem and the error bound on YouTube. The first part is here (11:26 minutes) and the second part is here (15:08 minutes).

I wrote an earlier blog post on the topic of error bounds on February 22, 2013, that you can find here.

Taylor’s Theorem says that

If f is a function with derivatives through order n + 1 on an interval I containing a, then, for each x in I, there exists a number c between x and a such that

$\displaystyle f\left( x \right)=\sum\limits_{k=0}^{n}{\frac{{{f}^{\left( k \right)}}\left( a \right)}{k!}{{\left( x-a \right)}^{k}}}+\frac{{{f}^{\left( n+1 \right)}}\left( c \right)}{\left( n+1 \right)!}{{\left( x-a \right)}^{n+1}}$

The number $\displaystyle R=\frac{{{f}^{\left( n+1 \right)}}\left( c \right)}{\left( n+1 \right)!}{{\left( x-a \right)}^{n+1}}$ is called the remainder.

The equation above says that if you can find the correct c the function is exactly equal to Tn(x) + R.

Tn(x) is called the nth Taylor Approximating Polynomial (TAP). Notice the form of the remainder is the same as the other terms, except it is evaluated at the mysterious c that we don’t know and usually are not able to find without knowing the value we are trying to approximate.

Lagrange Error Bound (LEB)

$\displaystyle \left| \frac{{{f}^{\left( n+1 \right)}}\left( c \right)}{\left( n+1 \right)!}{{\left( x-a \right)}^{n+1}} \right|\le \left( \text{max}\left| {{f}^{\left( n+1 \right)}}\left( x \right) \right| \right)\frac{{{\left| x-a \right|}^{n+1}}}{\left( n+1 \right)!}$

The number $\displaystyle \left( \text{max}\left| {{f}^{\left( n+1 \right)}}\left( x \right) \right| \right)\frac{{{\left| x-a \right|}^{n+1}}}{\left( n+1 \right)!}\ge \left| R \right|$ is called the Lagrange Error Bound. The expression $\left( \text{max}\left| {{f}^{\left( n+1 \right)}}\left( x \right) \right| \right)$ means the maximum absolute value of the (n + 1)st derivative on the interval between a and x.

The LEB is then a positive number greater than the error in using the TAP to approximate the function f(x). In symbols, $\left| {{T}_{n}}\left( x \right)-f\left( x \right) \right|\le \text{LEB}$.
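As a sketch of how the bound is used in practice, here is a made-up example (mine, not from the post): bounding the error of the third-degree Maclaurin polynomial for $e^x$ at x = 0.5, where the maximum of $f^{(4)}(z)=e^z$ on [0, 0.5] is safely over-estimated by 3 since e < 3.

```python
import math

# Hypothetical example: third-degree Maclaurin polynomial for e^x at x = 0.5.
x, n = 0.5, 3
T3 = sum(x**k / math.factorial(k) for k in range(n + 1))
error = abs(math.exp(x) - T3)                 # about 0.00289

# f^(4)(z) = e^z <= e^0.5 < 3 on [0, 0.5]; use 3 as a safe maximum.
leb = 3 * x**(n + 1) / math.factorial(n + 1)  # 0.0078125

assert error <= leb   # the LEB exceeds the actual error, as it must
```

The bound is larger than necessary (a "wider road" in the metaphor below), but it still guarantees the approximation is within LEB of the true value.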

Here is a little story that I hope will help your students understand what all this means.

Suppose you were tasked with building a road through the interval of convergence of a Taylor Series that the function could safely travel on. Here is how you could go about it.

Build the road so that the graph of the TAP is its center line. The edges of the road are built LEB units above and below the center line. (The width of the road is about twice the LEB.) Now when the function comes through the interval of convergence it will travel safely on the road. It will not necessarily go down the center line but will not go over the edges. It may wander back and forth over the center line but will always stay on the road. Thus, you know where the function is; it is less than LEB units (vertically) from the center line, the TAP.

As shown in the example at the end of my previous post, it is often necessary to use a number larger than the minimum we could get away with for the LEB. This is because the maximum value of the derivative may be difficult to find. This amounts to building a road that is wider than necessary. The function will still remain within LEB units of the center line but will not come as close to the edges of our wider road as it may on the original road. As long as the width of the wider road is less than the accuracy we need, this will not be a problem: the TAP will give an accurate enough approximation of the function.

# Error Bounds

Whenever you approximate something, you should be concerned about how good your approximation is. The error, E, of any approximation is defined to be the absolute value of the difference between the actual value and the approximation. If Tn(x) is the Taylor/Maclaurin approximation of degree n for a function f(x) then the error is $E=\left| f\left( x \right)-{{T}_{n}}\left( x \right) \right|$.  This post will discuss the two most common ways of getting a handle on the size of the error: the Alternating Series error bound, and the Lagrange error bound.

Both methods give you a number B that will assure you that the approximation of the function at $x={{x}_{0}}$ in the interval of convergence is within B units of the exact value. That is,

$\left( f\left( {{x}_{0}} \right)-B \right)<{{T}_{n}}\left( {{x}_{0}} \right)<\left( f\left( {{x}_{0}} \right)+B \right)$

or

${{T}_{n}}\left( {{x}_{0}} \right)\in \left( f\left( {{x}_{0}} \right)-B,\ f\left( {{x}_{0}} \right)+B \right)$.

Stop for a moment and consider what that means: $f\left( {{x}_{0}} \right)-B$ and $f\left( {{x}_{0}} \right)+B$   are the endpoints of an interval around the actual value and the approximation will lie in this interval. Ideally, B is a small (positive) number.

Alternating Series

If a series $\sum\limits_{n=1}^{\infty }{{{a}_{n}}}$ alternates signs, decreases in absolute value, and $\underset{n\to \infty }{\mathop{\lim }}\,\left| {{a}_{n}} \right|=0$, then the series will converge. The partial sums of the series jump back and forth around the value to which the series converges: if one partial sum is larger than the value, the next will be smaller, and the next larger, etc. The error is the difference between any partial sum and the limiting value, and adding one additional term carries the partial sum past the actual value. Thus, for a series that meets the conditions of the alternating series test, the error is less than the absolute value of the first omitted term:

$\displaystyle E=\left| \sum\limits_{k=1}^{\infty }{{{a}_{k}}}-\sum\limits_{k=1}^{n}{{{a}_{k}}} \right|<\left| {{a}_{n+1}} \right|$.

Example: $\sin (0.2)\approx (0.2)-\frac{{{(0.2)}^{3}}}{3!}=0.1986666667$. The absolute value of the first omitted term is $\left| \frac{{{(0.2)}^{5}}}{5!} \right|=2.6666\bar{6}\times {{10}^{-6}}$. So the actual value of sin(0.2) should be within $2.6666\bar{6}\times {{10}^{-6}}$ of our estimate (that is, between 0.1986640000 and 0.1986693334), which it is. Of course, working with more complicated series, we usually do not know what the actual value is (or we wouldn’t be approximating). So an error bound like $2.6666\bar{6}\times {{10}^{-6}}$ assures us that our estimate is correct to at least 5 decimal places.
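A quick numerical confirmation of this example (my sketch, Python standard library only):

```python
import math

x = 0.2
estimate = x - x**3 / math.factorial(3)  # two-term estimate, about 0.1986666667
bound = x**5 / math.factorial(5)         # first omitted term, about 2.6667e-06

# The actual value lies within `bound` of the estimate.
assert abs(math.sin(x) - estimate) < bound
```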

The Lagrange Error Bound

Taylor’s Theorem: If f is a function with derivatives through order n + 1 on an interval I containing a, then, for each x in I, there exists a number c between x and a such that

$\displaystyle f\left( x \right)=\sum\limits_{k=0}^{n}{\frac{{{f}^{\left( k \right)}}\left( a \right)}{k!}{{\left( x-a \right)}^{k}}}+\frac{{{f}^{\left( n+1 \right)}}\left( c \right)}{\left( n+1 \right)!}{{\left( x-a \right)}^{n+1}}$

The number $\displaystyle R=\frac{{{f}^{\left( n+1 \right)}}\left( c \right)}{\left( n+1 \right)!}{{\left( x-a \right)}^{n+1}}$ is called the remainder.

The equation above says that if you can find the correct c, the function is exactly equal to Tn(x) + R. Notice the form of the remainder is the same as the other terms, except it is evaluated at the mysterious c. The trouble is we almost never can find c without knowing the exact value of f(x); but if we knew that, there would be no need to approximate. However, often without knowing the exact value of c, we can still bound the remainder and thereby know how closely the polynomial Tn(x) approximates the value of f(x) for values of x in the interval I.

Corollary – Lagrange Error Bound.

$\displaystyle \left| \frac{{{f}^{\left( n+1 \right)}}\left( c \right)}{\left( n+1 \right)!}{{\left( x-a \right)}^{n+1}} \right|\le \left( \text{max}\left| {{f}^{\left( n+1 \right)}}\left( x \right) \right| \right)\frac{{{\left| x-a \right|}^{n+1}}}{\left( n+1 \right)!}$

The number $\displaystyle \left( \text{max}\left| {{f}^{\left( n+1 \right)}}\left( x \right) \right| \right)\frac{{{\left| x-a \right|}^{n+1}}}{\left( n+1 \right)!}\ge \left| R \right|$ is called the Lagrange Error Bound. The expression $\left( \text{max}\left| {{f}^{\left( n+1 \right)}}\left( x \right) \right| \right)$ means the maximum absolute value of the (n + 1)st derivative on the interval between a and x. The corollary says that this number is larger than the amount we need to add (or subtract) from our estimate to make it exact. This is the bound on the error. It requires us to, in effect, substitute the maximum value of the (n + 1)st derivative on the interval from a to x for ${{f}^{(n+1)}}\left( x \right)$. This will give us a number equal to or larger than the remainder and hence a bound on the error.

Example: Using the same example, sin(0.2), with two terms. The fifth derivative of $\sin (x)$ is $\cos (x)$, and on the interval from 0 to 0.2 the maximum of $\left| \cos (z) \right|$ is cos(0) = 1. So the number $\left( 1 \right)\frac{\left| {{\left( 0.2-0 \right)}^{5}} \right|}{5!}=2.6666\bar{6}\times {{10}^{-6}}$ is the Lagrange error bound, and our estimate is correct to at least 5 decimal places.

Replacing a hard-to-find maximum with a convenient larger number is a fairly common “trick.” If we cannot find the exact number we need, we can use a value that gives us a larger number and still get a good handle on the error in our approximation.

FYI: The actual error is $\left| \sin (0.2)-0.1986666667 \right|\approx 2.66413\times {{10}^{{-6}}}$, just under the bound.
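A numeric check of this bound (my sketch, Python standard library only). Note that the maximum of |cos z| on the interval [0, 0.2] occurs at z = 0, so cos(0) = 1 is the correct maximum to use; cos(0.2) is actually the minimum of |cos z| there and would give a number smaller than the true error.

```python
import math

x = 0.2
estimate = x - x**3 / math.factorial(3)
error = abs(math.sin(x) - estimate)        # about 2.6641e-06

leb = 1.0 * x**5 / math.factorial(5)       # bound using max |cos z| = cos(0) = 1
too_small = math.cos(0.2) * x**5 / math.factorial(5)  # cos(0.2) is the minimum here

assert error <= leb        # a valid bound on the error
assert too_small < error   # cos(0.2) would not give a valid bound
```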

Corrected: February 3, 2015, June 17, 2022