The other day, in the course of about 10 minutes, I came across two interesting things about trapezoidal approximations that I thought I would share with you.

The first was a link to a story about how the ancient Babylonian astronomers, sometime between 350 and 50 BCE, used trapezoids to, in effect, find the area under a velocity-time graph tracking Jupiter's motion. This was an NPR story based on a January 2016 *Science* magazine article in which the author, Mathieu Ossendrijver, discusses his work deciphering cuneiform tablets written over 1,400 years before the technique showed up in Europe.

The second was a question asked on the AP Calculus bulletin board. A teacher asked, “Can someone please help me answer this question a student posed the other day? We were comparing left, right, and midpoint rectangular area approximations and trapezoidal approximations. He asked, since the trapezoidal calculation is the best estimate, what is the use of LRAM and RRAM?” Here is an expanded form of my answer.

There are several things to consider here.

1. First, if all you need is an estimate of the area or integral of a continuous function, then a trapezoid sum is certainly better than the left Riemann sum (left RΣ) or the right RΣ. Better, yes; the “best,” maybe not: midpoint sums are about as good, and parabola sums (Simpson’s Rule) are better.

2. Another reason to do left RΣ and right RΣ with small values of *n* is simply to give students practice in setting up Riemann sums so that they will be familiar with them when they move on to finding their limits and getting ready to define definite integrals.

3. A RΣ for a function *f* on a closed interval [a, b] is formed by partitioning the interval into subintervals, taking exactly one function value from each closed subinterval, multiplying that value by the width of its subinterval, and adding the results. You may pick the function value any way you want: the left end, the middle, the right end, or any point at random, choosing one way in one subinterval and another way in the next. One way is to pick the smallest function value in each subinterval; this gives a RΣ called the lower RΣ. Likewise, you could pick the largest value in each subinterval; this gives the upper RΣ. Now it is true that
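To make the comparison in the first point concrete, here is a quick Python sketch (the function names and the test integral are my own choices for illustration, not part of the original discussion) computing the left, right, midpoint, trapezoid, and Simpson approximations of ∫ sin x dx on [0, π], whose exact value is 2:

```python
import math

def left_sum(f, a, b, n):
    """Left Riemann sum with n equal subintervals."""
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(n))

def right_sum(f, a, b, n):
    """Right Riemann sum with n equal subintervals."""
    dx = (b - a) / n
    return dx * sum(f(a + (i + 1) * dx) for i in range(n))

def midpoint_sum(f, a, b, n):
    """Midpoint Riemann sum with n equal subintervals."""
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

def trapezoid_sum(f, a, b, n):
    """Trapezoid sum = average of the left and right sums."""
    return (left_sum(f, a, b, n) + right_sum(f, a, b, n)) / 2

def simpson_sum(f, a, b, n):
    """Simpson's Rule, written as the weighted average (2M + T)/3."""
    return (2 * midpoint_sum(f, a, b, n) + trapezoid_sum(f, a, b, n)) / 3

# Exact value of the integral of sin(x) on [0, pi] is 2.
for name, method in [("left", left_sum), ("right", right_sum),
                     ("midpoint", midpoint_sum), ("trapezoid", trapezoid_sum),
                     ("Simpson", simpson_sum)]:
    print(f"{name:9s} {method(math.sin, 0.0, math.pi, 10):.6f}")
```

Even with only 10 subintervals, the midpoint sum’s error is about half the trapezoid sum’s, and Simpson’s Rule is far more accurate than either, which is the ranking described above.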

lower RΣ ≤ (any/all other RΣs) ≤ upper RΣ

Then as you add more partition points (*n* approaches infinity, or Δ*x* approaches 0, etc.) the lower sums increase and the upper sums decrease. The sequence of lower sums is increasing and bounded above (by any upper sum) and therefore converges to its least upper bound. The sequence of upper sums is decreasing and bounded below (by any lower sum) and therefore converges to its greatest lower bound.

If the lower RΣ and the upper RΣ approach the same value, then ALL the other RΣs approach that same value by the *Squeeze Theorem*. This value is then defined as the definite integral of *f* from a to b.
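The squeeze can be watched numerically. This is a sketch with helper names of my own invention; note that it only *approximates* the smallest and largest values on each subinterval by sampling densely, since finding true extrema in general would require calculus:

```python
import math

def lower_upper_sums(f, a, b, n, samples=200):
    """Approximate the lower and upper Riemann sums on n subintervals
    by sampling each subinterval densely to estimate its min and max."""
    dx = (b - a) / n
    lower = upper = 0.0
    for i in range(n):
        xs = [a + i * dx + j * dx / samples for j in range(samples + 1)]
        ys = [f(x) for x in xs]
        lower += min(ys) * dx   # smallest sampled value in the subinterval
        upper += max(ys) * dx   # largest sampled value in the subinterval
    return lower, upper

# f(x) = sin(x) on [0, 3] is not monotonic (it rises to pi/2, then falls),
# so the lower/upper sums here are not simply the left/right sums.
# Exact integral: 1 - cos(3).
for n in (4, 16, 64, 256):
    lo, hi = lower_upper_sums(math.sin, 0.0, 3.0, n)
    print(f"n={n:4d}  lower={lo:.5f}  upper={hi:.5f}")
```

As *n* grows, the lower sums climb and the upper sums fall, pinching every other Riemann sum toward 1 − cos 3 ≈ 1.98999.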

In most AP Calculus courses the textbooks do not deal with upper and lower sums. Instead, they deal with the left RΣ and right RΣ on intervals on which *f* is only increasing (or only decreasing). In this case the lower RΣ = the left RΣ and the upper RΣ = the right RΣ (or the other way around for decreasing functions).
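For an increasing function, this identification of left with lower and right with upper is easy to check numerically. A short sketch (function names are my own) using f(x) = eˣ on [0, 1], whose exact integral is e − 1:

```python
import math

def left_sum(f, a, b, n):
    """Left Riemann sum with n equal subintervals."""
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(n))

def right_sum(f, a, b, n):
    """Right Riemann sum with n equal subintervals."""
    dx = (b - a) / n
    return dx * sum(f(a + (i + 1) * dx) for i in range(n))

# e^x is increasing on [0, 1], so on each subinterval the smallest value
# sits at the left endpoint and the largest at the right endpoint:
# the left sum IS the lower sum, and the right sum IS the upper sum.
exact = math.e - 1
for n in (10, 100, 1000):
    print(f"n={n:5d}  left={left_sum(math.exp, 0, 1, n):.5f}  "
          f"right={right_sum(math.exp, 0, 1, n):.5f}  exact={exact:.5f}")
```

The left sums increase toward e − 1 from below while the right sums decrease toward it from above, exactly the behavior of the lower and upper sums in the general argument.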

So this is why you need the left RΣ and right RΣ: not so much to approximate, but to complete the theory leading to the definite integral.