Determining the Indeterminate

When determining the limit of an expression, the first step is to substitute the value the independent variable approaches into the expression. If a real number results, you are all set; that number is the limit.

When you do not get a real number, the expression is often one of the several indeterminate forms listed below:

Indeterminate Forms:

\displaystyle \frac{0}{0},\quad \quad \frac{\pm \infty }{\pm \infty },\quad \quad 0\cdot \infty ,\quad \quad \infty -\infty ,\quad \quad {{0}^{0}},\quad \quad {{1}^{\infty }},\quad \quad {{\infty }^{0}}

They are called indeterminate forms because, by doing some algebraic manipulation to simplify (or sometimes complicate) the expression, its value can be determined. The calculus technique called L’Hôpital’s Rule may be used in some situations.

Part of the reason they are called indeterminate forms is that different expressions with the same indeterminate form may result in different values.

Before continuing with the discussion of indeterminate forms, I should point out that there are also determinate forms: expressions similar to those above that always result in the same value.

Determinate Forms and the values they approach:

\displaystyle \frac{1}{\pm \infty }\to 0,\quad \infty +\infty \to \infty ,\quad \infty \cdot \infty \to \infty ,\quad {{0}^{\infty }}\to 0,\quad \frac{1}{{{0}^{\infty }}}={{0}^{-\infty }}\to \infty

Returning to indeterminate forms, textbooks contain examples and exercises that illustrate how to evaluate the indeterminate forms. Here are two examples illustrating a few of the techniques that can be used to evaluate them.


Example 1:  \displaystyle \underset{x\to \infty }{\mathop{\lim }}\,{{\left( 1+\frac{1}{x} \right)}^{x}}=? This is an example of the indeterminate form {{1}^{\infty }}. With exponents, logarithms may often be used to find the value.

\displaystyle \ln \left( ? \right)=\underset{x\to \infty }{\mathop{\lim }}\,\left( x\ln \left( 1+\frac{1}{x} \right) \right)=\underset{x\to \infty }{\mathop{\lim }}\,\frac{\ln \left( 1+\frac{1}{x} \right)}{\frac{1}{x}}

The limit is now of the indeterminate form \frac{0}{0}, so L’Hôpital’s Rule may be used. Continuing:

\displaystyle \ln \left( ? \right)=\underset{x\to \infty }{\mathop{\lim }}\,\left( x\ln \left( 1+\frac{1}{x} \right) \right)=\underset{x\to \infty }{\mathop{\lim }}\,\frac{\ln \left( 1+\frac{1}{x} \right)}{\frac{1}{x}}=\underset{x\to \infty }{\mathop{\lim }}\,\frac{\left( \frac{1}{1+\frac{1}{x}} \right)\left( -{{x}^{-2}} \right)}{-{{x}^{-2}}}=\underset{x\to \infty }{\mathop{\lim }}\,\left( \frac{1}{1+\frac{1}{x}} \right)=1

So then ln(?) = 1 and ? = {{e}^{1}}=e, and therefore \displaystyle \underset{x\to \infty }{\mathop{\lim }}\,{{\left( 1+\frac{1}{x} \right)}^{x}}=e
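If you or your students want to see this limit numerically before (or after) the logarithm argument, a few lines of Python will do. This is just my own quick sketch with arbitrary sample values of x; any calculator or spreadsheet works as well.

```python
import math

# Evaluate (1 + 1/x)^x for increasingly large x and compare with e.
for x in (10, 1_000, 100_000, 10_000_000):
    value = (1 + 1 / x) ** x
    print(f"x = {x:>10,}   (1 + 1/x)^x = {value:.8f}   e = {math.e:.8f}")
```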


Example 2: \displaystyle \underset{x\to 0}{\mathop{\lim }}\,\left( \frac{\sin \left( x+\frac{\pi }{6} \right)}{x}-\frac{\sin \left( \frac{\pi }{6} \right)}{x} \right) is an example of the indeterminate form \infty -\infty .

Write the expression with a common denominator:

\displaystyle \underset{x\to 0}{\mathop{\lim }}\,\left( \frac{\sin \left( x+\frac{\pi }{6} \right)}{x}-\frac{\sin \left( \frac{\pi }{6} \right)}{x} \right)=\underset{x\to 0}{\mathop{\lim }}\,\left( \frac{\sin \left( \frac{\pi }{6}+x \right)-\sin \left( \frac{\pi }{6} \right)}{x} \right)=\cos \left( \frac{\pi }{6} \right)=\frac{\sqrt{3}}{2}

The second limit above is the definition of the derivative of \sin (x) at x=\frac{\pi }{6}. (L’Hôpital’s Rule may also be used with the second limit above.)

In fact, the definition of the derivative of any function, \displaystyle {f}'\left( a \right)=\underset{h\to 0}{\mathop{\lim }}\,\frac{f\left( a+h \right)-f\left( a \right)}{h}, gives an indeterminate expression of the form \frac{0}{0}.
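To see that \frac{0}{0} form being resolved numerically in Example 2, here is a short sketch of my own (the step sizes are arbitrary choices) that evaluates the difference quotient at a = π/6 for smaller and smaller h:

```python
import math

# Evaluate the difference quotient from Example 2 for smaller and smaller h.
a = math.pi / 6
for h in (0.1, 0.01, 0.001, 0.0001):
    dq = (math.sin(a + h) - math.sin(a)) / h
    print(f"h = {h:<7}  difference quotient = {dq:.6f}")

print("cos(pi/6) =", math.sqrt(3) / 2)   # 0.8660254...
```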



The Derivatives of Exponential Functions

Our problem for today is to differentiate a^x with the (usual) restrictions that a is a positive number and not equal to 1. The reasoning here is very different from that for finding other derivatives, and therefore I hope you and your students find it interesting.

The definition of the derivative, followed by a little algebra, tells us that

\displaystyle \frac{d}{dx}{{a}^{x}}=\underset{h\to 0}{\mathop{\lim }}\,\frac{{{a}^{x+h}}-{{a}^{x}}}{h}=\underset{h\to 0}{\mathop{\lim }}\,\frac{{{a}^{x}}{{a}^{h}}-{{a}^{x}}}{h}={{a}^{x}}\underset{h\to 0}{\mathop{\lim }}\,\frac{{{a}^{h}}-1}{h}.

Since the limit in the expression above is a number, we observe that the derivative of a^x is proportional to a^x. Each value of a gives a different constant of proportionality. For example, if a = 5, then the limit is approximately 1.609438, and so \displaystyle \frac{d}{{dx}}{{5}^{x}}\approx \left( {1.609438} \right){{5}^{x}}.

I determined this by producing a table of values for the expression in the limit near h = 0. You can do the same using a good calculator, computer, or spreadsheet.

          h                  \frac{{{5}^{h}}-1}{h}

  -0.00000030            1.60943752
  -0.00000020            1.60943765
  -0.00000010            1.60943778
   0.00000000            undefined
   0.00000010            1.60943804
   0.00000020            1.60943817
   0.00000030            1.60943830
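If you would rather not type the values in by hand, a short script reproduces the table. This is only one way to do it; a spreadsheet formula works just as well, and the digits in the last place may wobble a bit from round-off.

```python
# Reproduce the table of (5^h - 1)/h for h near 0.
a = 5
for k in range(-3, 4):
    h = k * 1e-7
    if h == 0:
        print(f"{h:>12.8f}    undefined")
    else:
        print(f"{h:>12.8f}    {(a**h - 1) / h:.8f}")
```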

That’s kind of messy and would require us to find this limit for whatever value of a we were using. It turns out that by finding the value of a for which the limit is 1 we can fix this problem. Your students can do this for themselves by changing the value of a in their table until they get the number that gives a limit of 1.

Okay, that’s going to take a while, but it makes a good challenge. The answer turns out to be close to 2.718281828459045…. Below is the table for this number.

          h                  \frac{{{a}^{h}}-1}{h}

  -0.00000030            0.99999985
  -0.00000020            0.99999990
  -0.00000010            0.99999995
   0.00000000            undefined
   0.00000010            1.00000005
   0.00000020            1.00000010
   0.00000030            1.00000015

Okay, I cheated. The number is, of course, e. Thus,

\displaystyle \frac{d}{{dx}}{{e}^{x}}={{e}^{x}}\left( {\underset{{h\to 0}}{\mathop{{\lim }}}\,\frac{{{{e}^{h}}-1}}{h}} \right)={{e}^{x}}(1)={{e}^{x}}.

The function e^x is its own derivative!
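If you would like the computer to do the searching your students could do by hand with their tables, here is one possible sketch. It uses a simple bisection on the base a; the bracket [2, 3] and the step h = 10^-6 are my own arbitrary choices, so treat it as an illustration, not the only way.

```python
# Search for the base a that makes (a^h - 1)/h approximately 1 when h is tiny.
def limit_estimate(a, h=1e-6):
    return (a**h - 1) / h

lo, hi = 2.0, 3.0
for _ in range(50):                # 50 bisections pins the base down very tightly
    mid = (lo + hi) / 2
    if limit_estimate(mid) < 1:
        lo = mid                   # limit too small: the base must be bigger
    else:
        hi = mid                   # limit too big (or equal): the base must be smaller

print(f"a is approximately {mid:.6f}")   # prints a value very close to e = 2.71828...
```

With h this small the search lands within a few millionths of e; push h much smaller and round-off error in computing a^h - 1 starts to get in the way.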

And from this we can find the derivatives of all the other exponential functions. First, we define a new function (well, maybe not so new) which is the inverse of the function e^x, called ln(x), the natural logarithm of x. (For more on this see Logarithms.) Then a = e^{ln(a)} and a^x = (e^{ln(a)})^x = e^{(ln(a))x}. Then, using the Chain Rule, the derivative is

\frac{d}{dx}{{a}^{x}}={{e}^{(\ln (a))x}}\ln (a)={{\left( {{e}^{\ln (a)}} \right)}^{x}}\ln (a)

\frac{d}{dx}{{a}^{x}}={{a}^{x}}\ln \left( a \right)

Finally, going back to the first table above where a = 5, we find that the limit we found there, 1.609438…, is ln(5).
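A quick check of that connection (my own, with an arbitrary test point x = 2):

```python
import math

print(math.log(5))                     # 1.6094379..., the constant from the first table

# Compare the difference quotient for 5^x at a sample point with 5^x * ln(5).
x, h = 2.0, 1e-6
print((5**(x + h) - 5**x) / h)         # numerical estimate of the derivative at x = 2
print(5**x * math.log(5))              # 5^2 * ln(5), approximately 40.236
```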

For a video on this topic click here.


Revised 8-28-2018, 6-2-2019

Open or Closed?

About this time of year you find someone, hopefully one of your students, asking, “If I’m finding where a function is increasing, is the interval open or closed?”

Do you have an answer?

This is a good time to teach some things about definitions and theorems.

The place to start is to ask what it means for a function to be increasing. Here is the definition:

A function is increasing on an interval if, and only if, for all (any, every) pairs of numbers x1 < x2 in the interval, f(x1) < f(x2).

(For decreasing on an interval, the second inequality changes to f(x1) > f(x2). All of what follows applies to decreasing with obvious changes in the wording.)

  1. Notice that functions increase or decrease on intervals, not at individual points. We will come back to this in a minute.
  2. Numerically, this means that for every possible pair of points, the one with the larger x-value always produces a larger function value.
  3. Graphically, this means that as you move to the right along the graph, the graph is going up.
  4. Analytically, this means that we can prove the inequality in the definition.

For an example of this last point, consider the function f(x) = x^2. Let {{x}_{2}}={{x}_{1}}+h where h > 0. Then, in order for f\left( {{x}_{1}} \right)<f\left( {{x}_{2}} \right), it must be true that

{{x}_{1}}^{2}<{{\left( {{x}_{1}}+h \right)}^{2}}
0<{{\left( {{x}_{1}}+h \right)}^{2}}-{{x}_{1}}^{2}
0<{{x}_{1}}^{2}+2h{{x}_{1}}+{{h}^{2}}-{{x}_{1}}^{2}
0<h\left( 2{{x}_{1}}+h \right)

This can only be true for every h > 0 if {{x}_{1}}\ge 0. Thus, x^2 is increasing only where x\ge 0.
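A brute-force numerical check of the definition tells the same story. This is only a sketch with random pairs on an interval I picked; it is evidence, not a proof.

```python
import random

def f(x):
    return x**2

# Test the definition of increasing on [0, 10]:
# every pair x1 < x2 should give f(x1) < f(x2).
ok = True
for _ in range(10_000):
    x1, x2 = sorted(random.uniform(0, 10) for _ in range(2))
    if x1 < x2 and not f(x1) < f(x2):
        ok = False
        break

print("definition of increasing satisfied on [0, 10]?", ok)   # expect True
```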

Now, of course, we rarely, if ever, go to all that trouble. And it is even more trouble for a function that increases on several intervals.  The usual way of finding where a function is increasing is to look at its derivative.

Notice that the expression {{\left( {{x}_{1}}+h \right)}^{2}}-{{x}_{1}}^{2} looks a lot like the numerator of the original limit definition of the derivative of x^2 at x = x_1, namely \displaystyle {f}'\left( {{x}_{1}} \right)=\underset{h\to 0}{\mathop{\lim }}\,\frac{{{\left( {{x}_{1}}+h \right)}^{2}}-{{x}_{1}}^{2}}{h}. Where the function is increasing and h > 0, the numerator is positive, and so the derivative is positive as well. Turning this around, we have a theorem that says: if {f}'\left( x \right)>0 for all x in an interval, then the function is increasing on the interval. That makes it much easier to find where a function is increasing: we simply find where its derivative is positive.

There is only a slight problem in that the theorem does not say what happens if the derivative is zero somewhere on the interval. If that is the case, we must go back to the definition of increasing on an interval or use some other method. For example, the function x^3 is increasing everywhere, even though its derivative at the origin is zero.
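Here is one rough way to scan the sign of the derivative numerically and see both behaviors at once. It uses a central-difference estimate and sample points of my own choosing; it is a sketch, not a substitute for the theorem or the definition.

```python
def deriv(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Scan the sign of the estimated derivative at sample points from -2 to 2.
for name, f in [("x^2", lambda x: x**2), ("x^3", lambda x: x**3)]:
    signs = []
    for k in range(-4, 5):
        d = deriv(f, k / 2)
        signs.append("+" if d > 1e-9 else "-" if d < -1e-9 else "0")
    print(f"{name}: {' '.join(signs)}")
```

The x^3 row reports a 0 at the origin even though the function is increasing there, which is exactly the caveat above.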

Let’s consider another example. The function sin(x) is increasing on the interval \left[ -\tfrac{\pi }{2},\tfrac{\pi }{2} \right] (among others) and decreasing on \left[ \tfrac{\pi }{2},\tfrac{3\pi }{2} \right]. It bothers some that \tfrac{\pi }{2} is in both intervals and that the derivative of the function is zero at x = \tfrac{\pi }{2}. This is not a problem. The value \sin \left( \tfrac{\pi }{2} \right) is larger than all the other values in both intervals, so by the definition, and not the theorem, the intervals are correct.

It is generally true that if a function is continuous on the closed interval [a,b] and increasing on the open interval (a,b), then it must be increasing on the closed interval [a,b] as well. (There is a proof of this fact by Lou Talman; click here.)

Returning to the first point above: functions increase or decrease on intervals, not at points. You do find questions in books and on tests that ask, “Is the function increasing at x = a?” The best answer is to humor them and answer depending on the value of the derivative at that point. Since the derivative is a limit as h approaches zero, the function must be defined on some interval around x = a in which h is approaching zero. So answer according to the value of the derivative on that interval.

You can find more on this here.

Case Closed.

Why Radians?

Calculus is always done in radian measure. Degree (a right angle is 90 degrees) and gradian measure (a right angle is 100 grads) have their uses. Outside of the calculus they may be easier to use than radians. However, they are somewhat arbitrary. Why 90 or 100 for a right angle? Why not 10 or 217?

Radians make it possible to relate a linear measure and an angle measure. A unit circle is a circle whose radius is one unit. The one-unit radius is the same as one unit along the circumference. Wrap a number line counterclockwise around a unit circle starting with zero at (1, 0). The length of the arc subtended by the central angle becomes the radian measure of the angle.

This keeps all the important numbers, like the sine and cosine of the central angle, on the same scale. When you graph y = sin(x), one unit in the x-direction is the same as one unit in the y-direction. When graphing using degrees, the vertical scale must be stretched a lot to even see that the graph goes up and down. Try graphing y = sin(x) on a calculator in degree mode in a square window and you will see what I mean.

But the utility of radian measure is even more obvious in calculus. To develop the derivative of the sine function you first work with this inequality (At the request of a reader I have added an explanation of this inequality at the end of the post):

\displaystyle \frac{1}{2}\cos \left( \theta \right)\sin \left( \theta \right)\le \frac{1}{2}\theta \le \frac{1}{2}\tan \left( \theta \right)

From this inequality you determine that \displaystyle \underset{\theta \to 0}{\mathop{\lim }}\,\frac{\sin \left( \theta \right)}{\theta }=1

The middle term of the inequality is the area of a sector of a unit circle with a central angle of \theta radians. If you work in degrees, this sector’s area is \displaystyle \frac{\pi }{360}\theta  and you will find that \displaystyle \underset{\theta \to 0}{\mathop{\lim }}\,\frac{\sin \left( \theta \right)}{\theta }=\frac{\pi }{180}.

This limit is used to find the derivative of sin(x). Thus, with x in degrees, \displaystyle \frac{d}{dx}\sin \left( x \right)=\frac{\pi }{180}\cos \left( x \right). This means that the derivative or antiderivative of any trigonometric function has a \displaystyle \frac{\pi }{180} in it, getting in the way.

Who needs that?

Do your calculus in radians.
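If you want numerical evidence for that \pi /180, here is a quick comparison of my own (the angles are arbitrary): the same quotient computed with the angle measured in radians heads to 1, while measured in degrees it heads to π/180 ≈ 0.01745.

```python
import math

# Compare sin(theta)/theta with the angle measured in radians and in degrees.
for theta_deg in (10, 1, 0.1, 0.01):
    theta_rad = math.radians(theta_deg)
    in_radians = math.sin(theta_rad) / theta_rad   # heads toward 1
    in_degrees = math.sin(theta_rad) / theta_deg   # heads toward pi/180
    print(f"{theta_deg:>6} deg: radians -> {in_radians:.6f}   degrees -> {in_degrees:.6f}")

print("pi/180 =", math.pi / 180)                   # 0.017453...
```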


Revision December 7, 2014: The inequality above is derived this way. Consider the unit circle shown below.

[Figure: unit circle with central angle \theta at O, showing points A, B, C, and D]

1. The central angle is \theta  and the coordinates of A are \left( \cos (\theta ),\sin (\theta ) \right).

Then the area of triangle OAB is \frac{1}{2}\cos \left( \theta\right)\sin \left( \theta\right)

2. The area of sector OAD=\frac{\theta}{2\pi }\pi {{\left( 1 \right)}^{2}}=\frac{1}{2}\theta . The sector’s area is larger than the area of triangle OAB.

3. By similar triangles \displaystyle \frac{AB}{OB}=\frac{\sin \left( \theta\right)}{\cos \left( \theta\right)}=\tan \left( \theta\right)=\frac{CD}{1}=CD.

Then the area of \Delta OCD=\frac{1}{2}CD\cdot OD=\frac{1}{2}\tan \left( \theta \right). This is larger than the area of the sector, which establishes the inequality above.

Multiply the inequality by \displaystyle \frac{2}{\sin \left( \theta \right)} and take the reciprocal to obtain \displaystyle \frac{1}{\cos \left( \theta \right)}\ge \frac{\sin \left( \theta \right)}{\theta }\ge \cos \left( \theta \right).

Finally, take the limit of these expressions as \theta \to 0 and the limit \displaystyle \underset{\theta \to 0}{\mathop{\lim }}\,\frac{\sin \left( \theta \right)}{\theta }=1 is established by the squeeze theorem.
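If you would like a quick sanity check of the three areas (my own sketch; the angles are arbitrary), compare them for a few values of θ in (0, π/2):

```python
import math

# Compare the three areas from the figure for a few angles in (0, pi/2).
for theta in (1.0, 0.5, 0.1, 0.01):
    triangle_OAB = 0.5 * math.cos(theta) * math.sin(theta)
    sector_OAD = 0.5 * theta
    triangle_OCD = 0.5 * math.tan(theta)
    print(f"theta = {theta:<5}  {triangle_OAB:.6f} <= {sector_OAD:.6f} <= {triangle_OCD:.6f}")
```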

Difference Quotients II

The Symmetric Difference Quotient

In the last post we defined the Forward Difference Quotient (FDQ) and the Backward Difference Quotient (BDQ). The average of the FDQ and the BDQ is called the Symmetric Difference Quotient (SDQ):

\displaystyle \frac{f\left( x+h \right)-f\left( x-h \right)}{2h}

You may be forgiven if you think this might be a better expression to use to find the derivative. It has its advantages. In fact, this is the expression used in many calculators to compute the numerical value of the derivative at a point; in calculators it is called nDeriv. Usually, it works pretty well. But if you try to find the derivative of the absolute value of x at x = 0 it will tell you the derivative is 0, which is wrong. The absolute value function is not locally linear at the origin and has no derivative there.

What went wrong?  Read the expression above. The numerator is the difference of the function values at the same distance, h, on both sides of x. Since, for the absolute value function with x = 0, these values are the same, their difference is 0. The SDQ never looks at x = 0 and doesn’t realize there is no derivative there. Thus, the limit of the SDQ is not the derivative.
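Here is a small demonstration of my own (mirroring what an nDeriv-style routine does, not the calculator’s actual code) showing the SDQ reporting 0 for |x| at x = 0 while the one-sided quotients disagree:

```python
def sdq(f, x, h):
    """Symmetric difference quotient at x with step h."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = abs
for h in (0.1, 0.01, 0.001):
    forward = (f(0 + h) - f(0)) / h        # always +1 for |x| at 0
    backward = (f(0) - f(0 - h)) / h       # always -1 for |x| at 0
    print(f"h = {h}:  SDQ = {sdq(f, 0, h)},  FDQ = {forward},  BDQ = {backward}")
```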

This problem does not occur with the definition of derivative, since for that limit to exist the limits as h approaches zero from both sides must be equal. For the absolute value function the limit from the left is –1 and the limit from the right is +1 and therefore there is no limit and no derivative there.

Since most functions we will consider are differentiable, most of the time the SDQ and nDeriv are okay to use.

Seeing Difference Quotients Converge

This is an activity to see difference quotients graphically. Use a graphing calculator or a graphing program on a computer. One with a slider feature is better although I’ll also tell you how to use a calculator without this feature.

  1. Enter the function you want to consider as Y1 in your calculator or give it a name if you are using a computer. This is so later you can change the function without having to re-enter the next three equations.
  2. Enter the FDQ as Y2 using Y1 as the function. See Figure 1 below.
  3. Enter the BDQ as Y3 again using Y1 as the function.
  4. Enter the SDQ as Y4, again using Y1 as the function.
  5. Either set up a slider for h or go to the home screen and store a value for h. In the latter case you will have to return to the home screen and change the values.

Now graph all four functions. As you change the values of h with the slider or from the home screen, you should see three similar graphs (the difference quotients) along with the first function you entered. As h approaches zero, the three similar graphs should come together (converge) on the graph of the derivative. See Figures 2 and 3 below.

Change the first function. Some good functions to try are y = x^2 – 4x, y = x^3/3, y = sin(x), and don’t forget y = |x|. Try guessing the equation of the derivative.

Figure 2 shows y = x^3/3 in black with the three difference quotients; h is about 2.

Figure 3 shows the same graph with h almost 0; the three difference quotients, now almost on top of each other, are closing in on the derivative.
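If a grapher with a slider is not handy, the same experiment can be set up in a few lines of Python with matplotlib. This is my own sketch, not the Desmos file linked below; the function, window, and values of h are arbitrary choices. Change h and re-run to watch the three quotients close in on the derivative.

```python
import numpy as np
import matplotlib.pyplot as plt

# A Python version of the activity: graph f along with its three difference
# quotients, then shrink h and watch them converge on the derivative.
def f(x):
    return x**3 / 3

x = np.linspace(-3, 3, 400)
h = 0.5                                   # try 2, 1, 0.5, 0.1, ...

fdq = (f(x + h) - f(x)) / h               # forward difference quotient
bdq = (f(x) - f(x - h)) / h               # backward difference quotient
sdq = (f(x + h) - f(x - h)) / (2 * h)     # symmetric difference quotient

plt.plot(x, f(x), "k", label="f(x) = x^3/3")
plt.plot(x, fdq, label="FDQ")
plt.plot(x, bdq, label="BDQ")
plt.plot(x, sdq, label="SDQ")
plt.plot(x, x**2, "--", label="f'(x) = x^2")
plt.legend()
plt.show()
```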

Here is a link to a Desmos demonstration of the three difference quotients.

Difference Quotients I

Difference Quotients & Definition of the Derivative

In the second posting on Local Linearity II, we saw that what we were doing, finding the slope to a nearby point, looked like this symbolically:

\displaystyle \frac{f\left( x+h \right)-f\left( x \right)}{h}

This expression is called the Forward Difference Quotient (FDQ). It kind of assumes that h > 0.

There is also the Backwards Difference Quotient (BDQ):

\displaystyle \frac{f\left( x \right)-f\left( x-h \right)}{h}=\frac{f\left( x-h \right)-f\left( x \right)}{-h}

The BDQ also kind of assumes that h > 0. If h < 0, then the FDQ becomes the BDQ and vice versa, so these are really the same thing. The limit (if it exists) as h approaches zero is the slope of the tangent line at whatever x is, and this is important enough to have its own name. It is called the derivative of f at x, with the notation (among others) {f}'\left( x \right):

\displaystyle {f}'\left( x \right)=\underset{h\to 0}{\mathop{\lim }}\,\frac{f\left( x+h \right)-f\left( x \right)}{h}

Since h must approach 0 from both sides, this limit incorporates the FDQ and the BDQ in one expression.
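A short check of my own (the function x^3, the point, and the step are arbitrary) shows the FDQ with a negative step really is the BDQ with a positive step:

```python
def fdq(f, x, h):
    """Forward difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def bdq(f, x, h):
    """Backward difference quotient (f(x) - f(x - h)) / h."""
    return (f(x) - f(x - h)) / h

def f(x):
    return x**3

x, h = 1.0, 0.001
print(fdq(f, x, -h))   # the FDQ with a negative step...
print(bdq(f, x, h))    # ...is exactly the BDQ with a positive step
```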

To emphasize that h is a “change in x” this limit is often written

\displaystyle {f}'\left( x \right)=\underset{\Delta x\to 0}{\mathop{\lim }}\,\frac{f\left( x+\Delta x \right)-f\left( x \right)}{\Delta x}