Computing a limit by verifying the formal definition is a real pain. There is computational apparatus that allows us to compute limits of many functions once we know limits of a few simple ones. One approach we have seen in textbooks is to give a list of rules that work. It looks something like this.
For all of the "limit rules", it is absolutely essential that every limit involved actually exists. And remember that a function having an infinite limit (i.e., \(\displaystyle \lim_{x\to a}f(x)=\pm\infty\)) is a flavor of the limit not existing. So none of these rules works if even one of the limits involved is infinite.
If \(f \) and \(g \) are functions and \(a,K,L \) are real numbers with \(\displaystyle\lim_{x \to a} f(x) = K \) and \(\displaystyle\lim_{x \to a} g(x) = L \text{,}\) then
Suppose \(f \) is a polynomial: \(f(x) = b_n x^n + \cdots + b_1 x + b_0 \text{.}\) What is \(\displaystyle\lim_{x \to a} f(x)\text{?}\) We hope you think this is a really boring example. Of course, the polynomial is continuous (picture in your mind the graph of a polynomial) and so we expect \(\displaystyle\lim_{x \to a} f(x) = f(a) \text{.}\) But why?
First note that we can evaluate the limit at \(a\) of the monomial \(x^k \) as \(a^k \) using the second conclusion of Proposition 2.28. We can evaluate the limit at \(a\) of each monomial \(b_k x^k \) as \(b_k a^k \) by applying the first conclusion of Proposition 2.28 with \(c = b_k \) and \(f(x) = x^k \text{.}\)
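If you like to experiment, here is a quick numerical sanity check of this conclusion (a Python sketch, not part of the formal development; the particular polynomial and point are made-up examples):

```python
# Sampling a polynomial near a point to see that its limit there is just
# its value there, as the limit rules predict.  The polynomial f and the
# point a below are illustrative choices, not from the text.

def f(x):
    return 3 * x**2 - 2 * x + 1  # a sample polynomial

a = 2.0
approximations = [f(a + h) for h in (0.1, 0.01, 0.001, -0.001)]
# As h shrinks, the samples close in on f(a) = 3*4 - 4 + 1 = 9.
```

Of course, a few samples prove nothing; the point is that the limit laws guarantee what the samples suggest.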
So that Proposition 2.28 and Proposition 2.29 don’t look like arbitrary rules from out of nowhere, you should realize they can be proved, and in fact follow from one basic theorem.

If the function \(f \) has a limit \(L \) at \(x=a \) and the function \(H \) is continuous at \(L \) then \(H \circ f \) will have the limit \(H(L) \) at \(x=a \text{.}\) Formally,
\begin{equation*}
\displaystyle\lim_{x \to a} f(x) = L \text{ implies } \lim_{x \to a} H(f(x)) = H(L)
\text{ provided } H \text{ is continuous at } L \, .
\end{equation*}
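Here is a numerical illustration of this principle (a Python sketch with made-up choices of \(f\) and \(H\), not an example from the text):

```python
import math

# If f(x) -> L as x -> a and H is continuous at L, then H(f(x)) -> H(L).
# Illustrative choices: f(x) = (x**2 - 1)/(x - 1) has limit L = 2 at a = 1
# even though f(1) itself is undefined, and H = exp is continuous there.

def f(x):
    return (x**2 - 1) / (x - 1)  # equals x + 1 away from x = 1

a = 1.0
samples = [math.exp(f(a + h)) for h in (0.01, 0.001, -0.001)]
# The samples close in on H(L) = e**2, roughly 7.389.
```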
Why do the Proposition 2.28 and Proposition 2.29 follow from this principle? Let \(H(x) \) be the continuous function \(c x \text{.}\) Then \(H \circ f \) is \(c f(x) \) and we recover the first conclusion of Proposition 2.28. Setting \(H(x) := x^c \) recovers the second conclusion.
A related fact about limits is computation by change of variables. Suppose \(g\) is a function such that \(\displaystyle\lim_{x \to 0} g(x) = {-1} \text{.}\) What is \(\displaystyle\lim_{x \to 0} g(2x)\text{?}\)
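One way to build intuition is to try a concrete \(g\) numerically (a Python sketch; the particular \(g\) below is a made-up stand-in with the required limit at 0, not from the text):

```python
import math

# A stand-in g with limit -1 at 0: g(x) = cos(x) - 2 is continuous, so its
# limit at 0 is g(0) = -1.  Substituting u = 2x, u -> 0 exactly when
# x -> 0, so g(2x) should approach the same value -1.

def g(x):
    return math.cos(x) - 2

samples = [g(2 * x) for x in (0.1, 0.01, -0.01)]
# The samples hug -1, matching the change-of-variables reasoning.
```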
Subsection 2.4.1 The Most Important Fact About Limits
Proposition 2.31 might lead you to think that \(\displaystyle\lim_{x\to a}f(x)\) just means \(f(a)\text{.}\) Nothing could be further from the truth! But let’s interrogate this issue a little bit.
First: if \(\displaystyle\lim_{x\to a}f(x)\) meant "evaluate \(f\) at \(a\)", we wouldn’t need a whole chapter on limits. Plugging in is easy and straightforward (that’s probably why lots of students want limits to be the same as plugging in). For that matter, if limits and evaluation were the same thing, we wouldn’t need two different terms!
yields the indeterminate form \(\frac{0}{0}\) if we try to evaluate at \(t=3\text{.}\) The power of limits is that even though this function is not well-behaved at \(t=3\), it is well-behaved near \(t=3\).
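You can see this behavior numerically with any function of this type (a Python sketch; the quotient below is a stand-in with the same \(\frac{0}{0}\) behavior at \(t=3\), not necessarily the text's example):

```python
# A stand-in function giving the form 0/0 at t = 3: h(t) = (t**2 - 9)/(t - 3).
# Plugging in t = 3 fails, but near t = 3 the function equals t + 3 and is
# perfectly well-behaved.

def h(t):
    return (t**2 - 9) / (t - 3)

near_three = [h(3 + d) for d in (0.01, 0.001, -0.001)]
# The samples hug 6 even though h(3) itself is undefined.
```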
Generally speaking, you get an indeterminate form when two parts of the limitand are competing. The form \(\frac{\infty}{\infty}\) means the numerator and the denominator are both getting very large -- if the numerator gets larger faster then the limit will be more than 1; if the denominator gets larger faster then the limit will be less than 1. The fact that the initial form of the limit is \(\frac{\infty}{\infty}\) tells us nothing. Similarly, the form \(0\cdot \infty\) can be viewed as a competition between how fast the first factor is getting small and how fast the second factor is growing. The form itself only tells us this competition is happening -- not which part of the limitand wins it.
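This "competition" is easy to watch numerically (a Python sketch with three made-up quotients, each of the form \(\frac{\infty}{\infty}\), with three different outcomes):

```python
# Three quotients whose numerator and denominator both blow up as x grows.
# The initial form infinity/infinity is the same; the outcomes are not.
big = [10.0**k for k in range(2, 6)]

top_wins    = [x**2 / x for x in big]     # numerator wins: quotient blows up
bottom_wins = [x / x**2 for x in big]     # denominator wins: quotient -> 0
tie         = [(2 * x) / x for x in big]  # evenly matched: quotient -> 2
```

Three limits with the same initial form, three different answers: that is what "the form tells us nothing" means.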
The algebraic rules for combining limits are so nice that we’d like to extend them to handle infinite "limits". But this means we have to get into the arithmetic of infinity. The idea is to adjoin to the real numbers the "numbers" \(+\infty, -\infty\) and UND (for undefined). These are the possible limits a function can have. The goal is to create combining rules for limits under the basic operations: addition, subtraction, multiplication, division and taking powers. One rule is that once something is undefined, it stays that way. A limit that DNE could turn out to be \(\pm \infty\) rather than UND, but once a limit gets classified as UND, nothing can be inferred about what you get when you add it to something, multiply it by something, and so on. Thus UND + 3, UND \(- \infty\text{,}\) \(-\infty \cdot {}\)UND and UND / UND are all undefined.
Aside
For the remaining situations, the definition is as follows.
If \(a, b\) and \(L\) are real numbers or \(\pm \infty\text{,}\) and \(\odot\) is an operation, we say \(a \odot b = L\) if for every \(f\) and \(g\) such that \(\displaystyle\lim_{x \to 0} f(x) = a\) and \(\displaystyle\lim_{x \to 0} g(x) = b\text{,}\) it is always true that \(\displaystyle\lim_{x \to 0} f(x) \odot g(x) = L\text{.}\)
The definition allows us to check if any guess is right. Let’s guess \(\infty + 3 = \infty\text{.}\) To check this, we check whether \(\displaystyle\lim_{x \to a} f(x) + g(x)\) is always equal to \(\infty\) when \(f\) and \(g\) are functions for which \(\displaystyle\lim_{x \to a} f(x) = \infty\) and \(\displaystyle\lim_{x \to a} g(x) = 3\text{.}\)
Indeed, if \(f(x)\) gets larger than any specified number \(M\) when \(x\) gets close to \(a\text{,}\) and \(g(x)\) gets close to 3, then \(f(x) + g(x)\) will get larger than \(M+3\text{.}\) In other words, the sum eventually gets larger than any given number, which is exactly the definition of the limit being \(+\infty\text{.}\)
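A numerical sketch of this check (in Python, with made-up choices of \(f\) and \(g\)):

```python
# Checking the guess infinity + 3 = infinity numerically: f blows up at 0
# and g -> 3 there, so the sum should exceed any bound.  The particular
# f and g below are illustrative choices.

def f(x):
    return 1 / x**2   # -> +infinity as x -> 0

def g(x):
    return 3 + x      # -> 3 as x -> 0

sums = [f(x) + g(x) for x in (0.1, 0.01, 0.001)]
# The successive sums blow past any M + 3 you like.
```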
Using Definition 2.33 as a guide, say what you think the value (possibly \(\pm \infty\) or UND) is for each of these three expressions. You don’t need a proof, just a guess.
Subsection 2.4.3 Two algebraic techniques for computing limits
This course is more about using limits than it is about computational technique, but you should at least see some of the standard techniques for cases that go beyond what’s in Proposition 2.28 and Proposition 2.29.
Suppose you need to evaluate \(\displaystyle\lim_{x \to a} f(x) / g(x) \text{.}\) If both \(f \) and \(g \) have nonzero limits at \(a \text{,}\) say \(L \) and \(M \text{,}\) then Proposition 2.29 tells you
In fact if \(L=0 \) but \(M \neq 0 \text{,}\) this still works. If \(M=0 \) but \(L\neq 0 \text{,}\) then the question of evaluating \(\displaystyle\lim_{x \to a} f(x) / g(x) \) also has an easy answer.
The remaining case, when \(L = M = 0 \text{,}\) can be enigmatic. Calculus provides one solution you will see in a few weeks (L’Hôpital’s Rule), but you can often solve this with algebra. If you can factor out \((x-a) \) from both \(f \) and \(g \text{,}\) you may get a simpler expression for which at least one of the functions has a nonzero limit.
Both numerator and denominator are continuous functions with values of zero (hence limits of zero) at \(x=5\text{.}\) That suggests cancelling a factor of \(x-5 \) from top and bottom, resulting in \(\lim_{x \to 5} \frac{x + 5}{x} \text{.}\) Both numerator and denominator are now continuous with nonzero limits, so we can just evaluate and get \(10 / 5 \text{;}\) the answer is 2.
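A numerical check of this computation (a Python sketch; the quotient below is reconstructed to be consistent with the simplified form \(\frac{x+5}{x}\)):

```python
# A 0/0 quotient at x = 5 whose common factor x - 5 cancels, leaving
# (x + 5)/x with limit 10/5 = 2.  Sampling near 5 agrees.

def q(x):
    return (x**2 - 25) / (x * (x - 5))

near_five = [q(5 + d) for d in (0.01, 0.001, -0.001)]
# The samples hug 2 even though q(5) itself is the form 0/0.
```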
What is \(\displaystyle\lim_{x \to 0} \frac{\sqrt{x+1} - 1}{x}\text{?}\) Multiplying and dividing by the so-called conjugate expression, where a sum is turned into a difference or vice versa, gives
This algebra trick occurs so commonly throughout mathematics that you should think about conjugate radicals whenever you see an expression with a square root added to or subtracted from something!
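A numerical check of the conjugate computation (a Python sketch):

```python
import math

# Multiplying top and bottom of (sqrt(x+1) - 1)/x by the conjugate
# sqrt(x+1) + 1 turns it into 1/(sqrt(x+1) + 1), whose limit at 0 is 1/2.
# Sampling the original 0/0 expression near 0 agrees with that answer.

def r(x):
    return (math.sqrt(x + 1) - 1) / x

samples = [r(x) for x in (0.01, 0.001, -0.001)]
# The samples hug 1/2, matching the conjugate computation.
```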
Further tricks can wait until you’ve learned some more background. Although limits are needed to define derivatives, you can then use derivatives to evaluate more limits (L’Hôpital’s Rule). Similarly, limits are used to define orders of growth, which can then be used to evaluate more limits.
It’s not at all obvious that these are indeterminate forms. Decide what values you think would be reasonable for the expressions \(0^0,\infty^0,1^\infty\text{.}\)
So: what should we do here? Because the problem seems to be coming from a power, we need an operation which converts powers to something else, something friendlier to algebra. Luckily, we have one: the logarithm. The function \(t\mapsto \ln(t)\) is continuous. That means we can appeal to Theorem 2.32 to get this little fact:
Proposition 2.39.
\(\displaystyle \lim_{x\to c}\ln(f(x))=\ln\left(\lim_{x\to c}f(x)\right)\text{;}\) and if
To use this result, we have to convert our limit (by taking a log), do a limit computation, and then convert back out of log-world by exponentiating, since \(\exp\) undoes \(\ln\text{.}\)
Notice that the step from the problem to the first line is not claiming they are equal! We’re computing a limit related to the original limit, but we haven’t found the original limit yet. We know that:
For now, we’ll leave it there -- the log trick converts a problematic power to a problematic product. When we learn L’Hôpital’s Rule, those problematic products will become easier to handle.
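To see the whole trick in action numerically, here is a Python sketch on a standard \(0^0\) example (an example chosen here for illustration, not necessarily the one in the text):

```python
import math

# The log trick on the 0**0 form x**x as x -> 0 from the right.  Taking
# logs converts the problematic power into the problematic product
# x * ln(x), a 0 * (-infinity) form whose limit turns out to be 0.
# Exponentiating back out of log-world gives e**0 = 1 for the original limit.

xs = [0.1, 0.01, 0.001]
log_values   = [x * math.log(x) for x in xs]  # the product: heads to 0
power_values = [x**x for x in xs]             # the original: heads to 1
```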