Computing derivatives, as you saw in Chapter 4, rests on combination rules and working out some basic cases. For anti-derivatives the same is true, with "working out" replaced by "remembering". In other words, if you remember what the derivative of \(f\) is, then you know how to compute an anti-derivative of \(f'\text{.}\) This is how we computed anti-derivatives for polynomials, for example. The strategy is then: (1) list the derivatives we already know, organized in a way that allows us to query what function goes with a given derivative; and (2) give combining rules for anti-derivatives. This gives the following proposition. Note that in each case, remembering allows us to identify just one of the anti-derivatives; we trust you can compute the others from that.
we use an integral sign without upper and lower limits to denote the antiderivative: e.g., \(\displaystyle\int (3x^2 + 1) \, dx\) is equal to \(x^3 + x\text{,}\) plus any constant. We usually write this as \(x^3 +
x + C\text{.}\) By custom, we don’t change the variable. In previous sections, for example, we were careful to write \(\displaystyle\int_0^b (3x^2 + 1) \, dx\) as a function of \(b\text{,}\) namely \(b^3 + b\text{.}\) But when writing the indefinite integral we tend to write \(\displaystyle\int (3x^2 + 1) \, dx = x^3 + x + C\text{,}\) not \(b^3 + b + C\text{.}\) This is because it’s shorthand for
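This relationship between the indefinite integral and definite integrals can be sanity-checked numerically. The sketch below (ours, not part of the text) approximates \(\displaystyle\int_0^b (3x^2 + 1)\,dx\) with a midpoint Riemann sum and compares it with \(b^3 + b\); the constant \(C\) cancels in any definite integral, so it does not appear.

```python
# Numerical sanity check: the definite integral of 3x^2 + 1 from 0 to b
# should match F(b) - F(0) = b^3 + b, where F(x) = x^3 + x is the
# anti-derivative found above.

def f(x):
    return 3 * x**2 + 1

def riemann(f, a, b, n=100000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

b = 2.0
approx = riemann(f, 0.0, b)
exact = b**3 + b          # F(b) - F(0), with F(x) = x^3 + x
print(approx, exact)      # the two values agree closely
```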
The indefinite integral of the function \(x \mapsto 3x^2 + 1\) is any function \(x \mapsto x^3 + x + C\text{.}\)
Use Proposition 11.1 to compute this definite integral: \(\displaystyle\int_0^1 \frac{1}{1+x^2} \, dx\text{.}\) You will also need Proposition 10.27, which you should get used to using without even thinking of it as an extra step. Give an exact answer (not a decimal approximation).
The derivative of a sum or difference is the sum or difference of the derivatives. The derivative of \(c \cdot f\) is \(c\) times the derivative of \(f\) for any real constant \(c\text{.}\) This leads immediately to the following proposition.
Let \(F\) be an anti-derivative of \(f\) and \(G\) be an anti-derivative of \(g\text{.}\) Then \((F+G)' = F' + G' = f + g\text{,}\) therefore \(F+G\) is an anti-derivative of \(f+g\text{.}\)
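The computation in this proof can be checked numerically. In the sketch below (the particular choices of \(f\text{,}\) \(g\text{,}\) \(F\text{,}\) and \(G\) are ours, for illustration only), a symmetric difference quotient approximates the derivative of \(F+G\) and confirms it matches \(f+g\text{.}\)

```python
import math

# Illustrative check of the proof: take f(x) = 3x^2 with anti-derivative
# F(x) = x^3, and g(x) = cos(x) with anti-derivative G(x) = sin(x).
# The derivative of F + G should match f + g.

f = lambda x: 3 * x**2
g = math.cos
F_plus_G = lambda x: x**3 + math.sin(x)

def deriv(h_func, x, h=1e-6):
    """Symmetric difference quotient approximating h_func'(x)."""
    return (h_func(x + h) - h_func(x - h)) / (2 * h)

x = 0.7
print(deriv(F_plus_G, x), f(x) + g(x))   # the two values agree closely
```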
The word "anti-derivative" is a mouthful and so is the verb form "anti-differentiate". Because computing integrals comes down to anti-differentiation, common practice is to use the verb integrate in place of "anti-differentiate". We also call an anti-derivative an "integral". Proposition 11.1 and Proposition 11.2 allow us to compute some more integrals.
One of your classmates argues that Example 11.3 is wrong: \(\displaystyle\int a \, dx = ax + C\) and \(\displaystyle\int \sec^2 (x) \, dx = \tan x + C\text{,}\) therefore the answer should be \(ax + \tan x + 2C\text{.}\) What is going on?
Example 11.3 should worry you. Does it seem a bit contrived? The expression \(\frac{a \cos x + b / \cos x}{\cos x}\) just happens to simplify into two expressions covered by the list of cases in Proposition 11.1. If that seems like a piece of luck, it is. With only Proposition 11.1 and Proposition 11.2 you won’t get very far. The next two sections give two rules for combining integrands that will greatly increase your ability to integrate. Keep in mind, though, that in some sense you are still lucky whenever you can compute an analytic expression for an anti-derivative: many anti-derivatives have no nice formula.
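To illustrate the last point (an illustration of ours, not an example from the text): \(e^{-x^2}\) is a well-known integrand whose anti-derivative has no elementary formula, yet its definite integrals are easy to approximate numerically, for instance with Simpson's rule.

```python
import math

# e^(-x^2) has no elementary anti-derivative, so no table lookup will
# produce a formula for it; a definite integral of it can nevertheless
# be approximated numerically, here with composite Simpson's rule.

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

value = simpson(lambda x: math.exp(-x**2), 0.0, 1.0)
print(value)   # about 0.7468; compare math.sqrt(math.pi) / 2 * math.erf(1.0)
```

The exact value is \(\frac{\sqrt{\pi}}{2}\,\mathrm{erf}(1)\text{,}\) expressed with the error function, which is itself *defined* as this integral rather than built from elementary functions.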