
Section 8.4 Power Series

Notice that we formulated a lot of Section 8.3 for a general normed space. So we can apply it to the normed space \(\mathcal{B}([a,b])\) of bounded real functions on an interval, equipped with the sup norm.
Why on earth would we do that?! I can hear you asking. Consider what we've seen so far:
  • Any continuous function on a closed interval \([a,b]\) can be uniformly approximated by a polynomial.
  • Any \(\mathcal{C}^{k+1}\) function can be approximated by a polynomial of degree \(k\text{,}\) with remainder controlled by the \((k+1)\)st derivative.
  • Polynomials sure are nice.
And, of course, polynomials are finite sums of monomials. So we might hope to understand the limits of such sums, i.e., series whose terms are monomials.

Definition 8.4.1.

The power series with coefficients \((a_k)_{k\in\mathbb{N}}\) and center \(x_0\) is the series
\begin{equation*} \displaystyle \sum_{k=0}^\infty a_k(x-x_0)^k\ \ \ . \end{equation*}
Notice that, for any interval \([a,b]\text{,}\) the power series is a series in \(\mathcal{B}([a,b])\text{.}\)
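For instance, taking \(x_0=0\) and every coefficient \(a_k=1\) gives a familiar friend, the geometric series:
\begin{equation*} \displaystyle \sum_{k=0}^\infty x^k\ \ \ . \end{equation*}
For \(\lvert x\rvert\lt 1\) it converges to \(\frac{1}{1-x}\text{,}\) and for \(\lvert x\rvert\geq 1\) it diverges, since its terms don't tend to \(0\text{.}\)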

Theorem 8.4.2.

Let \(\alpha=\limsup \sqrt[k]{\lvert a_k\rvert}\text{.}\) Then the power series \(\displaystyle\sum_{k=0}^\infty a_k(x-x_0)^k\) converges in \(\mathcal{B}([a,b])\) for any closed interval \([a,b]\) contained in \(\left(x_0-\frac{1}{\alpha},x_0+\frac{1}{\alpha}\right)\text{,}\) and diverges at every \(x\) with \(\lvert x-x_0\rvert\gt \frac{1}{\alpha}\text{.}\)

Proof.

Definition 8.4.3.

The number \(\frac{1}{\alpha}\) in Theorem 8.4.2 is called the radius of convergence of the series \(\displaystyle \sum_{k=0}^\infty a_k(x-x_0)^k\text{.}\)
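As a quick sanity check: for the geometric series \(\displaystyle\sum_{k=0}^\infty x^k\text{,}\) every \(a_k=1\text{,}\) so \(\alpha=\limsup\sqrt[k]{1}=1\) and the radius of convergence is \(1\text{,}\) matching what we already know. For \(\displaystyle\sum_{k=0}^\infty \frac{x^k}{k^k}\text{,}\) we get \(\sqrt[k]{k^{-k}}=\frac{1}{k}\to 0\text{,}\) so \(\alpha=0\) and (with the convention \(\frac{1}{0}=\infty\)) the series converges for every \(x\text{.}\)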

Definition 8.4.4.

The derived series of the power series \(\displaystyle\sum_{k=0}^\infty a_k(x-x_0)^k\) is the series \(\displaystyle \sum_{k=0}^\infty (k+1)a_{k+1}(x-x_0)^k\text{.}\)

Checkpoint 8.4.5.

For the series \(1-x+\frac{1}{2}x^2-\frac{1}{3}x^3+\cdots\text{,}\) compute the derived series.
Why do we use the name "derived series"?

Proposition 8.4.6.

A power series and its derived series have the same radius of convergence.

Proof.

Consider the limit
\begin{equation*} \displaystyle \limsup \sqrt[k]{\lvert (k+1)a_{k+1}\rvert}=\limsup \sqrt[k]{k+1}\,\sqrt[k]{\lvert a_{k+1}\rvert}\ \ \ . \end{equation*}

The factor \(\sqrt[k]{k+1}\) is the only new ingredient, and we claim it tends to \(1\text{.}\)

We'll use the so-called log trick. Technically speaking, we haven't defined logarithms yet; let's fix that:
\begin{equation*} \log x = \int_{[1,x]}\frac{1}{t}\ dt \end{equation*}
This function is continuous (that's a fact we proved in class), and a quick argument (sketched below) shows that it satisfies two nice properties:
  • \(\displaystyle \log(x\cdot y)=\log x + \log y\)
  • \(\displaystyle \log(x^p)=p\log x\)
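For the first property (say for \(x,y\geq 1\)), the quick argument can run as follows: split the integral and substitute \(t=xs\) in the second piece, so that
\begin{equation*} \log(xy)=\int_{[1,x]}\frac{1}{t}\ dt+\int_{[x,xy]}\frac{1}{t}\ dt=\log x+\int_{[1,y]}\frac{1}{s}\ ds=\log x+\log y\ \ \ . \end{equation*}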
The second property implies that for any \(\epsilon\gt 0\text{,}\) there is \(K\) so that \(x\gt K\) guarantees \(\log x\lt \epsilon x\text{.}\)
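One way to see this: from the definition, \(\log x=\int_{[1,x]}\frac{1}{t}\ dt\leq x-1\) for \(x\geq 1\text{,}\) since the integrand is at most \(1\) there. Then the second property gives
\begin{equation*} \log x=2\log\sqrt{x}\leq 2\left(\sqrt{x}-1\right)\lt 2\sqrt{x}\text{,} \end{equation*}
so \(\log x\lt\epsilon x\) as soon as \(x\gt\frac{4}{\epsilon^2}\text{.}\)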
The log trick is to compute a limit by taking its log, exploiting these algebraic properties, then delogging after the limit is obtained. Here, that works out as:
\begin{align*} \lim \log\left(\sqrt[k]{k+1}\right)&=\lim \frac{1}{k}\log(k+1)\\ &=0 \end{align*}
by the observation that \(\log x\) grows much more slowly than \(x\text{.}\) Therefore, \(\sqrt[k]{k+1}\) tends to a number \(L\) with \(\log L=0\text{.}\) The only such number is \(L=1\text{.}\)
Now this means that, given any \(\epsilon\gt 0\text{,}\) for large enough \(k\text{,}\) \(\sqrt[k]{\lvert a_{k+1}\rvert}-\epsilon\lt \sqrt[k]{\lvert (k+1)a_{k+1}\rvert}\lt \sqrt[k]{\lvert a_{k+1}\rvert}+\epsilon\text{,}\) so \(\limsup \sqrt[k]{\lvert (k+1)a_{k+1}\rvert}= \limsup \sqrt[k]{\lvert a_{k+1}\rvert}\text{.}\)
Now we need to compute \(\limsup \sqrt[k]{\lvert a_{k+1}\rvert}\text{.}\) First, observe that we're very nearly there, because
\begin{equation*} \limsup \sqrt[k]{\lvert a_{k+1}\rvert}= \limsup \left(\sqrt[k+1]{\lvert a_{k+1}\rvert}\right)^{\frac{k+1}{k}}\ \ \ . \end{equation*}
The only nuisance is the exponent \(\frac{k+1}{k}\text{.}\) Taking logs once more,
\begin{equation*} \log\left(\sqrt[k+1]{\lvert a_{k+1}\rvert}\right)^{\frac{k+1}{k}}=\frac{k+1}{k} \log\sqrt[k+1]{\lvert a_{k+1}\rvert}\text{,} \end{equation*}
and since \(\frac{k+1}{k}\to 1\text{,}\) the exponent does not change the limit superior. Delogging, we conclude that
\begin{equation*} \limsup \sqrt[k]{\lvert (k+1)a_{k+1}\rvert}= \limsup \sqrt[k]{\lvert a_{k+1}\rvert}=\limsup \sqrt[k+1]{\lvert a_{k+1}\rvert}\ \ \ . \end{equation*}
The right-hand side is, after reindexing, exactly \(\limsup \sqrt[k]{\lvert a_{k}\rvert}\text{,}\) so the two series have the same radius of convergence.
Proposition 8.4.6 ends up being quite remarkable. To see this, let's use \(S_N(x)=\displaystyle\sum_{k=0}^N a_k(x-x_0)^k\) to denote the sequence of functions whose limit is \(S(x)=\displaystyle\sum_{k=0}^\infty a_k(x-x_0)^k\text{.}\) Proposition 8.4.6 says that on any closed interval where \(S_N\to S\) uniformly, the sequence of derivatives \(S_N'\) also converges uniformly, say to something we'll call \(D\text{.}\) But then \(S'=D\text{;}\) in particular, \(S\) is differentiable!
It gets even better, though: we could consider the sequence \(S_N''\text{,}\) which again will have the same radius of convergence, hence converge to some \(D_2\) with \(S''=D_2\text{.}\) So \(S\) is twice-differentiable. And so on. Let's record this in the form of a theorem.
An easier statement is this: inside its radius of convergence, a power series is \(\mathcal{C}^\infty\text{,}\) and its derivatives are obtained by differentiating term by term.
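For instance, the series \(\displaystyle\sum_{k=0}^\infty\frac{x^k}{k!}\) has infinite radius of convergence, and its derived series is
\begin{equation*} \displaystyle\sum_{k=0}^\infty (k+1)\frac{1}{(k+1)!}x^k=\sum_{k=0}^\infty\frac{x^k}{k!}\ \ \ , \end{equation*}
i.e., the series itself. So this power series defines a \(\mathcal{C}^\infty\) function on all of \(\mathbb{R}\) that is equal to its own derivative.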