Assuming a solution of a differential equation is a power series, we can perhaps use a method reminiscent of undetermined coefficients—we try to solve for the numbers \(a_k\text{.}\) Before we carry out this process, we review some results and concepts about power series.
A power series about \(x_0\) is a series of the form
\begin{equation}
\sum_{k=0}^\infty a_k {(x-x_0)}^k = a_0 + a_1 (x-x_0) + a_2 {(x-x_0)}^2 + \cdots ,\tag{7.1}
\end{equation}
where \(a_0, a_1, a_2, \ldots\) are constants. If for some \(x\) the limit of partial sums
\begin{equation}
\lim_{n\to\infty} \sum_{k=0}^n a_k {(x-x_0)}^k
\end{equation}
exists, we say the series (7.1) converges at \(x\text{.}\) At \(x=x_0\text{,}\) the series always converges to \(a_0\text{.}\) When (7.1) converges at any other \(x \not= x_0\text{,}\) we say (7.1) is a convergent power series, and we write
\begin{equation}
\sum_{k=0}^\infty a_k {(x-x_0)}^k = \lim_{n\to\infty} \sum_{k=0}^n a_k {(x-x_0)}^k .
\end{equation}
Example 7.1.1. The series
\begin{equation}
\sum_{k=0}^\infty \frac{1}{k!} x^k
\end{equation}
is convergent for any \(x\text{.}\) Recall that \(k! = 1\cdot 2\cdot 3 \cdots k\) is the factorial. By convention we define \(0! = 1\text{.}\) You may recall that this series converges to \(e^x\text{.}\)
We say the series (7.1) converges absolutely at \(x\) whenever the limit
\begin{equation}
\lim_{n\to\infty} \sum_{k=0}^n \lvert a_k \rvert \, {\lvert x - x_0 \rvert}^k
\end{equation}
exists. That is, the series \(\sum_{k=0}^\infty \lvert a_k \rvert \, {\lvert x-x_0 \rvert}^k\) is convergent. If (7.1) converges absolutely at \(x\text{,}\) then it converges at \(x\text{.}\) However, the opposite implication is not true.
Example 7.1.2. The series
\begin{equation}
\sum_{k=1}^\infty \frac{1}{k} x^k
\end{equation}
converges absolutely for all \(x\) in the interval \((-1,1)\text{.}\) It converges at \(x=-1\text{,}\) as \(\sum_{k=1}^\infty \frac{{(-1)}^k}{k}\) converges (conditionally) by the alternating series test. The power series does not converge absolutely at \(x=-1\text{,}\) because \(\sum_{k=1}^\infty \frac{1}{k}\) does not converge. The series diverges at \(x=1\text{.}\)
If a power series converges absolutely at some \(x_1\text{,}\) then for all \(x\) such that \(\lvert x - x_0 \rvert \leq \lvert x_1 - x_0 \rvert\) (that is, \(x\) is at least as close as \(x_1\) to \(x_0\)), we have \(\lvert a_k {(x-x_0)}^k \rvert \leq \lvert a_k {(x_1-x_0)}^k \rvert\) for all \(k\text{.}\) As the numbers \(\lvert a_k {(x_1-x_0)}^k \rvert\) sum to some finite limit, the smaller nonnegative numbers \(\lvert a_k {(x-x_0)}^k \rvert\) must also sum to a finite limit. Hence, the series must converge absolutely at \(x\text{.}\)
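In other words, the partial sums are bounded by a convergent series:
\begin{equation}
\sum_{k=0}^n \lvert a_k \rvert \, {\lvert x - x_0 \rvert}^k \leq \sum_{k=0}^\infty \lvert a_k \rvert \, {\lvert x_1 - x_0 \rvert}^k < \infty .
\end{equation}
As \(n\) grows, the partial sums on the left never decrease and stay bounded above, so they converge.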
For a power series (7.1), there exists a number \(\rho\) (we allow \(\rho=\infty\)) called the radius of convergence such that the series converges absolutely on the interval \((x_0-\rho,x_0+\rho)\) and diverges for \(x < x_0-\rho\) and \(x > x_0+\rho\text{.}\) We write \(\rho=\infty\) if the series converges for all \(x\text{.}\)
See Figure 7.1. In Example 7.1.1, the radius of convergence is \(\rho = \infty\) as the series converges everywhere. In Example 7.1.2, the radius of convergence is \(\rho=1\text{.}\) We note that \(\rho = 0\) is another way of saying that the series is divergent.
A convenient way to find the radius of convergence is the ratio test. Suppose \(\sum_{k=0}^\infty c_k\) is a series such that the limit \(L = \lim_{k\to\infty} \bigl \lvert \frac{c_{k+1}}{c_k} \bigr \rvert\) exists. Then the series converges absolutely if \(L < 1\) and diverges if \(L > 1\text{.}\) We apply this test to (7.1) with \(c_k = a_k {(x-x_0)}^k\text{.}\) If the limit \(A = \lim_{k\to\infty} \bigl \lvert \frac{a_{k+1}}{a_k} \bigr \rvert\) exists, then
\begin{equation}
L = \lim_{k\to\infty} \left \lvert \frac{a_{k+1} {(x-x_0)}^{k+1}}{a_k {(x-x_0)}^k} \right \rvert = A \lvert x - x_0 \rvert .
\end{equation}
Hence, the series (7.1) converges absolutely if \(1 > L = A \lvert x - x_0 \rvert\text{.}\) If \(A > 0\text{,}\) then the series converges absolutely if \(\lvert x - x_0 \rvert < \nicefrac{1}{A}\text{,}\) and diverges if \(\lvert x - x_0 \rvert > \nicefrac{1}{A}\text{.}\) That is, the radius of convergence is \(\nicefrac{1}{A}\text{.}\) If \(A = 0\text{,}\) then the series always converges.
A similar test is the root test. Suppose \(\sum_{k=0}^\infty c_k\) is a series such that the limit
\begin{equation}
L = \lim_{k\to\infty} \sqrt[k]{\lvert c_k \rvert}
\end{equation}
exists. Then \(\sum_{k=0}^\infty c_k\) converges absolutely if \(L < 1\) and diverges if \(L > 1\text{.}\) We can use the same calculation as above to find \(A\text{.}\) Let us summarize.
Consider a power series (7.1), and suppose that one of the limits
\begin{equation}
A =
\lim_{k\to\infty}
\left \lvert
\frac{a_{k+1}}{a_k}
\right \rvert
\qquad \text{or} \qquad
A =
\lim_{k\to\infty} \sqrt[k]{\lvert a_k \rvert}
\end{equation}
exists. If \(A = 0\text{,}\) then the radius of convergence of the series is \(\infty\text{.}\) Otherwise, the radius of convergence is \(\nicefrac{1}{A}\text{.}\) Moreover, if either limit for \(A\) diverges to \(\infty\text{,}\) then the series is divergent.
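For instance, for the series \(\sum_{k=0}^\infty \frac{1}{k!} x^k\) of Example 7.1.1,
\begin{equation}
A = \lim_{k\to\infty} \left \lvert \frac{1/(k+1)!}{1/k!} \right \rvert = \lim_{k\to\infty} \frac{k!}{(k+1)!} = \lim_{k\to\infty} \frac{1}{k+1} = 0 ,
\end{equation}
so the radius of convergence is \(\infty\text{,}\) in agreement with the convergence for all \(x\) claimed there.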
For example, consider the series \(\sum_{k=0}^\infty \frac{1}{2^k} {(x-1)}^k\text{.}\) Here \(x_0 = 1\) and \(a_k = \nicefrac{1}{2^k}\text{,}\) so
\begin{equation}
A = \lim_{k\to\infty} \left \lvert \frac{1/2^{k+1}}{1/2^k} \right \rvert = \frac{1}{2} .
\end{equation}
Therefore, the radius of convergence is \(2\text{,}\) and the series converges absolutely on the interval \((-1,3)\text{.}\) We could just as well have used the root test:
\begin{equation}
A = \lim_{k\to\infty} \sqrt[k]{\lvert a_k \rvert} = \lim_{k\to\infty} \sqrt[k]{\frac{1}{2^k}} = \frac{1}{2} .
\end{equation}
The root and the ratio tests as given above do not always apply. That is, the limit of \(\bigl \lvert \frac{a_{k+1}}{a_k} \bigr \rvert\) or \(\sqrt[k]{\lvert a_k \rvert}\) might not exist. Sometimes the test must be applied to the series itself, to find the \(L\) rather than the \(A\text{.}\) For example, for the series \(\sum_{k=1}^\infty 2^k x^{2k}\text{,}\) half the coefficients are zero and the limit defining \(A\) does not exist; we must instead work with the limit of \(\sqrt[k]{\lvert 2^k x^{2k} \rvert}\text{,}\) which equals \(L = 2 {\lvert x \rvert}^2\text{.}\) This \(L\) is less than \(1\) when \(\lvert x \rvert < \frac{1}{\sqrt{2}}\text{,}\) and so \(\frac{1}{\sqrt{2}}\) is the radius of convergence. Other ways exist to find the radius, but the ratio and root tests cover many series arising in practice.
At the endpoints, \(x = x_0-\rho\) and \(x = x_0+\rho\text{,}\) the series may or may not converge, and the tests above say nothing about convergence there. Sometimes convergence at the endpoints is important, but for our purposes, we will not worry about it much.
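To illustrate the possibilities at the endpoints, consider the three series
\begin{equation}
\sum_{k=1}^\infty x^k , \qquad \sum_{k=1}^\infty \frac{1}{k} x^k , \qquad \sum_{k=1}^\infty \frac{1}{k^2} x^k .
\end{equation}
All three have radius of convergence \(1\text{.}\) The first converges at neither endpoint, the second (from Example 7.1.2) converges at \(x = -1\) but not at \(x = 1\text{,}\) and the third converges at both \(x = -1\) and \(x = 1\text{.}\)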
Functions represented by power series are called analytic functions. Not all functions are analytic, but the functions you have seen in calculus likely all are. An analytic function \(f(x)\) equals its Taylor series (named after the English mathematician Sir Brook Taylor, 1685–1731), a power series computed from \(f\text{,}\) for \(x\) near a given point \(x_0\text{:}\)
\begin{equation}
f(x) = \sum_{k=0}^\infty \frac{f^{(k)}(x_0)}{k!} {(x-x_0)}^k .\tag{7.2}
\end{equation}
For example, the Taylor series of \(\sin(x)\) at \(x_0 = 0\) is
\begin{equation}
\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = \sum_{k=0}^\infty \frac{{(-1)}^k}{(2k+1)!} x^{2k+1} .
\end{equation}
In Figure 7.2, we plot \(\sin(x)\) and the truncations of the series up to degree 5 and 9. You can see that the approximation is very good for \(x\) near 0, but gets worse for \(x\) further away from 0. This is what happens in general. To get a good approximation far away from \(x_0\text{,}\) you need to take more and more terms of the Taylor series.
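To make the comparison concrete, evaluate the degree 5 truncation at a point near \(0\) and at a point farther away:
\begin{equation}
0.5 - \frac{0.5^3}{3!} + \frac{0.5^5}{5!} \approx 0.4794271 , \qquad \sin(0.5) \approx 0.4794255 ,
\end{equation}
while
\begin{equation}
3 - \frac{3^3}{3!} + \frac{3^5}{5!} = 0.525 , \qquad \sin(3) \approx 0.14112 .
\end{equation}
Near zero the truncation is accurate to about five decimal places; at \(x = 3\) it is not even close.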
One of the main properties of power series that we will use is that we can differentiate them term by term. That is, suppose that \(\sum a_k {(x-x_0)}^k\) is a convergent power series. Then for \(x\) within the radius of convergence, we have
\begin{equation}
\frac{d}{dx} \left( \sum_{k=0}^\infty a_k {(x-x_0)}^k \right) = \sum_{k=1}^\infty k a_k {(x-x_0)}^{k-1} .
\end{equation}
Notice that the term corresponding to \(k=0\) disappeared as it was constant. The radius of convergence of the differentiated series is the same as that of the original.
We reindex the series by simply replacing \(k\) with \(k+1\text{:}\)
\begin{equation}
\sum_{k=1}^\infty k a_k {(x-x_0)}^{k-1} = \sum_{k=0}^\infty (k+1) a_{k+1} {(x-x_0)}^k .
\end{equation}
The series does not change; what changes is simply how we write it. After reindexing, the series starts at \(k=0\) again.
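For example, differentiating the series for \(e^x\) from Example 7.1.1 term by term and then reindexing recovers the same series, as it should:
\begin{equation}
\frac{d}{dx} \left( \sum_{k=0}^\infty \frac{1}{k!} x^k \right) = \sum_{k=1}^\infty \frac{k}{k!} x^{k-1} = \sum_{k=1}^\infty \frac{1}{(k-1)!} x^{k-1} = \sum_{k=0}^\infty \frac{1}{k!} x^k .
\end{equation}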
Convergent power series can be added and multiplied together, and multiplied by constants using the following rules. First, we can add series by adding term by term,
\begin{equation}
\left( \sum_{k=0}^\infty a_k {(x-x_0)}^k \right) + \left( \sum_{k=0}^\infty b_k {(x-x_0)}^k \right) = \sum_{k=0}^\infty (a_k + b_k) {(x-x_0)}^k .
\end{equation}
We can multiply a series by a constant,
\begin{equation}
\alpha \left( \sum_{k=0}^\infty a_k {(x-x_0)}^k \right) = \sum_{k=0}^\infty \alpha a_k {(x-x_0)}^k .
\end{equation}
And we can multiply series together,
\begin{equation}
\left( \sum_{k=0}^\infty a_k {(x-x_0)}^k \right) \left( \sum_{k=0}^\infty b_k {(x-x_0)}^k \right) = \sum_{k=0}^\infty c_k {(x-x_0)}^k ,
\end{equation}
where \(c_k = a_0b_k + a_1 b_{k-1} + \cdots + a_k b_0\text{.}\) The radius of convergence of the sum or the product is at least the minimum of the radii of convergence of the two series involved.
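For instance, multiplying the series for \(e^x\) by itself, the product formula together with the binomial theorem gives
\begin{equation}
c_k = \sum_{j=0}^k \frac{1}{j!} \cdot \frac{1}{(k-j)!} = \frac{1}{k!} \sum_{j=0}^k \binom{k}{j} = \frac{2^k}{k!} ,
\end{equation}
which is exactly the coefficient of \(x^k\) in the series for \(e^{2x}\text{,}\) as expected since \(e^x \cdot e^x = e^{2x}\text{.}\)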
Subsection 7.1.5 Power series for rational functions
Polynomials are simply finite power series. That is, a polynomial is a power series where the \(a_k\) are zero for all \(k\) large enough. We can always expand a polynomial as a power series about any point \(x_0\) by writing the polynomial as a polynomial in \((x-x_0)\text{.}\) For example, let us write \(2x^2-3x+4\) as a power series around \(x_0 = 1\text{:}\)
\begin{equation}
2x^2-3x+4 = 3 + (x-1) + 2{(x-1)}^2 .
\end{equation}
In other words, \(a_0 = 3\text{,}\) \(a_1 = 1\text{,}\) \(a_2 = 2\text{,}\) and all other \(a_k = 0\text{.}\) To do this, we know that \(a_k = 0\) for all \(k \geq 3\text{,}\) as the polynomial is of degree 2. We write \(a_0 + a_1(x-1) + a_2{(x-1)}^2\text{,}\) we expand, and we solve for \(a_0\text{,}\) \(a_1\text{,}\) and \(a_2\text{.}\) We could have also differentiated at \(x=1\) and used the Taylor series formula (7.2).
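As a check, one can expand the right-hand side back out:
\begin{equation}
3 + (x-1) + 2{(x-1)}^2 = 3 + (x-1) + 2(x^2 - 2x + 1) = 2x^2 - 3x + 4 .
\end{equation}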
Let us look at rational functions, that is, ratios of polynomials. An important fact is that a series for a function only defines the function on an interval, even if the function is defined elsewhere. For example, for \(-1 < x < 1\text{,}\)
\begin{equation}
\frac{1}{1-x} = \sum_{k=0}^\infty x^k .
\end{equation}
This series is called the geometric series. The ratio test tells us that the radius of convergence is \(1\text{.}\) The series diverges for \(x \leq -1\) and \(x \geq 1\text{,}\) even though \(\frac{1}{1-x}\) is defined for all \(x \not= 1\text{.}\)
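Substituting into the geometric series produces many other expansions. For example, replacing \(x\) by \(-x^2\) gives
\begin{equation}
\frac{1}{1+x^2} = \sum_{k=0}^\infty {(-1)}^k x^{2k} ,
\end{equation}
again valid for \(-1 < x < 1\text{.}\)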
Instead of applying the Taylor series formula (7.2), we can use the geometric series together with the rules for addition and multiplication of power series to expand any rational function around a point \(x_0\text{,}\) as long as the denominator is not zero at \(x_0\text{.}\)
For example, let us expand \(\frac{x}{1+2x+x^2}\) as a power series around the origin \(x_0 = 0\text{,}\) and find the radius of convergence. Write \(1+2x+x^2 = {(1+x)}^2\text{,}\) and note that for \(-1 < x < 1\text{,}\)
\begin{equation}
\frac{1}{1+x} = \sum_{k=0}^\infty {(-1)}^k x^k
\end{equation}
(substitute \(-x\) into the geometric series). Therefore,
\begin{equation}
\frac{x}{1+2x+x^2} = x \left( \sum_{k=0}^\infty {(-1)}^k x^k \right) \left( \sum_{k=0}^\infty {(-1)}^k x^k \right) = x \left( \sum_{k=0}^\infty c_k x^k \right) ,
\end{equation}
where to get \(c_k\text{,}\) we use the formula for the product of series: \(c_0 = 1\text{,}\) \(c_1 = -1 -1 = -2\text{,}\) \(c_2 = 1+1+1 = 3\text{,}\) etc. Therefore
\begin{equation}
\frac{x}{1+2x+x^2}
=
\sum_{k=1}^\infty {(-1)}^{k+1} k x^k
= x-2x^2+3x^3-4x^4+\cdots
\end{equation}
The radius of convergence is at least 1, as each series used in the computation converges on \((-1,1)\text{.}\) We use the ratio test on the resulting coefficients to see that the radius is in fact equal to 1:
\begin{equation}
A = \lim_{k\to\infty} \left \lvert \frac{{(-1)}^{k+2} (k+1)}{{(-1)}^{k+1} k} \right \rvert = \lim_{k\to\infty} \frac{k+1}{k} = 1 .
\end{equation}
So the radius of convergence is \(\nicefrac{1}{A} = 1\text{.}\)
When the rational function is more complicated, it is also possible to use the method of partial fractions. For example, to find the Taylor series for \(\frac{x^3+x}{x^2-1}\text{,}\) we write
\begin{equation}
\frac{x^3+x}{x^2-1}
=
x + \frac{1}{1+x} - \frac{1}{1-x}
=
x + \sum_{k=0}^\infty {(-1)}^k x^k - \sum_{k=0}^\infty x^k
=
x + \sum_{k=0}^\infty \bigl( {(-1)}^k - 1 \bigr) x^k
=
- x + \sum_{\substack{k=3 \\ k \text{ odd}}}^\infty (-2) x^k .
\end{equation}
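Both series on the right converge for \(-1 < x < 1\text{,}\) so the radius of convergence of the resulting series is \(1\text{.}\)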
Determine the Taylor series of \(\dfrac{x}{4-x^2}\) around \(x_0 = 0\text{,}\) and its radius of convergence. Hint: You will not be able to use the ratio test in the form given above.
Suppose that the ratio test applies to a series \(\displaystyle \sum_{k=0}^\infty a_k x^k\text{.}\) Show, using the ratio test, that the radius of convergence of the differentiated series is the same as that of the original series.
Expand \(\frac{1}{1-x}\) as a power series around \(x_0 = 2\text{,}\) and find where it converges. Answer: \(\frac{1}{1-x} = -\frac{1}{1-(2-x)}\text{,}\) so \(\frac{1}{1-x} =
-\sum\limits_{n=0}^\infty {(2-x)}^n = \sum\limits_{n=0}^\infty {(-1)}^{n+1} {(x-2)}^n\text{,}\) which converges for \(1 < x < 3\text{.}\)
(challenging) Imagine \(f\) and \(g\) are analytic functions such that \(f^{(k)}(0) = g^{(k)}(0)\) for all large enough \(k\text{.}\) What can you say about \(f(x)-g(x)\text{?}\)
a) \(\displaystyle \sum_{k=6}^\infty (k-3)(k-4)x^{k}\) b) \(\displaystyle \sum_{k=0}^\infty (k+2)x^{k}\) c) \(\displaystyle \sum_{k=3}^\infty 2(k+2)x^{k}\)