Section 2.1 Power series
Subsection 2.1.1 Motivation
One of the limitations of the elementary techniques for solving differential equations is that very few types of equations can be solved exactly. Once we are dealing with equations that are second order or higher, we are pretty much restricted to equations with constant coefficients (\(ay'' + by' + cy = 0\)), or equations in special forms (like the Cauchy-Euler equations \(ax^2y'' + bxy' + cy = 0\)).
Example 2.1.1. Airy's equation.
A seemingly simple equation that cannot be solved with elementary techniques is Airy's equation, which is of the form
\begin{equation*}
y'' - xy = 0\text{.}
\end{equation*}
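A computer algebra system makes the point concrete: the general solution involves the special Airy functions \(\mathrm{Ai}\) and \(\mathrm{Bi}\text{,}\) not elementary functions. Here is a minimal sketch, assuming a recent version of SymPy that recognizes Airy's equation.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Airy's equation: y'' - x*y = 0. Recent versions of SymPy express the
# general solution in terms of the special functions Ai and Bi, which
# cannot be written as combinations of elementary functions.
sol = sp.dsolve(y(x).diff(x, 2) - x * y(x), y(x))
print(sol)  # Eq(y(x), C1*airyai(x) + C2*airybi(x))
```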
One problem that we run into is that many functions cannot be expressed as combinations of the elementary functions that we learn in calculus (for example, \(f(x) = \int e^{x^2}\, dx\)), and these functions are frequently the solutions to differential equations like the one in the example above. We turn, as we did in calculus, to power series representations of functions. Power series are one of the most important ideas developed in introductory calculus (though it is often the case that WHY you are learning about them isn't immediately obvious). We'll begin by recalling the definition of a power series, the question of when and where such a series converges, and some basic operations that we can perform.
Subsection 2.1.2 Definition of power series and convergence
Definition 2.1.2.
An infinite series of the form
\begin{equation*}
\sum_{n=0}^\infty a_n (x - x_0)^n = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + \ldots
\end{equation*}
is called a power series centered at \(x = x_0\text{.}\)
Because we can always make the substitution \(u = x-x_0\text{,}\) it is typically sufficient to consider series centered at \(0\text{:}\)
\begin{equation*}
\sum_{n=0}^\infty a_n x^n\text{.}
\end{equation*}
A power series converges at a point \(x\) if the sequence of partial sums converges - that is, if the limit
\begin{equation*}
\lim_{N \to \infty} \sum_{n=0}^{N} a_n x^n
\end{equation*}
exists and is equal to a finite value. The interval of convergence of a power series (centered at 0) is the largest interval of the form \(I = (-r,r)\) for which the series converges for each \(x \in (-r,r)\text{.}\) The quantity \(r\) is called the radius of convergence. There are three possibilities for the value of \(r\text{.}\)
Theorem 2.1.3.
For a series of the form given in (2.1.1), exactly one of the following holds.
- \(r = 0:\) \(\ps\) converges only for \(x = 0\text{.}\)
- \(r = \infty:\) \(\ps\) converges for every \(x \in \R\text{.}\)
- \(r\) is finite: There is a constant \(r > 0\) so that \(\ps\) converges for \(\abs{x} \lt r\) and diverges for \(\abs{x} \gt r\text{.}\)
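We can watch the trichotomy numerically. A minimal sketch, using the geometric series \(\sum x^n\) (which has \(r = 1\)) as an assumed test case: partial sums settle down inside the interval of convergence and blow up outside it.

```python
# Partial sums of the geometric series sum_{n=0}^N x^n, which has r = 1.
def partial_sum(x, N):
    return sum(x**n for n in range(N + 1))

for x in (0.5, 2.0):
    print(x, [partial_sum(x, N) for N in (5, 10, 20)])
# At x = 0.5 the partial sums approach 1/(1 - 0.5) = 2 (inside (-1,1));
# at x = 2.0 they grow without bound (outside the interval).
```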
The tool from calculus that is most useful in computing a radius of convergence for a power series is the ratio test (the reason is that only a single \(x\) will remain after computing the ratio).
Definition 2.1.4.
For the power series \(\ps\text{,}\) the radius of convergence \(r\) is given by \(r = \frac{1}{L}\text{,}\) where \(L\) is the limit
\begin{equation*}
L = \lim_{n \to \infty} \abs{\frac{a_{n+1}}{a_n}}\text{.}
\end{equation*}
If \(L = 0\text{,}\) then \(r = \infty\) (that is, the series converges for all values of \(x\)). If \(L = \infty\text{,}\) then \(r = 0\) (that is, the series only converges trivially at \(x = 0\)).
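The limit \(L\) can also be computed symbolically if you want a check on your work. A minimal SymPy sketch, using the hypothetical coefficients \(a_n = (2/5)^n\) (chosen purely for illustration):

```python
import sympy as sp

n = sp.symbols('n', integer=True, positive=True)
a = sp.Rational(2, 5)**n  # hypothetical coefficients a_n = (2/5)^n

# Ratio test: L = lim |a_{n+1}/a_n|, and r = 1/L (with r = oo when L = 0).
L = sp.limit(sp.Abs(a.subs(n, n + 1) / a), n, sp.oo)
r = sp.oo if L == 0 else 1 / L
print(L, r)  # prints: 2/5 5/2
```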
Example 2.1.5. Finding the radius of convergence.
Compute the radius of convergence of
We compute the limit
and so \(r = \frac{5}{2}\text{.}\)
Subsection 2.1.3 Algebra and calculus of power series
The beauty and utility in power series is that when they converge, they can be treated almost like giant polynomials. That is, we can perform algebra and calculus operations on convergent power series. To do so, first we make a function out of a power series by restricting the domain to the interval of convergence (so that the function is well-defined). That is, if \(r\) is the radius of convergence for \(\ps\text{,}\) then we can define
\begin{equation*}
f(x) = \sum_{n=0}^\infty a_n x^n, \qquad x \in (-r, r)\text{.}
\end{equation*}
Before we discuss operations involving power series, we note a fundamental fact (related to the identical fact about polynomials) - two power series are equal if and only if their corresponding coefficients are equal. That is,
\begin{equation*}
\sum_{n=0}^\infty a_n x^n = \sum_{n=0}^\infty b_n x^n \quad\text{if and only if}\quad a_n = b_n \text{ for all } n\text{.}
\end{equation*}
When we combine power series, we have to make sure that we work on a domain where both functions are defined - that is, if \(f\) and \(g\) have radii of convergence \(r_1, r_2\) respectively, then an algebraic combination of \(f,g\) is guaranteed to converge with radius at least \(r = \min\{r_1,r_2\}\text{.}\)
Let \(f(x) = \ps\) and \(g(x) =\psg\) with common radius of convergence \(r\text{.}\)
- Addition: \(f(x) +g(x) = \displaystyle\sum_{n=0}^\infty (a_n + b_n) x^n\text{.}\)
- Scalar multiplication: \(c f(x) = \displaystyle\sum_{n=0}^\infty (ca_n) x^n\text{.}\)
- Multiplication of series: \(f(x) g(x) = \displaystyle\sum_{n=0}^\infty c_n x^n\text{,}\) where the coefficients \(c_n\) are computed by \(c_n = \displaystyle \sum_{k=0}^n a_{n-k} b_k\) (this is essentially the result of a giant distribution and collection of like terms).
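Because each coefficient \(c_n\) is a finite sum, the multiplication rule is easy to carry out on truncated series. A minimal sketch (the function name and truncation length are my own choices):

```python
# Multiply two power series given their first few coefficients,
# using c_n = sum_{k=0}^n a_{n-k} b_k.
def multiply_series(a, b):
    n_terms = min(len(a), len(b))
    return [sum(a[n - k] * b[k] for k in range(n + 1)) for n in range(n_terms)]

# Both inputs are 1/(1 - x) = 1 + x + x^2 + ..., so the product should match
# 1/(1 - x)^2 = 1 + 2x + 3x^2 + 4x^3 + ...
print(multiply_series([1, 1, 1, 1], [1, 1, 1, 1]))  # [1, 2, 3, 4]
```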
Division can be defined as well (as long as the function in the denominator is not zero), but it is used less often in what we will be studying.
One of our first applications of power series will be to plug them into equations in order to figure out the coefficients. If we know the coefficients of a function given by a power series, then we know the function. Typically, finding the coefficients will involve solving a recurrence relation, which is a recursive process for determining coefficients from some number of known coefficients.
Example 2.1.6. Solving a recurrence relation.
Assume that the coefficients in the power series
\begin{equation*}
f(x) = \sum_{n=0}^\infty a_n x^n
\end{equation*}
satisfy the equation
\begin{equation*}
\sum_{n=0}^\infty a_n x^n - \sum_{n=1}^\infty n a_n x^{n-1} = 0\text{.}
\end{equation*}
Find a formula that gives any \(a_n\) in terms of \(a_0\text{.}\)
First, to combine power series, we'll need both the powers of the like terms and the ranges of the indices to match. Notice that if we replace \(n \rightarrow n + 1\) in the second series, and we change the range of the index accordingly, we get
\begin{equation*}
\sum_{n=1}^\infty n a_n x^{n-1} = \sum_{n=0}^\infty (n+1) a_{n+1} x^{n}\text{.}
\end{equation*}
As the powers of the corresponding terms match, and the ranges of the indices match, we can combine the series using addition to get
\begin{equation*}
\sum_{n=0}^\infty \left( a_n - (n+1) a_{n+1} \right) x^n = 0\text{.}
\end{equation*}
Because a convergent power series can only be equal to zero if the coefficients are all zero, we get the recurrence relation
\begin{equation*}
a_n - (n+1) a_{n+1} = 0, \quad n \geq 0\text{,}
\end{equation*}
which can be written
\begin{equation*}
a_{n+1} = \frac{a_n}{n+1}\text{.}
\end{equation*}
This gives a family of equations that we can use to find each coefficient in terms of \(a_0\text{:}\)
\begin{equation*}
a_1 = a_0, \qquad a_2 = \frac{a_1}{2} = \frac{a_0}{2}, \qquad a_3 = \frac{a_2}{3} = \frac{a_0}{3 \cdot 2}, \qquad a_4 = \frac{a_3}{4} = \frac{a_0}{4 \cdot 3 \cdot 2}\text{.}
\end{equation*}
From these, a pattern emerges that won't require us to work recursively.
We can guess that the general formula for \(a_n\) is
\begin{equation*}
a_n = \frac{a_0}{n!}\text{.}
\end{equation*}
(Notice that I haven't claimed to have proved this, because that would require mathematical induction. For our purposes in this course, if a pattern looks to hold, we will assume that it continues indefinitely.)
Under the assumption that our formula for \(a_n\) holds for all \(n\text{,}\) we can say
\begin{equation*}
f(x) = \sum_{n=0}^\infty \frac{a_0}{n!} x^n = a_0 \sum_{n=0}^\infty \frac{x^n}{n!}\text{,}
\end{equation*}
which you might notice is the Taylor series at 0 for the function \(f(x) = a_0 e^x\text{.}\)
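As a quick numerical sanity check on the guessed pattern, we can iterate the recurrence and compare against the closed-form guess \(a_n = a_0/n!\text{;}\) a minimal sketch:

```python
from math import factorial

# Iterate the recurrence a_{n+1} = a_n/(n+1) starting from a_0 = 1.
a0 = 1.0
a = [a0]
for n in range(9):
    a.append(a[n] / (n + 1))

# Each iterated value agrees with the guessed formula a_n = a_0/n!.
for n, an in enumerate(a):
    assert abs(an - a0 / factorial(n)) < 1e-12
print(a[:5])  # [1.0, 1.0, 0.5, 0.16666..., 0.04166...]
```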
Just like algebra, we can perform calculus on power series in the obvious way (treating them like giant polynomials). Suppose that \(f\) is defined by a power series with domain equal to the interval of convergence, so that
\begin{equation*}
f(x) = \sum_{n=0}^\infty a_n x^n, \qquad x \in (-r,r)\text{.}
\end{equation*}
Then we can differentiate \(f\) term by term to any order of derivative (after all, we're never going to run out of \(x\)es):
\begin{equation*}
f'(x) = \sum_{n=1}^\infty n a_n x^{n-1}, \qquad f''(x) = \sum_{n=2}^\infty n(n-1) a_n x^{n-2}\text{,}
\end{equation*}
and so on. Power series can also be integrated term by term, and it is straightforward to write down the formula. (You might consider doing it as an exercise.)
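Term-by-term differentiation is also easy to check on a truncated series. A minimal SymPy sketch, reusing the coefficients \(a_n = 1/n!\) from the example above: differentiating the degree-\(N\) truncation reproduces the degree-\((N-1)\) truncation.

```python
import sympy as sp

x = sp.symbols('x')
N = 8
# Degree-N truncation of the series with a_n = 1/n! (the series for e^x).
f = sum(x**n / sp.factorial(n) for n in range(N + 1))

# Differentiating term by term drops the top term: f' equals the
# degree-(N-1) truncation of the same series.
diff_f = sp.diff(f, x)
print(sp.simplify(diff_f - (f - x**N / sp.factorial(N))))  # prints: 0
```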
Subsection 2.1.4 Analytic functions (a connection with complex analysis)
A function with a power series defined on some interval \((a,b)\) is particularly nice because the power series implies the existence of derivatives of all orders. Such functions are called smooth. (Note that smooth functions need not have a convergent power series, which is a weird but deep fact of real analysis!) Power series representations are so special that we single out the functions that have them as a special class.
Definition 2.1.7.
A function \(f\) with a convergent power series representation on some non-trivial interval of convergence \((a,b)\) is called an analytic function on \((a,b)\text{.}\)
Analytic functions are the nicest-behaved functions in calculus and its applications. One of the major theorems of calculus is that if a function is analytic, then it has a unique power series representation, called the Taylor series for \(f\text{.}\) Even better, the series can be computed: the coefficients of the Taylor series centered at \(0\) are given by \(a_n = \frac{f^{(n)}(0)}{n!}\text{.}\)
The functions of introductory calculus are nearly all analytic, many on the entire real line. You should be familiar with the power series expansions of those functions derived from exponential functions.
- \(\displaystyle e^x = 1 + x + \frac{1}{2!} x^2 + \frac{1}{3!} x^3 + \ldots = \sum_{n=0}^\infty \frac{1}{n!} x^n\)
- \(\displaystyle \cos x = 1 - \frac{1}{2!} x^2 + \frac{1}{4!} x^4 + \ldots = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} x^{2n}\)
- \(\displaystyle \sin x = x - \frac{1}{3!} x^3 + \frac{1}{5!} x^5 + \ldots = \sum_{n=0}^\infty \frac{(-1)^n}{(2n + 1)!} x^{2n+ 1}\)
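If you've forgotten one of these expansions, a computer algebra system can regenerate it. A minimal SymPy sketch (series computes the Taylor expansion at a point, here truncated after the \(x^5\) term):

```python
import sympy as sp

x = sp.symbols('x')
# Taylor expansions at 0, truncated after the x^5 term.
for f in (sp.exp(x), sp.cos(x), sp.sin(x)):
    print(sp.series(f, x, 0, 6))
# 1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120 + O(x**6)
# 1 - x**2/2 + x**4/24 + O(x**6)
# x - x**3/6 + x**5/120 + O(x**6)
```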
Because we can combine power series using algebra to get new power series, we can do the same with the analytic functions that they represent. That is, if \(f, g\) are analytic, so are algebraic combinations of them.
Theorem 2.1.8.
If \(f, g\) are analytic functions on a common interval \((a,b)\text{,}\) then so are \(f + g, f - g, fg\text{,}\) and \(\frac{f}{g}\) (as long as \(g(x) \neq 0\)).
The most useful examples of analytic functions are polynomials! (From this perspective, polynomials are just finite power series.) Polynomials are analytic at every point, and therefore so are \(f + g, f - g\text{,}\) and \(f g\) whenever \(f\) and \(g\) are polynomials. Things are a bit more complicated in the case of rational functions \(\frac{f}{g}\text{,}\) because the zeros of \(g\) determine the radius of convergence of the quotient.
Here we come across one of the first places where the analysis of analytic functions is secretly taking place in the complex plane. Consider the function
\begin{equation*}
f(x) = \frac{1}{1 + x^2}\text{.}
\end{equation*}
The denominator \(1 + x^2\) doesn't possess any real zeros. Does this mean that the power series for \(f\) converges everywhere? Let's see. Using the fact that \(\frac{1}{1 - x} = 1 + x + x^2 + \ldots\) for \(\abs{x} \lt 1\text{,}\) we can show that
\begin{equation*}
f(x) = \frac{1}{1 - (-x^2)} = 1 - x^2 + x^4 - x^6 + \ldots = \sum_{n=0}^\infty (-1)^n x^{2n}
\end{equation*}
as long as \(\abs{-x^2} \lt 1\text{,}\) which is just \(\abs{x} \lt 1\text{.}\) So even though \(1 + x^2\) has no real zeros, the power series for \(f\) only has radius of convergence \(1\text{!}\) What's going on?
It turns out that when we're looking at quotients of analytic functions, we need to consider all of the zeros of the denominator, real and complex. For our example, \(1 + x^2 = 0\) has solutions \(\pm i\text{,}\) which are a distance of 1 away from the origin of the complex plane (the point 0 on the real line). Since the closest zero to the center of our power series is 1 unit away, the radius of convergence is 1.
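We can watch the failure happen numerically: truncations of \(1 - x^2 + x^4 - \ldots\) approach \(\frac{1}{1+x^2}\) inside \(\abs{x} \lt 1\) but blow up outside, even though the function itself is perfectly well behaved there. A minimal sketch:

```python
# Truncations of the series 1 - x^2 + x^4 - ... for 1/(1 + x^2).
def truncation(x, N):
    return sum((-1)**n * x**(2 * n) for n in range(N + 1))

for x in (0.5, 1.5):
    print(x, 1 / (1 + x**2), [truncation(x, N) for N in (5, 10, 20)])
# At x = 0.5 the truncations approach 1/(1 + 0.25) = 0.8; at x = 1.5 they
# oscillate with growing magnitude, even though 1/(1 + 2.25) = 0.3077...
# is a perfectly finite value.
```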
Theorem 2.1.9.
If \(p(x), q(x)\) are polynomials and \(q(x_0) \neq 0\text{,}\) then the radius of convergence of the power series representation of \(\frac{p(x)}{q(x)}\) centered at \(x = x_0\) is the distance from \(x_0\) to the closest root (real or complex) of \(q\text{.}\)
Example 2.1.10. Radius of convergence for a rational function.
Compute the radius of convergence of the power series centered at \(x = 0\) of a rational function whose denominator is \(q(x) = (x+3)(x^2+3)\text{.}\)
The zeroes of \(q\) are \(x_1= -3, x_2 = i\sqrt{3}, x_3 = -i\sqrt{3}\text{.}\) Using the distance formula, we can compute that the distances to the origin are \(d_1 = 3, d_2 = \sqrt{3}, d_3 = \sqrt{3}\text{,}\) and so the radius of convergence is \(r = \sqrt{3}\text{.}\)
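This kind of computation is easy to automate. A minimal SymPy sketch, where the denominator \(q(x) = (x+3)(x^2+3)\) is reconstructed from the zeros listed above (any numerator that shares no roots with \(q\) gives the same answer):

```python
import sympy as sp

x = sp.symbols('x')
q = (x + 3) * (x**2 + 3)  # denominator with the zeros listed above
x0 = 0                    # center of the power series

# Radius of convergence = distance from x0 to the nearest root of q,
# measured in the complex plane.
r = min(abs(sp.N(root - x0)) for root in sp.solve(q, x))
print(r)  # 1.73205080756888  (= sqrt(3))
```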