Section 2.2 Series solutions at ordinary points
Subsection 2.2.1 Ordinary points
Consider the second order linear differential equation \(y'' + p(x) y' + q(x) y = 0\text{.}\)
The fact that we have functions as “coefficients” for the terms means that our previous tools do not apply. Instead, we'll try to discover solutions using power series and recurrence relations. It will be much easier to do this if \(p\) and \(q\) are well-behaved where we center our power series.
Definition 2.2.1.
A point \(x = x_0\) is an ordinary point for the differential equation \(y'' + p(x) y' + q(x) y = 0\)
if \(p\) and \(q\) are analytic at \(x = x_0\text{.}\) If a point is not ordinary, then it is called a singular point of the equation.
Example 2.2.2. Finding ordinary points.
Find the ordinary points of the differential equation
Because \(p, q\) fail to be analytic at \(x = \pm 1, 3\text{,}\) the equation is singular at those three points. Every other point is an ordinary point.
Subsection 2.2.2 Power series solutions at ordinary points
We now give an example of solving a differential equation with series solutions. We'll start with a problem that we already know the solution to, before moving on to solving Airy's equation. It will be useful to recall the expressions for the derivatives of convergent power series. If \(y = \ps\text{,}\) then \(y' = \sum_{n=1}^{\infty} n a_n x^{n-1}\) and \(y'' = \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2}\text{.}\)
Example 2.2.3. Finding series solutions.
Find the general solution to the differential equation \(y'' + y = 0\) using series methods.
We expect to find two linearly independent solutions, and we expect the general solution to be of the form \(y = c_1 y_1 + c_2 y_2\) for arbitrary constants \(c_1, c_2\) because we have a second order equation. Notice that \(p = 0\) and \(q = 1\text{,}\) which are analytic at every point, so we will work at the center \(x = 0\) and assume that our solution is of the form \(y = \ps\text{.}\)
On taking derivatives and plugging into the differential equation, we get
We want to combine these into a single sum and use a recurrence relation to find the solutions, but we need the terms and the index ranges to match. If we shift the index on the first sum by letting \(n - 2 \to n \text{,}\) we get
Now we can combine the series to get
which gives the family of coefficient equations
which is convenient to rearrange as
Notice that we need two pieces of information to determine all of the information contained in this recursion: \(n = 0\) starts a chain that gives the coefficients for the even values of \(n\text{,}\) and \(n = 1\) starts a chain that gives that information for the odd values (corresponding to the two separate independent solutions). We consider each family separately.
That is, the even terms seem to be of the form \(a_{2k} = \frac{(-1)^k}{(2k)!} a_0\text{.}\)
For the odd values of \(n\text{,}\) we get
which looks like \(a_{2k+1} = \frac{(-1)^k}{(2k+1)!} a_1\text{.}\)
Now that we know the coefficients of the power series, we can write
which allows us to write \(y = a_0 y_1 + a_1 y_2\) where
An easy application of the ratio test shows that each of these series converges for all values of \(x\) (that is, the radius of convergence is \(\infty\)).
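For instance, for the even series the ratio test computation looks like

```latex
\lim_{k \to \infty} \abs{\frac{a_{2k+2}\, x^{2k+2}}{a_{2k}\, x^{2k}}}
  = \lim_{k \to \infty} \frac{x^2}{(2k+2)(2k+1)} = 0,
```

which is less than \(1\) for every \(x\text{,}\) so the series converges everywhere; the computation for the odd series is identical apart from the shifted index.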
Finally, we need to verify that these series are linearly independent. With two objects, this will be true as long as the functions aren't scalar multiples of each other, which is indeed the case.
(We might note that none of this is surprising in this case because the series in question have very nice closed forms you should recognize on sight!)
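Indeed, a quick numerical check (an illustrative sketch; the helper names are not from the text) shows the truncated series reproducing those familiar functions:

```python
import math

def y1(x, terms=20):
    """Partial sum of the even series: sum (-1)^k x^(2k) / (2k)!."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

def y2(x, terms=20):
    """Partial sum of the odd series: sum (-1)^k x^(2k+1) / (2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 1.3
assert abs(y1(x) - math.cos(x)) < 1e-12   # y1 is the cosine series
assert abs(y2(x) - math.sin(x)) < 1e-12   # y2 is the sine series
print("the series are cos and sin")
```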
At an ordinary point, we will always follow the steps in the example above.
- Assume that a solution of the form \(y = \ps\) exists.
- Plug the series into the equation to get a recurrence relation and find the coefficients in terms of \(a_0\) and \(a_1\text{.}\)
- Use the ratio test to find the radius of convergence (which is where our solution is valid).
- Verify that the solutions are linearly independent.
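Steps 1–3 can be sketched numerically as follows (a minimal illustration, not a general-purpose solver; the function names are invented, and the recurrence is supplied by hand after plugging the series into the equation):

```python
import math

def series_solve(step, a0, a1, x, terms=30):
    """Evaluate the truncated power series solution at x.

    step(n, a) returns a_{n+2} in terms of the earlier coefficients
    a[0..n+1], encoding the recurrence from the differential equation.
    """
    a = [a0, a1]
    for n in range(terms - 2):
        a.append(step(n, a))
    return sum(c * x ** k for k, c in enumerate(a))

# Recurrence for y'' + y = 0: a_{n+2} = -a_n / ((n+2)(n+1)).
step = lambda n, a: -a[n] / ((n + 2) * (n + 1))

# With initial data a_0 = 2, a_1 = 3 the series reproduces 2 cos x + 3 sin x.
x = 0.5
assert abs(series_solve(step, 2.0, 3.0, x)
           - (2 * math.cos(x) + 3 * math.sin(x))) < 1e-12
print("general solution matches 2 cos x + 3 sin x")
```

Because the recurrence is linear, the solution with initial data \((a_0, a_1)\) is automatically \(a_0 y_1 + a_1 y_2\text{,}\) which is why two pieces of initial data suffice.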
Theorem 2.2.4.
Let \(p, q\) be analytic functions on some common interval \(\abs{x - x_0} \lt r\) centered at \(x_0\text{.}\) Then the general solution to the second order differential equation
can be represented as power series centered at \(x_0\) on the same interval. The coefficients in the series can be described in terms of the initial data \(a_0, a_1\text{,}\) and the general solution can be rearranged into the form
where \(y_1, y_2\) are linearly independent.
We conclude this chapter with the solution of Airy's equation.
Example 2.2.5. Airy's equation.
Solve the equation
Since \(p = 0\) and \(q = x\text{,}\) every point is an ordinary point. So we assume that a series solution of the form \(y = \ps\) exists. Plugging into the equation, we get
Multiplying through, we get
First, we'll make the powers match, then we'll shift indices as necessary.
Now that the powers of \(x\) match, the easiest way to make the index ranges match is to pop the first term off of the first sum.
Before we combine, notice that right away we can see that \(a_2 = 0\text{.}\) This is going to have an interesting effect on the resulting series.
Setting the coefficients to be 0, we get the family of relations
Because there is a separation of three index elements between the terms in the recursion, there will be three families of equations. Immediately, we can see that because \(a_2 = 0\text{,}\) the recursion implies that \(a_5, a_8, a_{11}, \ldots = 0\) as well. (From the theorem, we know that we should be able to get all the information we need in terms of \(a_0\) and \(a_1\text{.}\))
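The three chains can be seen concretely by iterating the recursion numerically. The sketch below assumes the equation reads \(y'' + x y = 0\) (so that \(q = x\) as above), which gives \(a_{n+3} = -a_n/\big((n+3)(n+2)\big)\); adjust the sign if your convention for Airy's equation differs.

```python
def airy_coefficients(a0, a1, count):
    """Coefficients for y'' + x y = 0 (sign convention assumed from q = x)."""
    a = [a0, a1, 0.0]            # a_2 = 0 comes from the constant term
    for n in range(count - 3):
        a.append(-a[n] / ((n + 3) * (n + 2)))
    return a

a = airy_coefficients(1.0, 1.0, 12)

# The chain started by a_2 is identically zero...
assert a[2] == a[5] == a[8] == a[11] == 0.0
# ...while the a_0 and a_1 chains pick up a factor -1/((n+3)(n+2)) each step.
assert abs(a[3] + 1 / 6) < 1e-15    # a_3 = -a_0 / (3*2)
assert abs(a[4] + 1 / 12) < 1e-15   # a_4 = -a_1 / (4*3)
assert abs(a[6] - 1 / 180) < 1e-15  # a_6 = -a_3 / (6*5)
print("three coefficient chains verified")
```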
For the set of coefficients beginning with \(a_0\text{,}\) we have
There is a clear pattern, though it is annoying to write in closed form.
For the set of coefficients beginning with \(a_1\text{,}\) we have
Then the general solution to the equation is
where
These power series converge everywhere, are clearly linearly independent, and unfortunately have no closed form; that is, the series representations are the best that we can do.
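Even without a closed form, the truncated series can be checked directly against the equation: substituting the degree-\(N\) partial sum into \(y'' + x y\) (again assuming the sign convention \(q = x\)) leaves only the few terms the truncation drops, so the residual at a fixed point is tiny. The helper below is an illustrative sketch.

```python
def residual(x, a0=1.0, a1=0.0, terms=30):
    """|P'' + x P| at x, where P is the truncated series solution
    of y'' + x y = 0 with initial data a0, a1."""
    a = [a0, a1, 0.0]
    for n in range(terms - 3):
        a.append(-a[n] / ((n + 3) * (n + 2)))
    P = sum(c * x ** k for k, c in enumerate(a))
    Pdd = sum(k * (k - 1) * c * x ** (k - 2)
              for k, c in enumerate(a) if k >= 2)
    return abs(Pdd + x * P)

# Only the three dropped tail terms survive the cancellation, and their
# coefficients shrink factorially, so the residual is far below roundoff size.
assert residual(0.5) < 1e-12
print("truncated series nearly satisfies the equation")
```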