Section 2.4 Series solutions at regular singularities
Subsection 2.4.1 Types of singular points
So far, we've looked at differential equations at ordinary points, specifically points where the coefficient functions \(p\) and \(q\) of
\begin{equation*} y'' + p(x) y' + q(x) y = 0 \end{equation*}
are analytic. That is, we've assumed that analytic solutions exist and proceeded with power series methods. However, a major area of interest in many applied settings is what happens at points where \(p\) and \(q\) fail to be analytic. These singular points are often very important to consider when analyzing a system. (You may need to know what happens to the behavior of the system as the input approaches a singular point.)
Definition 2.4.1.
A point \(x = x_0\) is a regular singular point of the equation\begin{equation*} y'' + p(x) y' + q(x) y = 0 \end{equation*}
if
- \(x_0\) is a singular point of the equation,
- and both\begin{equation*} \hat{p}(x) = (x - x_0)p(x), \hspace{1cm} \hat{q}(x) = (x-x_0)^2 q(x) \end{equation*}are analytic at \(x = x_0\text{.}\)
A singular point that does not satisfy these conditions is called an irregular singular point.
To give you some terminology (and a connection with complex analysis), a function \(f(x)\) for which \(f(x)(x - x_0)^n\) is analytic at \(x_0\) for \(n = N\) but singular at \(x_0\) for \(n \lt N\) is said to have a pole of order \(N\) at \(x_0\text{.}\) The point of the definition above is that for a second order equation, essentially the worst singular behavior we can have and still work with series directly is a second order pole. Similar statements apply to higher order equations.
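For instance (a function chosen just to illustrate the definition), the function\begin{equation*} f(x) = \frac{1}{x^2(x-1)} \end{equation*}has a pole of order \(2\) at \(x = 0\text{,}\) since \(x^2 f(x) = \frac{1}{x-1}\) is analytic at \(0\) while \(x f(x) = \frac{1}{x(x-1)}\) is not, and a pole of order \(1\) at \(x = 1\text{,}\) since \((x-1)f(x) = \frac{1}{x^2}\) is analytic at \(1\text{.}\)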
Example 2.4.2. Classifying points.
Classify the points of the equation
The equation clearly has singularities at \(x = 0, 1\text{.}\) For \(x = 0\text{,}\)
both of which are analytic at \(x = 0\text{.}\) Thus, \(x =0\) is a regular singular point.
On the other hand, at \(x = 1\text{,}\) we have
which is singular at \(x = 1\text{.}\) Thus, \(x =1\) is an irregular singular point.
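As one more illustration (with an equation chosen here for convenience), consider\begin{equation*} x(x-1)^2 y'' + y' + y = 0\text{,} \end{equation*}so that \(p(x) = q(x) = \frac{1}{x(x-1)^2}\text{.}\) At \(x = 0\text{,}\) both \(\hat{p}(x) = x p(x) = \frac{1}{(x-1)^2}\) and \(\hat{q}(x) = x^2 q(x) = \frac{x}{(x-1)^2}\) are analytic, so \(x = 0\) is a regular singular point. At \(x = 1\text{,}\) \(\hat{p}(x) = (x-1)p(x) = \frac{1}{x(x-1)}\) is singular at \(x = 1\text{,}\) so \(x = 1\) is an irregular singular point.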
Subsection 2.4.2 Series solutions at regular singular points
Suppose that \(x = x_0\) is a regular singular point for the equation\begin{equation*} y'' + p(x) y' + q(x) y = 0\text{.} \end{equation*}
Essentially we can remove the singular behavior by multiplying through by \((x - x_0)^2\) to get an equation of the form\begin{equation*} (x - x_0)^2 y'' + (x - x_0)\hat{p}(x) y' + \hat{q}(x) y = 0\text{,} \end{equation*}where \(\hat{p}(x) = (x-x_0)p(x), \hat{q}(x) = (x - x_0)^2 q(x)\) are analytic at \(x_0\text{.}\) Going one step further, we can make the substitution \(u = x - x_0\text{,}\) which moves the regular singular point to \(x=0\text{,}\) and so we can restrict our attention to equations of the form\begin{equation*} x^2 y'' + x p(x) y' + q(x) y = 0\text{,} \end{equation*}
where \(p, q\) are analytic at \(0\text{.}\) (We've essentially just pulled the singular behavior out of the coefficient functions to make it easier to analyze.)
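For example (an equation chosen just for illustration), the equation\begin{equation*} (x-1)^2 y'' + 3(x-1) y' + y = 0 \end{equation*}has a regular singular point at \(x = 1\text{,}\) and since \(u = x - 1\) leaves derivatives unchanged \(\left(\frac{dy}{dx} = \frac{dy}{du}\right)\text{,}\) the substitution produces\begin{equation*} u^2 \frac{d^2 y}{du^2} + 3u \frac{dy}{du} + y = 0\text{,} \end{equation*}which has the form above with its regular singular point at \(u = 0\text{.}\)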
The most elementary equation in this family is the Cauchy-Euler equation, where \(p, q\) are just constants \(p_0, q_0\text{.}\) (Under the substitution \(x = e^t\text{,}\) it becomes a second order linear constant coefficient homogeneous equation.) We can solve it directly using the method of the indicial equation.
Theorem 2.4.3.
Suppose we have a Cauchy-Euler equation\begin{equation*} x^2 y'' + p_0 x y' + q_0 y = 0\text{.} \end{equation*}
The equation has solutions on \((0,\infty)\) of the form \(y = x^r\text{,}\) where \(r\) is a solution to the indicial equation\begin{equation*} r(r-1) + p_0 r + q_0 = 0\text{.}\tag{2.4.1} \end{equation*}
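For instance (an equation chosen for illustration), the Cauchy-Euler equation\begin{equation*} x^2 y'' + 2x y' - 2y = 0 \end{equation*}has indicial equation \(r(r-1) + 2r - 2 = (r+2)(r-1) = 0\text{,}\) with roots \(r = 1, -2\text{.}\) The theorem then gives the solutions \(y = x\) and \(y = x^{-2}\) on \((0,\infty)\text{,}\) both of which are easy to verify by direct substitution.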
We can use the constant case to guide our method for more general coefficients. Suppose we have an equation\begin{equation*} x^2 y'' + x p(x) y' + q(x) y = 0 \end{equation*}with coefficient functions \(p, q\) analytic at \(0\text{.}\) Then we can assume that there is some positive radius of convergence \(R\) for which \(p, q\) both have convergent series expansions\begin{equation*} p(x) = \sum_{n=0}^{\infty} p_n x^n, \hspace{1cm} q(x) = \sum_{n=0}^{\infty} q_n x^n\text{.} \end{equation*}We can plug these in to get the equation\begin{equation*} x^2 y'' + x \left( \sum_{n=0}^{\infty} p_n x^n \right) y' + \left( \sum_{n=0}^{\infty} q_n x^n \right) y = 0\text{.} \end{equation*}
Now we have to make a guess. Notice that if \(x\) is very small, then \(p(x) \approx p_0\) and \(q(x) \approx q_0\text{,}\) which means that near 0, the equation is approximately the Cauchy-Euler equation\begin{equation*} x^2 y'' + p_0 x y' + q_0 y = 0\text{.} \end{equation*}
So we guess that the form of the series solution includes a factor \(x^r\) that describes the behavior of the function near \(x = 0\) (where the equation looks like a Cauchy-Euler equation) and a power series factor that picks up the behavior away from \(0\) on \((0,R)\text{.}\)
That is, using this intuition, we can guess that for the equation\begin{equation*} x^2 y'' + x p(x) y' + q(x) y = 0\text{,} \end{equation*}solutions will be of the form\begin{equation*} y = x^r \sum_{n=0}^{\infty} a_n x^n = \sum_{n=0}^{\infty} a_n x^{n+r}\text{,}\tag{2.4.2} \end{equation*}where as before, \(r\) is a solution to the indicial equation\begin{equation*} r(r-1) + p_0 r + q_0 = 0\text{.} \end{equation*}
This turns out to be the right idea, and series of the form (2.4.2) are called Frobenius series.
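A familiar function provides a quick example: since \(e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}\text{,}\) the function\begin{equation*} x^{1/2} e^x = \sum_{n=0}^{\infty} \frac{x^{n+1/2}}{n!} \end{equation*}is a Frobenius series with \(r = \frac{1}{2}\) and \(a_n = \frac{1}{n!}\text{.}\) It is not a power series at \(0\) because of the factor \(x^{1/2}\text{.}\)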
Theorem 2.4.4.
For \(x > 0\text{,}\) suppose that\begin{equation*} x^2 y'' + x p(x) y' + q(x) y = 0\tag{2.4.3} \end{equation*}
and that \(p, q\) are analytic at 0 with a mutual radius of convergence \(R\text{.}\) Write \(p(x) = \sum p_n x^n, q(x) = \sum q_n x^n\text{.}\) Let \(r_1, r_2\) be roots of the indicial equation (2.4.1), and assume that \(r_1 \geq r_2\) if the roots are real.
Then (2.4.3) has a Frobenius series solution of the form\begin{equation*} y_1 = x^{r_1} \sum_{n=0}^{\infty} a_n x^n\text{,} \end{equation*}
which converges at least on \((0,R)\text{.}\)
If \(r_1, r_2\) are distinct and do not differ by an integer, then there exists a second linearly independent Frobenius series solution of the form\begin{equation*} y_2 = x^{r_2} \sum_{n=0}^{\infty} b_n x^n\text{,} \end{equation*}also converging at least on \((0,R)\text{.}\)
The reason that we require the roots not to differ by an integer is that if they do, the recursion for the smaller root can break down or simply reproduce a multiple of the first solution, so this construction need not yield a second linearly independent solution. We'll address that case in the next section. First, let's look at an example of finding Frobenius series solutions. It is useful to note the following derivatives of a Frobenius series \(y = \sum_{n=0}^{\infty} a_n x^{n+r}\text{:}\)\begin{equation*} y' = \sum_{n=0}^{\infty} (n+r) a_n x^{n+r-1}, \hspace{1cm} y'' = \sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r-2}\text{.} \end{equation*}(Note that unlike with a power series, we do not drop the \(n = 0\) term when differentiating, since \(r\) need not be zero.)
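When we combine series below, we will also use the standard index shift\begin{equation*} \sum_{n=0}^{\infty} a_n x^{n+r+1} = \sum_{n=1}^{\infty} a_{n-1} x^{n+r}\text{,} \end{equation*}which lets us match powers of \(x\) across sums.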
Example 2.4.5. Using recursion to find a Frobenius series.
Find two linearly independent solutions to the equation
To check the form of the solutions, we'll put the problem into standard form\begin{equation*} x^2 y'' + \frac{3}{4} x y' + x y = 0\text{,} \end{equation*}so that \(p(x) = \frac{3}{4}, q(x) = x\text{,}\) both of which are analytic everywhere. We need \(p_0 = p(0) = \frac{3}{4}, q_0 = q(0) = 0\) to find \(r_1, r_2\text{.}\) The indicial equation is therefore\begin{equation*} r(r-1) + \frac{3}{4} r = r \left( r - \frac{1}{4} \right) = 0\text{,} \end{equation*}
which has roots \(r = 0, r = \frac{1}{4}\text{.}\) These are distinct real roots that do not differ by an integer, so we know that we have two linearly independent Frobenius solutions of the form\begin{equation*} y = \sum_{n=0}^{\infty} a_n x^{n+r}\text{.} \end{equation*}
Plugging in the expressions for the series derivatives, we get\begin{equation*} \sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r} + \frac{3}{4} \sum_{n=0}^{\infty} (n+r) a_n x^{n+r} + \sum_{n=0}^{\infty} a_n x^{n+r+1} = 0\text{.} \end{equation*}To combine the series, we can pull off the term for \(n=0\) on the left sums and get the expression\begin{equation*} \left( r(r-1) + \frac{3}{4} r \right) a_0 x^r\text{,} \end{equation*}which we should note leads directly to the indicial equation. For the remaining terms, we combine the series and get the recurrence relation\begin{equation*} \left( (n+r)(n+r-1) + \frac{3}{4}(n+r) \right) a_n + a_{n-1} = 0, \hspace{1cm} n \geq 1\text{,} \end{equation*}which simplifies to\begin{equation*} a_n = -\frac{a_{n-1}}{(n+r) \left( n + r - \frac{1}{4} \right)}\text{.} \end{equation*}
Now we can use the values of \(r\) that we determined before to write the Frobenius series for each value.
\(r = 0\text{:}\) Here, we have a standard power series recursion\begin{equation*} a_n = -\frac{a_{n-1}}{n \left( n - \frac{1}{4} \right)}, \hspace{1cm} n \geq 1\text{,} \end{equation*}which gives\begin{equation*} a_1 = -\frac{4}{3} a_0, \hspace{1cm} a_2 = -\frac{a_1}{2 \cdot \frac{7}{4}} = \frac{4^2}{2! \cdot 3 \cdot 7} a_0, \hspace{1cm} a_3 = -\frac{a_2}{3 \cdot \frac{11}{4}} = -\frac{4^3}{3! \cdot 3 \cdot 7 \cdot 11} a_0\text{.} \end{equation*}In general, this pattern looks something like\begin{equation*} a_n = \frac{(-1)^n 4^n}{n! \cdot 3 \cdot 7 \cdot 11 \cdots (4n-1)} a_0\text{.} \end{equation*}Thus, the Frobenius series is\begin{equation*} y_1 = a_0 \sum_{n=0}^{\infty} \frac{(-1)^n 4^n}{n! \cdot 3 \cdot 7 \cdot 11 \cdots (4n-1)} x^n\text{.} \end{equation*}
\(r = \frac{1}{4}\text{:}\) We'll use \(b_i\) to denote the coefficients of the second solution. Plugging in \(r\text{,}\) we get the recursion\begin{equation*} b_n = -\frac{b_{n-1}}{n \left( n + \frac{1}{4} \right)}, \hspace{1cm} n \geq 1\text{,} \end{equation*}which will give\begin{equation*} b_n = \frac{(-1)^n 4^n}{n! \cdot 5 \cdot 9 \cdot 13 \cdots (4n+1)} b_0\text{.} \end{equation*}Thus, a second independent solution is\begin{equation*} y_2 = b_0\, x^{1/4} \sum_{n=0}^{\infty} \frac{(-1)^n 4^n}{n! \cdot 5 \cdot 9 \cdot 13 \cdots (4n+1)} x^n\text{.} \end{equation*}
We could choose any value for \(a_0, b_0\) and have a solution to (2.4.6), so it is convenient to let both be 1, and then write the general solution\begin{equation*} y = c_1 \sum_{n=0}^{\infty} \frac{(-1)^n 4^n}{n! \cdot 3 \cdot 7 \cdot 11 \cdots (4n-1)} x^n + c_2\, x^{1/4} \sum_{n=0}^{\infty} \frac{(-1)^n 4^n}{n! \cdot 5 \cdot 9 \cdot 13 \cdots (4n+1)} x^n\text{.} \end{equation*}
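As a quick sanity check (a verification, not part of the derivation above): taking \(a_0 = 1\text{,}\) the first solution begins \(y_1 = 1 - \frac{4}{3}x + \frac{8}{21}x^2 - \cdots\text{,}\) and substituting into the standard form equation gives\begin{align*} x^2 y_1'' &= \tfrac{16}{21}x^2 - \cdots\\ \tfrac{3}{4} x\, y_1' &= -x + \tfrac{4}{7}x^2 - \cdots\\ x\, y_1 &= x - \tfrac{4}{3}x^2 + \cdots \end{align*}so the coefficients of \(x\) and \(x^2\) cancel \(\left( -1 + 1 = 0 \text{ and } \tfrac{16}{21} + \tfrac{12}{21} - \tfrac{28}{21} = 0 \right)\text{,}\) as they should.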