
Section 2.4 Series solutions at regular singularities

Subsection 2.4.1 Types of singular points

So far, we've looked at differential equations at ordinary points, specifically

\begin{equation*} y'' + p(x)y' + q(x)y = 0. \end{equation*}

That is, we've assumed that analytic solutions exist and proceeded with power series methods. However, a major area of interest in many applied settings is what happens at points where \(p\) and \(q\) fail to be analytic. These singular points are often very important to consider when analyzing a system. (You may need to know what happens to the behavior of the system as the input approaches a singular point.)

Definition 2.4.1.

A point \(x = x_0\) is a regular singular point of the equation

\begin{equation*} y'' + p(x)y' + q(x) y= 0 \end{equation*}

if

  1. \(x_0\) is a singular point of the equation,
  2. and both
    \begin{equation*} \hat{p}(x) = (x - x_0)p(x), \hspace{1cm} \hat{q}(x)=(x-x_0)^2 q(x) \end{equation*}
    are analytic at \(x = x_0\text{.}\)

A singular point that does not satisfy these conditions is called an irregular singular point.

To give you some terminology (and a connection with complex analysis), a function \(f(x)\) for which \(f(x)(x - x_0)^n\) is analytic at \(x_0\) for \(n = N\) but singular at \(x_0\) for \(n \lt N\) is said to have a pole of order \(N\) at \(x_0\). The point of the definition above is that for a second order equation, essentially the worst singular behavior we can have and still work with series directly is a first order pole in \(p\) and a second order pole in \(q\). Similar statements apply to higher order equations.
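For example, \(f(x) = \frac{1}{(x-1)^3}\) has a pole of order \(3\) at \(x = 1\text{,}\) since

\begin{equation*} (x-1)^3 f(x) = 1 \end{equation*}

is analytic at \(x = 1\text{,}\) while \((x-1)^2 f(x) = \frac{1}{x-1}\) is still singular there.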

Classify the singular points of the equation

\begin{equation*} y'' + \frac{1}{x(x-1)^2}y' + \frac{x}{x(x-1)^3} y = 0 \end{equation*}

The equation clearly has singularities at \(x = 0, 1\text{.}\) For \(x = 0\text{,}\)

\begin{equation*} \hat{p}(x) = x p(x) = \frac{1}{(x - 1)^2}, \hspace{1cm} \hat{q}(x) = x^2 q(x) = \frac{x^2}{(x-1)^3}, \end{equation*}

both of which are analytic at \(x = 0\text{.}\) Thus, \(x =0\) is a regular singular point.

On the other hand, at \(x = 1\text{,}\) we have

\begin{equation*} \hat{p}(x) = (x - 1) p(x) = \frac{1}{x(x-1)}, \end{equation*}

which is singular at \(x = 1\text{.}\) Thus, \(x =1\) is an irregular singular point.
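As an aside, for rational coefficient functions like these, the classification is easy to check computationally: \((x - x_0)p\) and \((x - x_0)^2 q\) are analytic at \(x_0\) exactly when they have finite limits there. Here is a minimal sympy sketch (an illustration, not part of the development above):

import sympy as sp

x = sp.symbols('x')
p = 1 / (x * (x - 1)**2)   # coefficient of y'
q = x / (x * (x - 1)**3)   # coefficient of y

for x0 in (0, 1):
    # for rational p and q, (x - x0)*p and (x - x0)**2 * q are analytic
    # at x0 exactly when these limits are finite
    p_hat = sp.limit((x - x0) * p, x, x0)
    q_hat = sp.limit((x - x0)**2 * q, x, x0)
    kind = 'regular' if (p_hat.is_finite and q_hat.is_finite) else 'irregular'
    print(f'x = {x0}: {kind} singular point')

This prints that \(x = 0\) is regular and \(x = 1\) is irregular, matching the computation above.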

Subsection 2.4.2 Series solutions at regular singular points

Suppose that \(x = x_0\) is a regular singular point for the equation

\begin{equation*} y'' + p(x)y' + q(x)y = 0. \end{equation*}

Essentially we can remove the singular behavior by multiplying through by \((x - x_0)^2\) to get an equation of the form

\begin{equation*} (x - x_0)^2 y'' + (x-x_0)\hat{p}(x) y' + \hat{q}(x) y = 0 \end{equation*}

where \(\hat{p} = (x-x_0)p(x), \hat{q} = (x - x_0)^2 q(x)\) are analytic at \(x_0\text{.}\) Going one step further, we can make the substitution \(u = x - x_0\text{,}\) which moves the regular singular point to \(x=0\text{,}\) and so we can restrict our attention to equations of the form

\begin{equation*} x^2 y'' + x p(x) y' + q(x) y = 0 \end{equation*}

where \(p, q\) are analytic at \(0\text{.}\) (We've essentially just pulled the singular behavior out of the coefficient functions to make it easier to analyze.)

The most elementary equation in this family is the Cauchy-Euler equation, where \(p, q\) are just constants \(p_0, q_0\text{.}\) In that case, we have a second order linear constant coefficient homogeneous equation, which we can solve using the method of the indicial equation.
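For example, for the Cauchy-Euler equation

\begin{equation*} x^2 y'' + 2x y' - 6y = 0, \end{equation*}

the guess \(y = x^r\) leads to the indicial equation

\begin{equation*} r(r-1) + 2r - 6 = (r+3)(r-2) = 0, \end{equation*}

so \(r = 2, -3\text{,}\) and the general solution on \((0, \infty)\) is \(y = c_1 x^2 + c_2 x^{-3}\text{.}\)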

We can use the constant case to guide our method for more general coefficients. Suppose we have an equation with coefficient functions

\begin{equation*} x^2 y'' + xp(x)y' + q(x) y = 0 \end{equation*}

with \(p,q\) analytic at \(0\text{.}\) Then we can assume that there is some positive radius of convergence \(R\) for which \(p, q\) both have convergent series expansions

\begin{align*} p(x) \amp= p_0 + p_1 x + p_2 x^2 + \ldots\\ q(x) \amp= q_0 + q_1 x + q_2 x^2 + \ldots \end{align*}

We can plug these in to get the equation

\begin{equation*} x^2 y'' + x\left(p_0 + p_1 x + p_2 x^2 + \ldots\right)y' + \left(q_0 + q_1 x + q_2 x^2 + \ldots \right)y = 0. \end{equation*}

Now we have to make a guess. Notice that if \(x\) is very small, then \(p(x) \approx p_0\) and \(q(x) \approx q_0\text{,}\) which means that near 0, the equation is approximately the Cauchy-Euler equation:

\begin{equation*} x^2 y'' + x\left(p_0 + p_1 x + p_2 x^2 + \ldots\right)y' + \left(q_0 + q_1 x + q_2 x^2 + \ldots \right)y \approx x^2 y'' + p_0 x y' + q_0 y = 0. \end{equation*}

So we guess that the form of the series solution includes a term \(x^r\) that describes the behavior of the function near \(x = 0\) in \((0,R)\) (where we're looking at a Cauchy-Euler equation) and a piece in terms of a power series that picks up the behavior away from \(0\text{.}\)

That is, using this intuition, we can guess that for the equation

\begin{equation*} x^2 y'' + x p(x) y' + q(x) y = 0, \end{equation*}

solutions will be of the form

\begin{equation} y(x) = x^r \sum_{n=0}^\infty a_n x^n, \,\,\,a_0 \neq 0\label{eq-frob}\tag{2.4.2} \end{equation}

where as before, \(r\) is a solution to the indicial equation

\begin{equation*} r(r-1) + p_0 r + q_0 = 0. \end{equation*}

This turns out to be the right idea, and series of the form (2.4.2) are called Frobenius series.
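Notice that this guess is consistent with the constant coefficient case: if \(p = p_0\) and \(q = q_0\text{,}\) we can take the series part to be the constant \(a_0\text{,}\) and we recover the Cauchy-Euler solution \(x^r\text{.}\)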

One can show (this is the content of Frobenius's theorem) that if the indicial equation has real roots \(r_1 \gt r_2\) that do not differ by an integer, then the equation has two linearly independent solutions of the form (2.4.2), one for each root, valid on an interval \((0, R)\text{.}\) The reason that we require the roots not to differ by an integer is that if they do, the resulting solutions may fail to be linearly independent. We'll address that case in the next section. First, let's look at an example of finding Frobenius series solutions. It is useful to note the following derivatives of Frobenius series:

\begin{equation*} y(x) = x^r \sum_{n=0}^\infty a_n x^n = \sum_{n=0}^\infty a_n x^{n + r} \end{equation*}
\begin{equation} y'(x) = \sum_{n=0}^\infty (n+r) a_n x^{n + r - 1}\label{eq-frob-first}\tag{2.4.4} \end{equation}
\begin{equation} y''(x) = \sum_{n=0}^\infty (n+r)(n+r - 1) a_n x^{n + r - 2}.\label{eq-frob-second}\tag{2.4.5} \end{equation}

Note that unlike an ordinary power series, the sums in (2.4.4) and (2.4.5) still start at \(n = 0\text{,}\) since \(r\) need not be a nonnegative integer and so differentiation does not kill the leading term.

Find two linearly independent solutions to the equation

\begin{equation} 4x^2 y'' + 3 x y' + xy = 0.\label{ex-frob}\tag{2.4.6} \end{equation}

To check the form of the solutions, we'll put the problem into standard form:

\begin{equation*} x^2 y'' + x \left(\frac{3}{4}\right) y' + \frac{x}{4} y = 0, \end{equation*}

so that \(p(x) = \frac{3}{4}, q(x) = \frac{x}{4}\text{,}\) both of which are analytic at \(0\text{.}\) We need \(p_0 = p(0) = \frac{3}{4}, q_0 = q(0) = 0\) to find \(r_1, r_2\text{.}\) The indicial equation is therefore

\begin{equation*} r(r-1) + \frac{3}{4} r + 0 = 0 \end{equation*}

which has roots \(r = 0, r = \frac{1}{4}\text{.}\) These are distinct real roots that do not differ by an integer, so we know that we have two linearly independent Frobenius solutions of the form

\begin{equation*} y_1(x) = \sum_{n=0}^\infty a_n x^n, \,\,\, y_2(x) = x^{1/4} \sum_{n=0}^\infty b_n x^n. \end{equation*}

Plugging the expressions (2.4.4) and (2.4.5) for the series derivatives into (2.4.6), keeping \(r\) general for now, we get

\begin{align*} 0 \amp = 4x^2 \sum_{n=0}^\infty (n+r)(n+r - 1) a_n x^{n + r - 2} + 3 x \sum_{n=0}^\infty (n+r) a_n x^{n + r - 1} + x \sum_{n=0}^\infty a_n x^{n + r}\\ \amp = \sum_{n=0}^\infty 4(n+r)(n+r -1) a_n x^{n + r} + \sum_{n=0}^\infty 3(n+r) a_n x^{n + r} + \sum_{n=0}^\infty a_n x^{n + r + 1}\\ \amp = \sum_{n=0}^\infty 4(n+r)(n+r -1) a_n x^{n + r} + \sum_{n=0}^\infty 3(n+r) a_n x^{n + r} + \sum_{n=1}^\infty a_{n - 1} x^{n + r} \end{align*}

To combine the series, we can pull off the \(n=0\) terms of the first two sums and get the expression

\begin{equation*} x^r (4r(r-1) + 3r)a_0 = 0 \end{equation*}

which we should note leads directly to the indicial equation. For the remaining terms, we combine the series and get the recurrence relation

\begin{equation*} [4(n + r)(n+r - 1) + 3(n+r)]a_n + a_{n-1} = 0, \hspace{1cm} n \geq 1, \end{equation*}

which simplifies to

\begin{equation*} a_n = -\frac{1}{4(n+r)(n+r - 1) + 3(n+ r)} a_{n-1}. \end{equation*}

Now we can use the values of \(r\) that we determined before to write the Frobenius series for each value.

\(r = 0\text{:}\) Here, we have a standard power series recursion

\begin{equation*} a_n = -\frac{1}{4n(n-1) + 3n} a_{n-1} = -\frac{1}{n(4n - 1)} a_{n-1} \end{equation*}

which gives

\begin{align*} n \amp= 1: a_1 = -\frac{1}{1 \cdot 3} a_0\\ n\amp =2: a_2 = -\frac{1}{2 \cdot 7}a_1 = \frac{1}{1 \cdot 3 \cdot 2 \cdot 7}a_0 = \frac{1}{2! (3 \cdot 7)}a_0\\ n\amp = 3: a_3 = -\frac{1}{3 \cdot 11} a_2 = -\frac{1}{3!(3 \cdot 7 \cdot 11)} a_0 \end{align*}

In general, induction gives

\begin{equation*} a_n = \frac{(-1)^n}{n! (3 \cdot 7 \cdot \ldots \cdot (4n -1))} a_0. \end{equation*}

Thus, the Frobenius series is

\begin{equation*} y_1(x) = a_0 \sum_{n=0}^\infty \frac{(-1)^n}{n! (3 \cdot 7 \cdot \ldots \cdot (4n -1))} x^n, \,\,\, x \in (0, \infty). \end{equation*}

\(r = \frac{1}{4}\text{:}\) We'll use \(b_n\) to denote the coefficients of the second solution. Plugging in \(r\text{,}\) we get the recursion

\begin{align*} b_n \amp= -\frac{1}{4(n+\frac{1}{4})(n+\frac{1}{4} - 1) + 3(n+ \frac{1}{4})} b_{n-1}\\ \amp= -\frac{1}{n(4n+1)} b_{n-1} \end{align*}

which will give

\begin{equation*} b_n = \frac{(-1)^n}{n!(5 \cdot 9 \cdot \ldots \cdot (4n + 1))} b_0. \end{equation*}

Thus, a second independent solution is

\begin{equation*} y_2(x) = b_0 x^{1/4} \sum_{n=0}^\infty \frac{(-1)^n}{n!(5 \cdot 9 \cdot \ldots \cdot (4n + 1))} x^n, \,\,\, x \in (0, \infty). \end{equation*}

We could choose any nonzero values for \(a_0, b_0\) and have solutions to (2.4.6), so it is convenient to let both be 1, and then write

\begin{align*} y_1(x) \amp= \sum_{n=0}^\infty \frac{(-1)^n}{n! (3 \cdot 7 \cdot \ldots \cdot(4n -1))} x^n, \,\,\, x \in (0, \infty)\\ y_2(x) \amp = x^{1/4} \sum_{n=0}^\infty \frac{(-1)^n}{n!(5 \cdot 9 \cdot \ldots \cdot (4n + 1))} x^n, \,\,\, x \in (0,\infty) \end{align*}
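As a sanity check (an illustrative sketch, not part of the derivation above), we can truncate both Frobenius series with sympy and confirm that plugging them into (2.4.6) cancels everything except a single tail term of order \(x^{N + r}\text{:}\)

import sympy as sp

x = sp.symbols('x')
N = 6  # truncation order

def frobenius_partial_sum(r0):
    # coefficients from the recurrence
    # a_n = -a_{n-1} / (4(n+r)(n+r-1) + 3(n+r)), normalized so a_0 = 1
    a = [sp.Integer(1)]
    for n in range(1, N):
        a.append(-a[-1] / (4*(n + r0)*(n + r0 - 1) + 3*(n + r0)))
    return x**r0 * sum(a[n] * x**n for n in range(N))

for r0 in (sp.Integer(0), sp.Rational(1, 4)):
    y = frobenius_partial_sum(r0)
    residual = 4*x**2 * sp.diff(y, x, 2) + 3*x * sp.diff(y, x) + x*y
    # every term cancels except the single tail term of order x**(N + r0)
    print(f'r = {r0}:', sp.expand(residual))

Only the tail term survives in each residual, confirming that all lower-order coefficients satisfy the recurrence.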