Section 2.3 Legendre series

Subsection 2.3.1 Legendre's equation

Physics and applied mathematics provide a wealth of differential equations, some of which are sufficiently common to carry special names, notably

\begin{gather*} (1 - x^2)y'' - 2x y' + c(c + 1)y = 0, \,\,\, \text{ (Legendre's equation)}\\ y'' - 2xy' + 2cy = 0, \,\,\, \text{ (Hermite's equation)}\\ (1 - x^2) y'' - xy' + c^2 y = 0 \,\,\, \text{ (Chebyshev's equation)} \end{gather*}

where \(c\) is some parameter that can take on any real value. All of these equations have an ordinary point at \(x = 0\text{,}\) and so have series solutions of the form \(y = \sum_{n=0}^{\infty} a_n x^n\text{.}\) The solutions to these equations for different values of \(c\) have special properties, particularly when \(c\) is an integer.

In this section, we'll investigate the solutions and properties of the Legendre equation. First, note that the equation as written is not in our standard form. We'll need to rewrite it to get an estimate for the interval of convergence of our series solution (though it will be convenient to leave it in its original form when we attempt to solve it). In the form

\begin{equation*} y'' - \frac{2x}{1 - x^2} y' + \frac{c(c+1)}{1 - x^2} y = 0, \end{equation*}

we can see that \(p, q\) are analytic at the very least on the interval \((-1,1)\text{,}\) as the power series for \(\frac{1}{1- x^2}\) has radius of convergence \(1\text{.}\)

Armed with the knowledge that our solutions exist, we can follow the usual technique to derive the solutions.

Derive the general solutions to the Legendre equation using recurrence relations.

In general, the solutions to Legendre's equation for a fixed constant \(c\) are of the form

\begin{equation*} y = y_1(x) + y_2(x) \end{equation*}

with

\begin{align*} y_1(x) \amp= a_0\left[1 - \frac{c(c+1)}{2}x^2 + \frac{(c-2)c(c+1)(c+3)}{4!} x^4\right. \\ \amp\left.\hspace{1in} - \frac{(c-4)(c-2)c(c+1)(c+3)(c+5)}{6!} x^6 +\ldots\right]\\ \\ y_2(x) \amp= a_1\left[x - \frac{(c-1)(c+2)}{3!} x^3 + \frac{(c-3)(c-1)(c+2)(c+4)}{5!}x^5 + \ldots\right] \end{align*}

Subsection 2.3.2 Legendre polynomial

It turns out that the most important case of the Legendre equation occurs when the constant \(c\) is an integer, which we emphasize by calling it \(c=N\text{.}\) In that case, the recursion that determines the solution to the equation is

\begin{equation*} a_{n+2} = \frac{n(n+1) - N(N+1)}{(n+1)(n+2)} a_n. \end{equation*}

Since \(N\) is fixed, this means that \(a_{N+2} = 0\text{,}\) and moreover that the subsequent terms in the recursion are zero as well:

\begin{equation*} a_{N+2} = a_{N+4} = a_{N+6} = \ldots = 0. \end{equation*}

That is, either \(y_1\) or \(y_2\) is a finite power series - a polynomial - depending on whether \(N\) is even or odd. These polynomials turn out to be quite useful.
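The truncation is easy to verify directly from the recurrence. Here is a minimal pure-Python sketch (the function name `legendre_coeffs` is ours) that generates the series coefficients exactly with rational arithmetic:

```python
from fractions import Fraction

def legendre_coeffs(N, a0=1, a1=1, nmax=10):
    """Series coefficients a_0, ..., a_nmax from the recurrence
    a_{n+2} = [n(n+1) - N(N+1)] / [(n+1)(n+2)] * a_n."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(nmax - 1):
        a.append(Fraction(n * (n + 1) - N * (N + 1), (n + 1) * (n + 2)) * a[n])
    return a

# For N = 4 the even-indexed coefficients vanish from a_6 on,
# so the even solution y_1 truncates to a degree-4 polynomial.
a = legendre_coeffs(4)
print(a[2], a[4], a[6], a[8])  # -10 35/3 0 0
```

Note that only the even-indexed coefficients truncate here; the odd series \(y_2\) remains an infinite power series when \(N\) is even.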

Definition 2.3.2.

Let \(N\) be a nonnegative integer. The Legendre polynomial of degree \(N\), denoted \(P_N(x)\text{,}\) is the polynomial solution to

\begin{equation*} (1- x^2) y'' - 2 x y' + N(N+1)y = 0, \end{equation*}

normalized so that \(P_N(1) = 1\text{.}\)

The first few Legendre polynomials are easy to write down. Remember that each polynomial is normalized to have \(P_N(1) = 1\text{.}\)

\begin{align*} N \amp= 0: y_1(x) = a_0 \Rightarrow P_0(x) = 1.\\ N \amp= 1: y_2(x) = a_1 x \Rightarrow P_1(x) = x.\\ N \amp= 2: y_1(x) = a_0(1 - 3x^2) \Rightarrow P_2(x) = \frac{1}{2}(3x^2 -1). \end{align*}

For larger values of \(N\text{,}\) we need a better method to produce these polynomials. Fortunately for us, computer algebra systems such as Sage, MATLAB, Mathematica, Octave, and Python have functions that produce these polynomials very quickly.

If we wish to proceed by hand, there is an explicit formula for generating \(P_N(x)\) known as Rodrigues's formula:

\begin{equation*} P_N(x) = \frac{1}{2^N N!} \frac{d^N}{dx^N} (x^2 - 1)^N, \hspace{1cm} N = 0, 1, 2, \ldots \end{equation*}
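Rodrigues's formula can also be carried out exactly by a short program: expand \((x^2-1)^N\text{,}\) differentiate the coefficient list \(N\) times, and rescale. A minimal pure-Python sketch (the function name `rodrigues` is ours):

```python
from fractions import Fraction
from math import comb, factorial

def rodrigues(N):
    """Coefficients of P_N(x), constant term first, from Rodrigues's formula:
    P_N = 1/(2^N N!) * d^N/dx^N (x^2 - 1)^N."""
    # Expand (x^2 - 1)^N by the binomial theorem.
    p = [Fraction(0)] * (2 * N + 1)
    for k in range(N + 1):
        p[2 * k] = Fraction(comb(N, k) * (-1) ** (N - k))
    # Differentiate the coefficient list N times.
    for _ in range(N):
        p = [Fraction(i) * p[i] for i in range(1, len(p))]
    scale = Fraction(1, 2**N * factorial(N))
    return [scale * c for c in p]

print(rodrigues(2))  # [Fraction(-1, 2), Fraction(0, 1), Fraction(3, 2)]
```

The output for \(N=2\) is the coefficient list of \(\frac{1}{2}(3x^2-1)\text{,}\) and summing any coefficient list (that is, evaluating at \(x=1\)) recovers the normalization \(P_N(1) = 1\text{.}\)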

Subsection 2.3.3 Legendre polynomials and vector spaces

Here we reach an instance of one of the most important larger themes of mathematical analysis: we can view functions with nice properties as vectors sitting inside an infinite-dimensional vector space. With certain modifications, these spaces act very similarly to the familiar Euclidean spaces \(\R^n\text{.}\) One of the most important features of vector spaces is that vectors can be decomposed into orthogonal pieces with respect to some inner product in terms of a basis. For example, we can think of the Taylor series for a function as expressing that function in terms of the vectors \(1, x, x^2, \ldots\text{,}\) where the coefficients \(\frac{f^{(n)}(a)}{n!}\) are really the coordinates of the vector.

The continuous functions on the interval \([-1,1]\) are denoted \(C^0[-1,1]\text{.}\) (We're using a symmetric interval in this case because the Legendre polynomials are even or odd.) This space is an (infinite-dimensional) inner product space with the inner product defined as

\begin{equation*} f \cdot g = \ip{f}{g} = \int_{-1}^1 f(x) g(x) \, dx. \end{equation*}

Following the analogy with \(\R^n\text{,}\) we define two vectors in an inner product space to be orthogonal if \(\ip{f}{g} = 0\text{.}\) Beyond a notion of “angle”, the inner product also gives a way to measure length with the norm defined in this case by

\begin{equation*} \norm{f} = \sqrt{\ip{f}{f}} = \sqrt{\int_{-1}^1 f(x)^2 \, dx}. \end{equation*}

See Annin, Goode, p. 744.

Furthermore, one can show that for a fixed nonnegative integer \(j\text{,}\)

\begin{equation*} \norm{P_j} = \sqrt{\frac{2}{2j+1}}. \end{equation*}
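Both the orthogonality of the Legendre polynomials and the norm formula are easy to check numerically. A minimal sketch in pure Python, evaluating \(P_n\) by Bonnet's recursion and approximating the integral with the midpoint rule (the function names `P` and `inner` are ours):

```python
def P(n, x):
    """Evaluate P_n(x) via Bonnet's recursion (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def inner(f, g, m=4000):
    """Approximate <f, g> = integral of f(x) g(x) over [-1, 1], midpoint rule."""
    h = 2.0 / m
    return h * sum(f(-1 + (i + 0.5) * h) * g(-1 + (i + 0.5) * h) for i in range(m))

# Orthogonality, and the norm formula ||P_j||^2 = 2/(2j + 1):
print(inner(lambda x: P(2, x), lambda x: P(3, x)))  # approximately 0
print(inner(lambda x: P(3, x), lambda x: P(3, x)))  # approximately 2/7
```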

One of the most important properties of an orthogonal basis for an inner product space is that generic vectors can be decomposed into orthogonal pieces. The set of Legendre polynomials \(P_0, \ldots, P_N\) forms an orthogonal basis for the polynomials of degree \(N\) or less, so we can compute the expansion of a polynomial in terms of Legendre polynomials by the formula

\begin{equation*} p(x) = \sum_{i=0}^{N} a_i P_i = \sum_{i=0}^{N} \frac{\ip{p}{P_i}}{\norm{P_i}^2} P_i. \end{equation*}

This should look familiar, as we are finding the coefficients \(a_i\) by using the vector projection formula \(\proj_v u = \frac{u\cdot v}{\norm{v}^2} v\text{.}\) Since we know that \(\norm{P_i}^2 = \frac{2}{2i + 1}\text{,}\) we get the specific expansion

\begin{equation*} p(x) = \sum_{i=0}^{N} a_i P_i(x), \text{ where } a_i = \frac{2i+1}{2} \int_{-1}^1 p(x) P_i(x) \, dx. \end{equation*}
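As a concrete check of the coefficient formula, consider \(p(x) = x^3\text{,}\) whose exact expansion is \(x^3 = \frac{3}{5}P_1(x) + \frac{2}{5}P_3(x)\text{.}\) A minimal pure-Python sketch computing the coefficients numerically (the function names are ours):

```python
def P(n, x):
    """Evaluate P_n(x) via Bonnet's recursion (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def legendre_coefficient(p, i, m=4000):
    """a_i = (2i+1)/2 * integral of p(x) P_i(x) over [-1, 1], midpoint rule."""
    h = 2.0 / m
    s = sum(p(-1 + (j + 0.5) * h) * P(i, -1 + (j + 0.5) * h) for j in range(m))
    return (2 * i + 1) / 2 * s * h

# Expand p(x) = x^3: exact coefficients are a_1 = 3/5 and a_3 = 2/5,
# with a_0 = a_2 = 0 because x^3 is odd.
coeffs = [legendre_coefficient(lambda x: x**3, i) for i in range(4)]
print(coeffs)
```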

The real power of the expansion idea is that it holds for general functions, not just polynomials, under the additional assumption that \(f \in C^1(-1,1)\text{;}\) that is, both \(f, f'\) are continuous.

It is not at all obvious that an infinite series of functions should converge, nor what it means for a series of functions to converge to another function. Mathematicians wrestled with these ideas for more than a century while establishing the basics of modern function theory. A proof of the expansion result above will require you to consult an advanced textbook on spaces of functions. For now, accept the assertion that it does indeed have a proof, and that mathematicians, physicists, and engineers were using these sorts of ideas for far longer than they could rigorously justify them.

You might wonder why we would prefer to work with Legendre series rather than the Taylor series that are so easy to compute. First, Taylor series exist only for analytic functions, while Legendre series exist for functions that are merely continuously differentiable. Second, it turns out that working numerically with Taylor approximations is actually pretty terrible; the term used in numerical analysis is ill-conditioned. Legendre expansions are significantly more stable in computations. In many cases, Legendre series also converge to the function being approximated much more quickly than Taylor series, which we can illustrate with the following example.

The function \(\sin \pi x\) completes one period on the interval \((-1, 1)\) and obviously has a continuous derivative. The Taylor expansion of \(\sin \pi x\) to degree 5 is given by the formula

\begin{equation*} T_5(x) = \pi x - \frac{\pi^3 x^3}{6} + \frac{\pi^5 x^5}{120} \end{equation*}

It is straightforward to compute the first few terms in the Legendre expansion as follows: First, note that since sine is odd, there will be no terms of even degree. That is, \(\sin \pi x \approx a_1 P_1(x) + a_3 P_3(x) + a_5 P_5(x)\text{.}\)

\begin{align*} a_1 \amp= \frac{2(1) + 1}{2} \int_{-1}^1 x \sin \pi x\, dx = \frac{3}{\pi}\\ a_3 \amp= \frac{2(3) + 1}{2} \int_{-1}^1 \frac{1}{2}(5x^3 - 3x) \sin\pi x \, dx = \frac{7}{\pi^3}(\pi^2 - 15)\\ a_5 \amp= \frac{2(5) + 1}{2} \int_{-1}^1 P_5(x) \sin \pi x \, dx = \frac{11}{\pi^5} (945 - 105\pi^2 + \pi^4) \end{align*}

Let us compare these approximations.

Notice how much the Taylor approximation deviates toward the ends of the interval, while the Legendre approximation, even at degree 5, stays remarkably close to the sine function across all of \([-1,1]\text{.}\)
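The comparison can also be made quantitative. A minimal pure-Python sketch that measures the maximum error of each degree-5 approximation over \([-1,1]\text{,}\) using the coefficients computed above (the function names are ours):

```python
from math import sin, pi

def P(n, x):
    """Evaluate P_n(x) via Bonnet's recursion (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

# Legendre coefficients of sin(pi x) computed above.
a1 = 3 / pi
a3 = 7 * (pi**2 - 15) / pi**3
a5 = 11 * (945 - 105 * pi**2 + pi**4) / pi**5

def taylor5(x):
    return pi * x - (pi * x) ** 3 / 6 + (pi * x) ** 5 / 120

def legendre5(x):
    return a1 * P(1, x) + a3 * P(3, x) + a5 * P(5, x)

xs = [-1 + i / 500 for i in range(1001)]
taylor_err = max(abs(sin(pi * x) - taylor5(x)) for x in xs)
legendre_err = max(abs(sin(pi * x) - legendre5(x)) for x in xs)
print(taylor_err, legendre_err)
```

The Taylor error is worst at the endpoints, where the degree-5 Legendre approximation remains an order of magnitude more accurate.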