Section 2.3 Legendre series
Subsection 2.3.1 Legendre's equation
Physics and applied mathematics provide a wealth of differential equations, some of which are sufficiently common to carry special names. Notable among them is Legendre's equation
\[ (1 - x^2)\, y'' - 2x\, y' + c(c+1)\, y = 0\text{,} \]
where \(c\) is a parameter that can take on any real value. This equation has an ordinary point at \(x = 0\text{,}\) and so has series solutions of the form \(y = \ps\text{.}\) The solutions for different values of \(c\) have special properties, particularly when \(c\) is an integer.
In this section, we'll investigate the solutions and properties of the Legendre equation. First, note that the equation as written is not in our standard form. We'll need to rewrite it to get an estimate for the interval of convergence of our series solution (though it will be convenient to leave it in its original form when we attempt to solve it). In the form
\[ y'' - \frac{2x}{1 - x^2}\, y' + \frac{c(c+1)}{1 - x^2}\, y = 0\text{,} \]
we can see that \(p, q\) are analytic at the very least on the interval \((-1,1)\text{,}\) as the power series for \(\frac{1}{1 - x^2}\) has radius of convergence \(1\text{.}\)
Armed with the knowledge that our solutions exist, we can follow the usual technique to derive the solutions.
Checkpoint 2.3.1.
Derive the general solutions to the Legendre equation using recurrence relations.
In general, the solutions to Legendre's equation for a fixed constant \(c\) are of the form
\[ y = a_0 y_1(x) + a_1 y_2(x)\text{,} \]
with
\[ y_1(x) = 1 - \frac{c(c+1)}{2!} x^2 + \frac{(c-2)c(c+1)(c+3)}{4!} x^4 - \cdots\text{,} \]
\[ y_2(x) = x - \frac{(c-1)(c+2)}{3!} x^3 + \frac{(c-3)(c-1)(c+2)(c+4)}{5!} x^5 - \cdots\text{.} \]
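As a quick sanity check on this solution (a minimal Sage sketch of our own; the truncation degree \(M\) is an arbitrary choice), we can build \(y_1\) term by term from the recurrence \(a_{n+2} = \frac{(n-c)(n+c+1)}{(n+1)(n+2)} a_n\) and verify that it satisfies Legendre's equation up to the truncation degree:
```
# Build y_1 from the recurrence a_{n+2} = (n-c)(n+c+1)/((n+1)(n+2)) * a_n
# and check it against Legendre's equation up to the truncation degree M.
var('x c')
M = 10                        # arbitrary truncation degree
a = [1, 0]                    # a_0 = 1, a_1 = 0 picks out the even solution y_1
for n in range(M - 1):
    a.append((n - c)*(n + c + 1)/((n + 1)*(n + 2)) * a[n])
y1 = sum(a[n]*x^n for n in range(M + 1))
residual = (1 - x^2)*diff(y1, x, 2) - 2*x*diff(y1, x) + c*(c + 1)*y1
print(residual.expand())      # every term of degree below M - 1 cancels
```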
Subsection 2.3.2 Legendre polynomial
It turns out that the most important case of the Legendre equation occurs when the constant \(c\) is an integer, which we emphasize by calling it \(c=N\text{.}\) In that case, the recursion that determines the solution to the equation is
\[ a_{n+2} = \frac{(n-N)(n+N+1)}{(n+1)(n+2)}\, a_n\text{.} \]
Since \(N\) is fixed, the factor \(n - N\) vanishes when \(n = N\text{,}\) so \(a_{N+2} = 0\text{,}\) and moreover the subsequent terms in the recursion are zero as well:
\[ a_{N+2} = a_{N+4} = a_{N+6} = \cdots = 0\text{.} \]
That is, either \(y_1\) or \(y_2\) is a finite power series - a polynomial - depending on whether \(N\) is even or odd. These polynomials turn out to be quite useful.
Definition 2.3.2.
Let \(N\) be a nonnegative integer. The Legendre polynomial of degree \(N\), denoted \(P_N(x)\text{,}\) is the polynomial solution to
\[ (1 - x^2)\, y'' - 2x\, y' + N(N+1)\, y = 0\text{,} \]
normalized so that \(P_N(1) = 1\text{.}\)
Example 2.3.3.
The first few Legendre polynomials are easy to write down. Remember that each polynomial is normalized to have \(P_N(1) = 1\text{.}\)
\[ P_0(x) = 1, \qquad P_1(x) = x, \qquad P_2(x) = \frac{1}{2}\left(3x^2 - 1\right), \qquad P_3(x) = \frac{1}{2}\left(5x^3 - 3x\right)\text{.} \]
For larger values of \(N\text{,}\) we need a better method to produce these polynomials. Fortunately for us, computer algebra systems have functions that produce these polynomials very quickly. (The following is in the mathematical language Sage. Similar functions exist in MATLAB, Mathematica, Octave, python, etc.)
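One possible minimal Sage sketch uses the built-in legendre_P function:
```
# Generate the first several Legendre polynomials with Sage's built-in legendre_P.
x = var('x')
for N in range(6):
    print(N, legendre_P(N, x))
# Recall, for instance, that P_4(x) = (35x^4 - 30x^2 + 3)/8.
```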
If we wish to proceed by hand, there is an explicit formula for generating \(P_N(x)\) known as Rodrigues's formula:
\[ P_N(x) = \frac{1}{2^N N!} \frac{d^N}{dx^N}\left[\left(x^2 - 1\right)^N\right]\text{.} \]
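As a quick check (our own sketch, not part of the text), we can compare Rodrigues's formula with Sage's built-in polynomials for a few small degrees:
```
# Check Rodrigues's formula against Sage's built-in Legendre polynomials.
x = var('x')
def rodrigues(N):
    return diff((x^2 - 1)^N, x, N) / (2^N * factorial(N))
for N in range(1, 6):
    print(N, bool(rodrigues(N) == legendre_P(N, x)))   # expect True for each degree
```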
Subsection 2.3.3 Legendre polynomials and vector spaces
Here we reach an instance of one of the most important larger themes of mathematical analysis - we can view functions with nice properties as vectors sitting inside an infinite-dimensional vector space. With certain modifications, these spaces act very similarly to the familiar Euclidean spaces \(\R^n\text{.}\) One of the most important features of such spaces is that vectors can be decomposed, with respect to some inner product, into orthogonal pieces determined by a basis. For example, we can think of the Taylor series for a function as expressing that function in terms of the vectors \(1, x, x^2, \ldots\text{,}\) where the coefficients \(\frac{f^{(n)}(a)}{n!}\) are really the coordinates of the vector.
The continuous functions on the interval \([-1,1]\) are denoted \(C^0[-1,1]\text{.}\) (We're using symmetric intervals in this case because the Legendre polynomials are even or odd.) This space is an (infinite-dimensional) inner product space with the inner product defined as
\[ \ip{f}{g} = \int_{-1}^{1} f(x)\, g(x)\, dx\text{.} \]
Following the analogy with \(\R^n\text{,}\) we define two vectors in an inner product space to be orthogonal if \(\ip{f}{g} = 0\text{.}\) Beyond a notion of “angle”, the inner product also gives a way to measure length with the norm defined in this case by
\[ \norm{f} = \sqrt{\ip{f}{f}} = \left( \int_{-1}^{1} f(x)^2 \, dx \right)^{1/2}\text{.} \]
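For example (a small illustration of our own), the monomials \(x\) and \(x^2\) are orthogonal with respect to this inner product, as a quick Sage computation confirms:
```
# Illustrate the inner product and norm on C^0[-1,1] with two simple functions.
x = var('x')
f = x
g = x^2
print(integrate(f*g, x, -1, 1))    # <f, g> = 0, so f and g are orthogonal
print(integrate(f*f, x, -1, 1))    # ||f||^2 = 2/3
```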
Theorem 2.3.4.
The Legendre polynomials are mutually orthogonal; that is,
\[ \ip{P_m}{P_n} = \int_{-1}^{1} P_m(x)\, P_n(x)\, dx = 0 \quad \text{whenever } m \neq n\text{.} \]
Thus, the polynomials \(P_0, P_1, \ldots, P_n\) are an orthogonal basis for the space of polynomials of degree \(n\) or less.
Proof.
See Annin, Goode, p. 744.
Furthermore, one can show that for a fixed integer \(j\text{,}\)
\[ \norm{P_j}^2 = \int_{-1}^{1} P_j(x)^2 \, dx = \frac{2}{2j+1}\text{.} \]
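Both the orthogonality relation and this norm formula are easy to verify symbolically for small degrees; the following Sage sketch (an illustration, not a proof) checks degrees 0 through 4:
```
# Verify orthogonality and the norm formula for Legendre polynomials of small degree.
x = var('x')
for m in range(5):
    for n in range(5):
        val = integrate(legendre_P(m, x) * legendre_P(n, x), x, -1, 1)
        expected = 2/(2*m + 1) if m == n else 0
        assert bool(val == expected)
print("orthogonality and norms check out for degrees 0..4")
```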
One of the most important properties of an orthogonal basis for an inner product space is that generic vectors can be decomposed into orthogonal pieces. Suppose that we have the set of Legendre polynomials \(P_0, \ldots, P_N\) as an orthogonal basis for the polynomials of degree \(N\) or less. Then we can compute the expansion of a polynomial \(p\) of degree at most \(N\) in terms of Legendre polynomials by the formula
\[ p(x) = \sum_{i=0}^{N} a_i P_i(x), \qquad a_i = \frac{\ip{p}{P_i}}{\norm{P_i}^2}\text{.} \]
This should look familiar, as we are finding the coefficients \(a_i\) by using the vector projection formula \(\proj_v u = \frac{u\cdot v}{\norm{v}^2}\, v\text{.}\) Since we know that \(\norm{P_i}^2 = \frac{2}{2i + 1}\text{,}\) we get the specific expansion
\[ p(x) = \sum_{i=0}^{N} \frac{2i+1}{2} \left( \int_{-1}^{1} p(t)\, P_i(t)\, dt \right) P_i(x)\text{.} \]
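As a concrete illustration (our own example), expanding \(p(x) = x^3\) with this formula recovers \(x^3 = \frac{3}{5}P_1(x) + \frac{2}{5}P_3(x)\text{:}\)
```
# Expand p(x) = x^3 in Legendre polynomials using the projection formula.
x = var('x')
p = x^3
coeffs = [(2*i + 1)/2 * integrate(p * legendre_P(i, x), x, -1, 1) for i in range(4)]
print(coeffs)                           # [0, 3/5, 0, 2/5]
expansion = sum(coeffs[i] * legendre_P(i, x) for i in range(4))
print(bool(expansion.expand() == p))    # True: the expansion reproduces p exactly
```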
The real power of the expansion idea is that it holds for general functions, not just polynomials, under the additional assumption that \(f \in C^1(-1,1)\text{;}\) that is, both \(f, f'\) are continuous.
Theorem 2.3.5.
Suppose that \(f, f'\) are continuous on \((-1,1)\text{.}\) Then \(f\) has a Legendre series expansion on \((-1,1)\) given by
\[ f(x) = \sum_{i=0}^{\infty} a_i P_i(x)\text{,} \]
where each \(a_i\) is given by the vector projection
\[ a_i = \frac{\ip{f}{P_i}}{\norm{P_i}^2} = \frac{2i+1}{2} \int_{-1}^{1} f(x)\, P_i(x)\, dx\text{.} \]
It is not at all obvious that an infinite series of functions should converge, nor what it means for a series of functions to converge to another function. Mathematicians wrestled with these ideas for more than a century while establishing the basics of modern function theory. A proof of the theorem above will require you to consult an advanced textbook on spaces of functions. For now, accept the assertion that the theorem does indeed have a proof, and that mathematicians, physicists, and engineers were using these sorts of ideas for far longer than they could be rigorously justified.
You might wonder why we would prefer to work with Legendre series rather than the Taylor series that are so easy to compute. First, notice that Taylor series only exist for analytic functions, whereas Legendre series exist for functions that are merely continuously differentiable. Second, it turns out that working numerically with Taylor approximations is actually pretty terrible - the term used in numerical analysis is ill-conditioned - while Legendre expansions are significantly more stable in computations. In many cases, Legendre series also converge to the function being approximated much more quickly than Taylor series, which we can illustrate with the following example.
Example 2.3.6. Taylor and Legendre approximation of \(\sin \pi x\).
The function \(\sin \pi x\) completes one period on the interval \((-1, 1)\) and obviously has a continuous derivative. The Taylor expansion of \(\sin \pi x\) to degree 5 is given by the formula
\[ \sin \pi x \approx \pi x - \frac{(\pi x)^3}{3!} + \frac{(\pi x)^5}{5!}\text{.} \]
It is straightforward to compute the first few terms in the Legendre expansion as follows: first, note that since sine is odd, there will be no terms of even degree. That is, the degree-5 Legendre approximation has the form \(\sin \pi x \approx a_1 P_1(x) + a_3 P_3(x) + a_5 P_5(x)\text{.}\)
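The projection integrals are tedious by hand, so here is a short Sage sketch of our own that evaluates \(a_1, a_3, a_5\) exactly and numerically:
```
# Legendre coefficients of sin(pi*x) via the projection formula.
x = var('x')
f = sin(pi*x)
a = {i: (2*i + 1)/2 * integrate(f * legendre_P(i, x), x, -1, 1) for i in [1, 3, 5]}
for i in [1, 3, 5]:
    print(i, a[i], a[i].n(digits=5))    # exact value in terms of pi, then a decimal
```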
Let us compare these approximations.
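A comparison plot like the one described below can be produced with a Sage sketch along these lines (the plotting choices are our own):
```
# Plot sin(pi*x) against its degree-5 Taylor and Legendre approximations.
x = var('x')
f = sin(pi*x)
taylor5 = f.taylor(x, 0, 5)
legendre5 = sum((2*i + 1)/2 * integrate(f * legendre_P(i, x), x, -1, 1) * legendre_P(i, x)
                for i in range(6))
P = plot(f, (x, -1, 1), color='black', legend_label='sin(pi x)')
P += plot(taylor5, (x, -1, 1), color='red', linestyle='--', legend_label='Taylor, degree 5')
P += plot(legendre5, (x, -1, 1), color='blue', linestyle=':', legend_label='Legendre, degree 5')
P.show()
```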
Notice how much the Taylor approximation deviates toward the ends of the interval, while the degree-5 Legendre approximation is nearly indistinguishable from the sine function across the whole interval.