Section 2.7 Series as vectors (an introduction to \(L^2\))
One of the major themes of this section is that functions are secretly vectors. Even better, functions with some level of regularity are vectors in an inner product space, which means that we can use basic ideas from linear algebra to understand them (once we know which space they are in). The key idea in all of this is that the integral is really just an infinite sum indexed by the input \(x\text{,}\) and so the dot product is really just a finite dimensional version of an integral (in fact, there are areas of math where this is made explicit, as in probability theory).
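The slogan "an integral is an infinite sum, and a dot product is a finite dimensional integral" can be seen numerically: sample two functions on a fine grid, and the Riemann sum for \(\int f(x) g(x) \, dx\) is literally a dot product of the sample vectors scaled by the step size. Here is a quick sketch of that idea (the choice of \(\sin\text{,}\) \(\cos\text{,}\) and the interval \([0,\pi/2]\) is arbitrary):

```python
import numpy as np

# Riemann sum view: the integral of f*g is (approximately) the dot product
# of the sampled vectors, scaled by the step size dx.
f = np.sin
g = np.cos
x = np.linspace(0, np.pi / 2, 100_001)
dx = x[1] - x[0]

riemann = np.dot(f(x), g(x)) * dx   # "inner product" of the sample vectors
exact = 0.5                         # ∫_0^{π/2} sin(x) cos(x) dx = 1/2
print(riemann)                      # ≈ 0.5
```

As the grid gets finer, the scaled dot product converges to the integral, which is exactly the sense in which the inner product of functions generalizes the dot product of vectors.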
With Legendre polynomials and Bessel functions, we've demonstrated that \(C^1\) functions live inside an infinite dimensional inner product space. Just as in finite dimensional linear algebra, we used the orthogonal expansion theorem to write \(f\) as
\[
f = \sum_{n} \frac{\langle f, b_n \rangle}{\langle b_n, b_n \rangle} \, b_n
\]
for some orthogonal basis \(b_n\text{.}\) We pick a set of basis functions based on the particular problem we are trying to understand. For example, if we're doing numerical estimation of a function, we can choose \(b_n\) to be the Legendre polynomials and get better approximations than the basic Taylor series gives. If we're studying a system associated with a Bessel equation, we can express a function in terms of the solutions of that equation to understand how wave behavior is driving the function.
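As a sketch of the numerical estimation claim, we can compute a degree-5 Legendre expansion of a function and compare it against the degree-5 Taylor polynomial. The target \(e^x\) and the degree are arbitrary choices; the coefficients come from the expansion theorem with \(\langle P_n, P_n \rangle = 2/(2n+1)\text{,}\) and the integrals are evaluated with Gauss-Legendre quadrature from NumPy:

```python
import math

import numpy as np
from numpy.polynomial import legendre as L

f = np.exp
N = 5  # degree of both approximations (an arbitrary choice)

# Gauss-Legendre quadrature nodes and weights on [-1, 1]
x, w = L.leggauss(40)

# c_n = <f, P_n> / <P_n, P_n>, using <P_n, P_n> = 2/(2n + 1)
coeffs = [(2 * n + 1) / 2 * np.sum(w * f(x) * L.legval(x, [0] * n + [1]))
          for n in range(N + 1)]

t = np.linspace(-1, 1, 201)
legendre_approx = L.legval(t, coeffs)

# degree-5 Taylor polynomial of e^x centered at 0, for comparison
taylor_approx = sum(t ** k / math.factorial(k) for k in range(N + 1))

print(np.max(np.abs(legendre_approx - f(t))))  # worst-case Legendre error
print(np.max(np.abs(taylor_approx - f(t))))    # worst-case Taylor error
```

The Legendre expansion spreads its error across the whole interval, while the Taylor polynomial is excellent near \(0\) and much worse near the endpoints, so its maximum error is roughly an order of magnitude larger here.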
The theoretical backdrop for all of this is the theory of Hilbert spaces, which is where questions of convergence are first addressed. One of the most important Hilbert spaces is called \(L^2[a,b]\text{:}\) the functions \(f\) defined on an interval \([a,b]\) with finite energy. A function is in \(L^2[a,b]\) if
\[
\int_a^b |f(x)|^2 \, dx < \infty
\]
(where \(dx\) is a more powerful type of differential called Lebesgue measure). But finite energy says precisely that \(\langle f, f \rangle\) is finite, where the inner product is the one we considered with the Legendre polynomials:
\[
\langle f, g \rangle = \int_a^b f(x) \, g(x) \, dx.
\]
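We can check this inner product numerically: distinct Legendre polynomials are orthogonal under it, \(\langle P_n, P_n \rangle = 2/(2n+1)\text{,}\) and a continuous function like \(e^x\) has finite energy on \([-1,1]\text{.}\) A minimal sketch using NumPy's Legendre helpers (the specific functions tested are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import legendre as L

def inner(f, g, nodes=60):
    # <f, g> = integral of f(x) g(x) over [-1, 1], via Gauss-Legendre quadrature
    x, w = L.leggauss(nodes)
    return float(np.sum(w * f(x) * g(x)))

def P(n):
    # the degree-n Legendre polynomial P_n
    return lambda x: L.legval(x, [0] * n + [1])

print(inner(P(2), P(3)))      # ≈ 0: distinct Legendre polynomials are orthogonal
print(inner(P(3), P(3)))      # ≈ 2/7, i.e. 2/(2n + 1) with n = 3
print(inner(np.exp, np.exp))  # finite, so e^x has finite energy on [-1, 1]
```

The quadrature is exact for polynomial integrands of this degree, so the first two lines reproduce the orthogonality relations to machine precision.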
Taylor polynomials and Legendre polynomials are outstanding tools in \(L^2\) if we're interested in the local behavior of a function (say its zeros or extreme points). But \(L^2\) also contains lots of functions without that kind of nice behavior, and those can be exactly the functions we care about. For example, we might have an equation that models a circuit, and we want to feed in a square wave to see what the response might be.
Now, we can come up with a Legendre polynomial expansion for this function (at least on the piecewise bits), but it isn't going to really capture what's interesting about the square wave: a square wave is periodic. If only there were a basis of functions for \(L^2\) that was orthogonal and periodic and that could be used to model functions like the square wave. Such a basis would be extremely useful for solving differential equations involving oscillations. If only....
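To make that complaint concrete, here is a sketch that expands a simple square wave in Legendre polynomials (the single-jump signal \(\operatorname{sign}(x)\) on \([-1,1]\) is a stand-in choice, and the coefficients again come from the expansion theorem):

```python
import numpy as np
from numpy.polynomial import legendre as L

# One jump of a square wave on [-1, 1]; the signal itself is a stand-in choice.
square = lambda x: np.sign(x)

x, w = L.leggauss(500)          # many quadrature nodes: the integrand jumps
t = np.linspace(-1, 1, 2001)

errs = {}
for N in (5, 10, 20):
    # c_n = <square, P_n> / <P_n, P_n>, as in the expansion theorem
    coeffs = [(2 * n + 1) / 2 * np.sum(w * square(x) * L.legval(x, [0] * n + [1]))
              for n in range(N + 1)]
    l2_err = np.sqrt(np.sum(w * (square(x) - L.legval(x, coeffs)) ** 2))
    max_err = np.max(np.abs(square(t) - L.legval(t, coeffs)))
    errs[N] = (float(l2_err), float(max_err))
    print(N, round(float(l2_err), 3), round(float(max_err), 3))
```

The \(L^2\) error shrinks only slowly as the degree grows, and the worst pointwise error near the jump never goes away: the polynomial basis is fighting the discontinuity rather than matching the wave's periodic character.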