Section 2.1 Inner products and \(\ell^2\)
Recall that the inner product on \(\C^n\) is the familiar expression
\begin{equation*}
\ip{x}{y} = \sum_{i=1}^n \cc y_i x_i,
\end{equation*}
where \(x = (x_1, \ldots, x_n)\) and \(y = (y_1, \ldots, y_n)\text{.}\) How can this be extended to infinite dimensions?
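Before pursuing that question, here is a small numerical sketch in NumPy (the vectors below are arbitrary illustrations, not taken from the text). The main point is the conjugation convention: in this text the second argument is conjugated, and NumPy’s vdot conjugates its first argument, so \(\ip{x}{y}\) corresponds to vdot(y, x).
\begin{verbatim}
import numpy as np

# Arbitrary example vectors in C^3 (illustrative only).
x = np.array([1 + 1j, 2, 3j])
y = np.array([1j, 1 - 1j, 2])

# np.vdot conjugates its FIRST argument, so np.vdot(y, x) = sum_i conj(y_i) x_i,
# which is <x, y> in this text's convention.
print(np.vdot(y, x))
print(np.sum(np.conj(y) * x))  # same value, computed termwise
\end{verbatim}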
Definition 2.1.1.
For a general complex vector space \(V\text{,}\) an inner product is a map \(\ip{\cdot}{\cdot}: V \times V \to \C\) satisfying the following properties for all \(x, y, z \in V\) and scalars \(c \in \C\text{:}\)
\(\ip{x}{y} = \cc{\ip{y}{x}}\text{;}\)
\(c\ip{x}{y} = \ip{cx}{y}\text{;}\)
\(\ip{x + y}{z} = \ip{x}{z} + \ip{y}{z}\text{;}\)
\(\ip{x}{x} > 0\) if \(x \neq 0\text{.}\)
An inner product space is a pair consisting of a complex vector space \(V\) and an inner product \(\ip{\cdot}{\cdot}\) on \(V\text{.}\) An inner product space can also be called a “pre-Hilbert space”.
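For readers who like to experiment, the following NumPy sketch spot-checks the four properties of Definition 2.1.1 for the standard inner product on \(\C^4\) with randomly chosen vectors (this is only a finite numerical check, not a proof; the vectors and the scalar \(c\) are arbitrary choices).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def ip(x, y):
    # The text's convention: <x, y> = sum_i conj(y_i) x_i.
    return np.vdot(y, x)

# Random vectors in C^4 and an arbitrary scalar (illustrative only).
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
c = 2 - 3j

print(np.isclose(ip(x, y), np.conj(ip(y, x))))        # (1) conjugate symmetry
print(np.isclose(c * ip(x, y), ip(c * x, y)))         # (2) homogeneity in the first slot
print(np.isclose(ip(x + y, z), ip(x, z) + ip(y, z)))  # (3) additivity in the first slot
print(ip(x, x).real > 0)                              # (4) positivity for x != 0
\end{verbatim}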
Checkpoint 2.1.2.
Show that the map
\begin{equation}
\ip{f}{g} = \int_0^1 f(t)\cc g(t) \, \dd t\tag{2.1.1}
\end{equation}
defines an inner product on \(C[0,1]\text{,}\) the complex vector space of all continuous complex-valued functions on the interval \([0,1]\) with pointwise addition and scalar multiplication.
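A numerical experiment can make (2.1.1) feel concrete before attempting the checkpoint. The sketch below approximates the integral by a trapezoid sum for two arbitrary test functions (the grid size and the functions \(f(t) = e^{2\pi i t}\) and \(g(t) = t\) are illustrative choices; the computation is of course not a solution of the exercise).
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 1.0, 10001)

def ip(f, g):
    # Approximate <f, g> = integral_0^1 f(t) conj(g(t)) dt by the trapezoid rule.
    vals = f(t) * np.conj(g(t))
    return np.sum((vals[:-1] + vals[1:]) / 2) * (t[1] - t[0])

f = lambda s: np.exp(2j * np.pi * s)   # f(t) = e^{2 pi i t}
g = lambda s: s + 0j                   # g(t) = t

print(ip(f, g))                                 # some complex number
print(np.isclose(ip(f, g), np.conj(ip(g, f))))  # conjugate symmetry, numerically
print(ip(f, f))                                 # approximately 1, real and positive
\end{verbatim}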
Checkpoint 2.1.3.
Recall that the
trace of a square matrix is the sum of its diagonal entries. Show that the map
\begin{equation*}
\ip{A}{B} = \trace(B\ad A)
\end{equation*}
is an inner product on the vector space \(\C^{m \times n}\) of \(m \times n\) matrices with complex entries, where \(\ad\) denotes the conjugate transpose.
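One useful observation is that \(\trace(B\ad A)\) equals the entrywise sum \(\sum_{i,j} \cc B_{ij} A_{ij}\text{,}\) that is, the \(\C^n\)-style inner product applied to the matrix entries. The NumPy sketch below illustrates this numerically for random \(3 \times 2\) matrices (the sizes and matrices are arbitrary illustrations, and the check is not a proof).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary 3-by-2 complex matrices (illustrative only).
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

ip_trace = np.trace(B.conj().T @ A)        # trace(B* A)
ip_entrywise = np.sum(np.conj(B) * A)      # sum_{i,j} conj(B_ij) A_ij

print(np.isclose(ip_trace, ip_entrywise))  # True: the two expressions agree
print(np.trace(A.conj().T @ A).real > 0)   # <A, A> > 0 for A != 0
\end{verbatim}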
The basic properties in
Definition 2.1.1 can be used to derive the following statements, which show that a complex inner product is
linear in the first argument and
conjugate linear in the second.
Theorem 2.1.4.
Let \(V\) be an inner product space. For any \(x, y, z \in V\) and \(c \in \C\text{,}\)
\(\ip{x}{y+z} = \ip{x}{y} + \ip{x}{z}\text{;}\)
\(\displaystyle \ip{x}{c y} = \cc{c}\ip{x}{y};\)
\(\ip{x}{0} = 0 = \ip{0}{x}\text{;}\)
if \(\ip{x}{z} = \ip{y}{z}\) for all \(z \in V\text{,}\) then \(x = y\text{.}\)
Proof.
(1): Using conjugate symmetry and additivity in the first argument,
\begin{align*}
\ip{x}{y+z} \amp = \cc{\ip{y+z}{x}}\\
\amp = \cc{(\ip{y}{x} + \ip{z}{x})}\\
\amp = \cc{\ip{y}{x}} + \cc{\ip{z}{x}}\\
\amp = \ip{x}{y} + \ip{x}{z}
\end{align*}
Parts (2) and (3) are left as an exercise.
(4): If \(\ip{x}{z} = \ip{y}{z}\text{,}\) then
\begin{align*}
0 \amp = \ip{x}{z} - \ip{y}{z}\\
\amp = \ip{x}{z} + \ip{-y}{z}\\
\amp = \ip{x-y}{z}.
\end{align*}
Since this holds for every \(z \in V\text{,}\) it holds in particular for \(z = x-y\text{.}\) But then
\(\ip{x-y}{x-y} = 0\text{,}\) so by
Definition 2.1.1 (4), it must be that
\(x - y = 0\) and so
\(x = y\text{.}\)
Checkpoint 2.1.5.
Prove parts (2) and (3) of Theorem 2.1.4.
To extend the inner product from \(\C^n\) to an infinite-dimensional analogue of “infinite vectors”, we might naively propose the inner product
\begin{equation}
\ip{x}{y} = \sum_{i=1}^\infty \cc y_i x_i,\tag{2.1.2}
\end{equation}
though this leaves the question of what space should go with this definition. One major concern is that an infinite sum need not converge, and we certainly want the inner product to be defined for every pair of vectors in the space. We can’t use the obvious candidate, the space \(\C^{\mathbb N} = \C \times \C \times \ldots\text{,}\) since it contains many pairs of badly behaved vectors. (For example, if \(x = (1, 1, \ldots)\) and \(y = (1, 1, \ldots)\text{,}\) then \(\ip{x}{y} = \sum_{i=1}^\infty 1\text{,}\) which diverges.)
One solution is to restrict attention to vectors whose coordinate sequences are well behaved in infinite sums. A very nice subspace of \(\C^\mathbb{N}\) is “little l-two”, denoted \(\ell^2\text{.}\)
Definition 2.1.6.
\(\ell^2\) is the complex vector space of all square-summable complex sequences \(x = (x_i)_{i=1}^\infty\text{,}\) with componentwise addition and scalar multiplication; that is,
\begin{equation*}
x \in \ell^2 \iff \sum_{i=1}^\infty \abs{x_i}^2 \lt \infty.
\end{equation*}
\(\ell^2\) is an inner product space with the inner product given by
\begin{equation*}
\ip{x}{y} = \sum_{i=1}^\infty \cc y_i x_i.
\end{equation*}
This series converges absolutely whenever \(x, y \in \ell^2\text{,}\) since \(\abs{\cc y_i x_i} \le \tfrac{1}{2}\left(\abs{x_i}^2 + \abs{y_i}^2\right)\text{.}\)
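As a final numerical sketch (the particular sequences \(x_i = 1/i\) and \(y_i = 1/i^2\) are arbitrary members of \(\ell^2\text{,}\) chosen only for illustration), compare the partial sums of the inner product for square-summable input against the all-ones example above.
\begin{verbatim}
import numpy as np

n = np.arange(1, 100001, dtype=float)

# Two square-summable sequences (arbitrary illustrative choices).
x = 1.0 / n
y = 1.0 / n**2

# Partial sums of <x, y> = sum_i conj(y_i) x_i settle down quickly...
partial = np.cumsum(np.conj(y) * x)
print(partial[99], partial[9999], partial[-1])      # nearly identical values

# ...whereas for the all-ones sequence they grow without bound.
ones = np.ones_like(n)
diverging = np.cumsum(ones * ones)
print(diverging[99], diverging[9999], diverging[-1])
\end{verbatim}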