Section 1.3 Basis and coordinates
Let \(V\) be a vector space over a field \(\F\text{.}\) Recall that a (finite) set of vectors \(S = \{v_i\}_{i \in \mathcal I} \subset V\) is linearly independent if the only solution to the equation
\begin{equation}
0 = \sum_{\mathcal I} c_i v_i\tag{1.3.1}
\end{equation}
is the trivial one, \(c_i = 0\) for all \(i\text{.}\)
A set \(S\) of vectors in \(V\) is said to span \(V\) if every vector in \(V\) can be realized as a linear combination of vectors in \(S\text{.}\) That is, given \(v \in V\text{,}\) there exist coefficients \(c_i\) so that
\begin{equation*}
v = \sum_{\mathcal I} c_i v_i.
\end{equation*}
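For a concrete example (illustrative, not drawn from the text above): in \(\R^2\text{,}\) the set \(S = \{(1,0), (1,1)\}\) is linearly independent, since
\begin{equation*}
0 = c_1 (1,0) + c_2 (1,1) = (c_1 + c_2, c_2)
\end{equation*}
forces \(c_2 = 0\) and then \(c_1 = 0\text{;}\) it also spans \(\R^2\text{,}\) since any \((a,b)\) can be written as \((a-b)(1,0) + b(1,1)\text{.}\)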
A basis \(\mathcal V\) for \(V\) is a subset of \(V\) such that \(\mathcal V\) is linearly independent and \(\mathcal V\) spans \(V\text{.}\) It is a major result that every vector space has a basis. The full result requires Zorn’s Lemma or another equivalent of the axiom of choice and will not be proven here. Our interest is in modeling vector spaces that carry the logic and structure of Euclidean space. The dimension of \(V\) is the number of elements in a basis \(\mathcal V\text{;}\) this is well defined, since any two bases of \(V\) have the same cardinality. If a basis has a finite number of elements, say \(n\text{,}\) then \(V\) is called finite dimensional. In particular (and clearly providing motivation for the definition), \(\dim \R^n = n\text{.}\)
Suppose that \(V\) is a finite dimensional vector space with a basis \(\mathcal V = \{v_i\}_{i \in \mathcal I}\text{.}\) Let \(v\) be a vector in \(V\text{.}\) Then the coordinates of \(v\) with respect to \(\mathcal V\) are the constants \(c_i\) so that \(v = \sum_{\mathcal I} c_i v_i\text{.}\) These coordinates are unique once we have fixed the basis \(\mathcal V\text{:}\) if \(v\) had two different representations, subtracting them would produce a nontrivial linear dependence among the \(v_i\text{.}\) That is, we have a bijective correspondence between the vectors \(v \in V\) and the coordinate representations \(\bbm c_1 \\ \vdots \\ c_n \ebm \in \F^n\text{.}\) In \(\F^n\text{,}\) when the basis is orthonormal, the coordinate representation of a vector is straightforward to compute using the inner product.
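To make the coordinatization concrete, here is a minimal numpy sketch (the basis and the vector are illustrative choices, not taken from the text): writing the basis vectors as the columns of a matrix \(B\text{,}\) the coordinates \(c\) of \(v\) satisfy \(Bc = v\text{,}\) a linear system we can solve directly.
\begin{verbatim}
import numpy as np

# Basis of R^2, written as the columns of B: (1,0) and (1,1).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 2.0])

# The coordinates c of v satisfy B @ c = v.
c = np.linalg.solve(B, v)
print(c)  # [1. 2.], since (3,2) = 1*(1,0) + 2*(1,1)
\end{verbatim}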
Theorem 1.3.1.
Let \(e_1, \ldots, e_m\) be an orthonormal basis for \(\F^m\) and \(v \in \F^m\text{.}\) Then the \(i\)th coordinate of \(v\) with respect to the basis is \(\ip{v}{e_i}\text{,}\) and the expansion of \(v\) with respect to the basis is
\begin{equation*}
v = \sum_{i=1}^m \ip{v}{e_i} e_i.
\end{equation*}
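The theorem translates directly into computation. Here is a minimal numpy sketch, with an illustrative orthonormal basis of \(\R^2\) (not taken from the text):
\begin{verbatim}
import numpy as np

# An orthonormal basis of R^2: the standard basis rotated by 45 degrees.
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([-1.0, 1.0]) / np.sqrt(2)

v = np.array([3.0, 1.0])

# By Theorem 1.3.1, the i-th coordinate of v is the inner product <v, e_i>.
c1 = v @ e1
c2 = v @ e2

# Reassembling v from its coordinates confirms the expansion.
print(np.allclose(v, c1 * e1 + c2 * e2))  # True
\end{verbatim}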
Furthermore, we can use the coordinate representation to write representing matrices for linear functions \(T:V \to W\text{.}\) Suppose that \(V, W\) are vector spaces of dimension \(m, n\) respectively over \(\F\text{.}\) Then the diagram
\begin{equation*}
\begin{array}{ccc}
V & \stackrel{T}{\longrightarrow} & W \\
i \downarrow & & \downarrow i \\
\F^m & \stackrel{A}{\longrightarrow} & \F^n
\end{array}
\end{equation*}
commutes, where \(A\) is the \(n \times m\) matrix that represents \(T\) and \(i\) is the natural bijection, the coordinatization, between \(V, W\) and \(\F^m, \F^n\) respectively. That is, \(i(Tv) = A\, i(v)\) for every \(v \in V\text{.}\) We should note that matrix multiplication is defined so that, if \(S: W \to U\) is a second linear function represented by a matrix \(B\) (with \(\dim U = k\)), the diagram
\begin{equation*}
V \stackrel{T}{\longrightarrow} W \stackrel{S}{\longrightarrow} U,
\qquad
\F^m \stackrel{A}{\longrightarrow} \F^n \stackrel{B}{\longrightarrow} \F^k
\end{equation*}
reduces to the diagram
\begin{equation*}
V \stackrel{S \circ T}{\longrightarrow} U,
\qquad
\F^m \stackrel{BA}{\longrightarrow} \F^k.
\end{equation*}
That is, the representing matrix of a composition is the product of the representing matrices of the functions.
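A quick numerical check of this fact, as a numpy sketch with randomly chosen representing matrices (the maps and dimensions here are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Representing matrices, in coordinates, for hypothetical linear maps
# T : R^3 -> R^2 and S : R^2 -> R^4.
A = rng.standard_normal((2, 3))  # represents T
B = rng.standard_normal((4, 2))  # represents S

x = rng.standard_normal(3)  # coordinates of a vector in the domain of T

# Applying T and then S agrees with applying the single matrix BA.
print(np.allclose(B @ (A @ x), (B @ A) @ x))  # True
\end{verbatim}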
Any basis of an inner product space can be replaced with a basis of orthonormal vectors spanning the same space; the algorithm for producing an orthonormal basis from a given basis is called the Gram-Schmidt process.
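To illustrate, here is a minimal sketch of the classical Gram-Schmidt process in numpy (assuming the input vectors are linearly independent; a numerically robust implementation would re-orthogonalize or use a QR factorization instead):
\begin{verbatim}
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize linearly independent vectors."""
    ortho = []
    for v in vectors:
        w = np.array(v, dtype=float)
        # Subtract the projection of w onto each basis vector built so far.
        for e in ortho:
            w = w - (w @ e) * e
        ortho.append(w / np.linalg.norm(w))
    return ortho

# Example: orthonormalize the basis {(1,0), (1,1)} of R^2.
for e in gram_schmidt([(1.0, 0.0), (1.0, 1.0)]):
    print(e)  # [1. 0.] then [0. 1.]
\end{verbatim}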