
Section 1.3 Basis and coordinates

Let \(V\) be a vector space over a field \(\F\text{.}\) Recall that a (finite) set of vectors \(S = \{v_i\}_{i \in \mathcal I} \subset V\) is linearly independent if the equation
\begin{equation} 0 = \sum_{\mathcal I} c_i v_i\tag{1.3.1} \end{equation}
admits only the trivial solution \(c_i = 0\) for all \(i\text{.}\)
A set \(S\) of vectors in \(V\) is said to span \(V\) if every vector in \(V\) can be realized as a linear combination of vectors in \(S\text{.}\) That is, given \(v \in V\text{,}\) there exist coefficients \(c_i\) so that
\begin{equation*} v = \sum_{\mathcal I} c_i v_i. \end{equation*}
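These two conditions can be checked concretely in small cases. The following sketch (plain Python, with a hypothetical helper `coords_2d`, assuming \(V = \R^2\)) solves for the coefficients \(c_1, c_2\) expressing a vector in terms of the set \(\{(1,0), (1,1)\}\text{;}\) a nonzero determinant is exactly linear independence of the two vectors:

```python
# Express v = c1*b1 + c2*b2 in R^2 by solving the 2x2 linear system
# with Cramer's rule; det = 0 means b1, b2 are linearly dependent.
def coords_2d(b1, b2, v):
    det = b1[0]*b2[1] - b2[0]*b1[1]          # determinant of [b1 b2]
    if det == 0:
        raise ValueError("b1, b2 are linearly dependent")
    c1 = (v[0]*b2[1] - b2[0]*v[1]) / det     # Cramer's rule
    c2 = (b1[0]*v[1] - v[0]*b1[1]) / det
    return c1, c2

c1, c2 = coords_2d((1, 0), (1, 1), (3, 2))
print(c1, c2)  # 1.0 2.0, since (3,2) = 1*(1,0) + 2*(1,1)
```

Since \(\{(1,0), (1,1)\}\) is linearly independent and spans \(\R^2\text{,}\) every vector has exactly one such expression.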
A basis \(\mathcal V\) for \(V\) is a subset of \(V\) so that \(\mathcal V\) is linearly independent and \(\mathcal V\) spans \(V\text{.}\) It is a major result that every vector space has a basis. The full result requires the invocation of Zorn’s Lemma (en.wikipedia.org/wiki/Zorn%27s_lemma) or other equivalents of the axiom of choice (en.wikipedia.org/wiki/Axiom_of_choice) and will not be proven here. (A nice argument can be found at www.math.lsa.umich.edu/~kesmith/infinite.pdf.) Our interest is in modeling vector spaces that carry the logic and structure of Euclidean space. The dimension of \(V\) is the cardinality of a basis \(\mathcal V\text{;}\) this is well defined because any two bases of \(V\) have the same cardinality. If the basis has a finite number of elements, say \(n\text{,}\) then \(V\) is called finite dimensional. In particular (and clearly providing motivation for the definition), \(\dim \R^n = n\text{.}\)
Suppose that \(V\) is a finite dimensional vector space with a basis \(\mathcal V\text{.}\) Let \(v\) be a vector in \(V\text{.}\) Then the coordinates of \(v\) with respect to \(\mathcal V\) are the constants \(c_i\) so that \(v = \sum_{\mathcal I} c_i v_i\text{.}\) These coordinates are unique once we have fixed a basis \(\mathcal V\text{:}\) if two coordinate lists represented the same vector, their difference would give a nontrivial dependence among the basis vectors. That is, we have a bijective correspondence between the vectors \(v \in V\) and the coordinate representations \(\bbm c_1 \\ \vdots \\ c_n \ebm \in \F^n\text{.}\) In \(\F^n\text{,}\) the coordinates of a vector with respect to the standard basis are straightforward to compute using the dot product: \(c_i = v \cdot e_i\text{.}\)
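The dot product computation is easy to carry out by hand or by machine. A minimal sketch in plain Python (the helpers `dot` and `standard_basis` are illustrative names, not library functions) recovers the coordinates of a vector in \(\R^3\) with respect to the standard basis:

```python
# Coordinates in F^n with respect to the standard basis via dot products:
# c_i = v . e_i picks out the i-th entry of v.
def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def standard_basis(n, i):
    # e_i: 1 in position i, 0 elsewhere
    return [1 if j == i else 0 for j in range(n)]

v = [4, -1, 7]
coords = [dot(v, standard_basis(3, i)) for i in range(3)]
print(coords)  # [4, -1, 7]
```

For a basis other than the standard one, computing coordinates generally requires solving a linear system, as in the earlier two-dimensional example.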
Furthermore, we can use the coordinate representation to write representing matrices for linear functions \(T:V \to W\text{.}\) Suppose that \(V, W\) are vector spaces of dimension \(m,n\) respectively over \(\F\text{.}\) Then
\begin{equation*} i_W(Tv) = A \, i_V(v), \end{equation*}
where \(A\) is the \(n \times m\) matrix that represents \(T\) and \(i_V, i_W\) are the natural bijections - the coordinatizations - between \(V, W\) and \(\F^m, \F^n\) respectively. We should note that matrix multiplication is defined so that composition of linear functions corresponds to multiplication of their representing matrices: if \(A\) represents \(T\) and \(B\) represents a linear function \(S\) out of \(W\text{,}\) then \(BA\) represents \(S \circ T\text{.}\) That is, the representing matrix of a composition is the product of the representing matrices of the functions.
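This correspondence can be verified numerically. The sketch below (plain Python, with illustrative helpers `mat_vec` and `mat_mul`, assuming \(V = W = \R^2\) with standard coordinates) checks that applying \(B\) after \(A\) to a coordinate vector agrees with applying the single product matrix \(BA\text{:}\)

```python
# If A represents T and B represents S, then the product BA represents
# the composition S o T: applying A then B equals applying BA once.
def mat_vec(A, v):
    return [sum(a*x for a, x in zip(row, v)) for row in A]

def mat_mul(B, A):
    return [[sum(B[i][k]*A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))]
            for i in range(len(B))]

A = [[1, 2], [0, 1]]              # represents T
B = [[2, 0], [1, 1]]              # represents S
v = [1, 3]
lhs = mat_vec(B, mat_vec(A, v))   # S(T(v)) in coordinates
rhs = mat_vec(mat_mul(B, A), v)   # (BA) v
print(lhs, rhs)  # [14, 10] [14, 10]
```

The agreement of the two computations is exactly the content of the commuting diagram for compositions.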
Any basis of a vector space equipped with an inner product can be replaced with an equivalent basis of orthonormal vectors - the algorithm for creating an orthonormal basis from a basis is called the Gram-Schmidt process (en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process).
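A minimal sketch of the Gram-Schmidt process in plain Python (real inner product spaces only, with the illustrative helper names `dot` and `gram_schmidt`): each vector has its projections onto the previously constructed vectors subtracted off, then is normalized.

```python
from math import sqrt

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Orthonormalize a list of linearly independent real vectors."""
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            c = dot(w, u)                          # projection coefficient
            w = [wi - c*ui for wi, ui in zip(w, u)]  # subtract projection
        norm = sqrt(dot(w, w))
        ortho.append([wi / norm for wi in w])      # normalize
    return ortho

q = gram_schmidt([[1, 1, 0], [1, 0, 1]])
# q[0] and q[1] are orthogonal unit vectors spanning the same plane.
```

Note the sketch subtracts projections from the running vector `w` rather than from the original `v` (the "modified" variant), which is numerically better behaved in floating point.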