V = {(4 1 2); (1 0 -3); (5 1 -2); (-1 0 2)} U = {(2 -5 9 -6); (-2 -2 9 1); (-1 -5 0 -6); (1 4 -27 -2)}
If V is a basis, there is exactly one linear transformation that maps the vectors of V to the vectors of U. If the vectors of V are linearly independent but do not form a basis, there are infinitely many such linear transformations. If the vectors of V are linearly dependent, there may be none, exactly one, or infinitely many, depending on the vectors of U. All these problems can be solved either algebraically or with matrices; here the matrix approach is used, since it is the simpler of the two.
Write the matrix whose columns are the vectors of V.
┌              ┐
│  4  1  5 -1  │
│  1  0  1  0  │
│  2 -3 -2  2  │
└              ┘
Transform the matrix to row echelon form.
r1 <———> r2

┌              ┐
│  1  0  1  0  │
│  4  1  5 -1  │
│  2 -3 -2  2  │
└              ┘

r2 <———> r2 - 4•r1
r3 <———> r3 - 2•r1

┌              ┐
│  1  0  1  0  │
│  0  1  1 -1  │
│  0 -3 -4  2  │
└              ┘

r3 <———> r3 + 3•r2

┌              ┐
│  1  0  1  0  │
│  0  1  1 -1  │
│  0  0 -1 -1  │
└              ┘
Determine the rank.
Rank(V) = 3
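As a numerical cross-check (not part of the original worked solution), the matrix setup and rank computation above can be reproduced with NumPy; the variable names are illustrative:

```python
import numpy as np

# Columns are the vectors of V.
V = np.array([[ 4,  1,  5, -1],
              [ 1,  0,  1,  0],
              [ 2, -3, -2,  2]])

# Rank 3 with 4 columns means the columns are linearly dependent.
rank = np.linalg.matrix_rank(V)
print(rank)  # 3
```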
Solution.
V is a linearly dependent set of vectors.
Select the pivot column of each step of the row reduction.
Column indexes = {1, 2, 3}
Solution.
A = {(4 1 2); (1 0 -3); (5 1 -2)}
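The pivot columns can also be recovered programmatically. A simple sketch (assuming NumPy; a greedy scan that keeps each column that raises the rank reproduces the pivot columns found by row reduction):

```python
import numpy as np

V = np.array([[ 4,  1,  5, -1],
              [ 1,  0,  1,  0],
              [ 2, -3, -2,  2]])

# Keep a column only if it is independent of the columns already kept.
pivots = []
for j in range(V.shape[1]):
    cols = pivots + [j]
    if np.linalg.matrix_rank(V[:, cols]) == len(cols):
        pivots.append(j)

print(pivots)  # [0, 1, 2]  (columns 1, 2, 3 in 1-based indexing)
```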
Form the subset of the vectors of V that are not in the linearly independent subset A. Let's call it C.
C = {(-1 0 2)}
For each vector in C, check that the coefficients obtained when expressing it as a linear combination of the vectors of A are equal to the coefficients obtained when expressing its image as a linear combination of the images of the vectors of A, in the same order.
f(-1 0 2) = (1 4 -27 -2)

(-1 0 2) = -1•(4 1 2) - 2•(1 0 -3) + 1•(5 1 -2)
f(-1 0 2) = -1•f(4 1 2) - 2•f(1 0 -3) + 1•f(5 1 -2)
(1 4 -27 -2) = -1•(2 -5 9 -6) - 2•(-2 -2 9 1) + 1•(-1 -5 0 -6)
(1 4 -27 -2) = (1 4 -27 -2)
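This consistency check can be verified numerically by solving for the coefficients instead of finding them by hand (a sketch, assuming NumPy; names are illustrative):

```python
import numpy as np

# Independent vectors of A as columns.
A = np.array([[ 4,  1,  5],
              [ 1,  0,  1],
              [ 2, -3, -2]], dtype=float)
# Their images as columns.
images_A = np.array([[ 2, -2, -1],
                     [-5, -2, -5],
                     [ 9,  9,  0],
                     [-6,  1, -6]], dtype=float)

v = np.array([-1, 0, 2], dtype=float)            # the vector of C
image_v = np.array([1, 4, -27, -2], dtype=float) # its prescribed image

# Coefficients of v as a linear combination of the columns of A.
coeffs = np.linalg.solve(A, v)                   # -1, -2, 1 up to rounding

# The same coefficients must reproduce image_v from the images.
assert np.allclose(images_A @ coeffs, image_v)
```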
The check succeeded for every vector of C, so any linear transformation that maps the vectors of A to their corresponding images will also map the vectors of C to their corresponding images.
The set of vectors A is a basis of the input space. Let's call it B, and restrict U to the images of its vectors.

U = {(2 -5 9 -6); (-2 -2 9 1); (-1 -5 0 -6)}
Calculate the determinant.
│ 4  1  5 │
│ 1  0  1 │ = 4•0•(-2) + 1•1•2 + 1•(-3)•5 - 5•0•2 - 1•1•(-2) - 1•(-3)•4 = 1
│ 2 -3 -2 │
Calculate the matrix of cofactors.
         ┌              ┐
         │   3   4  -3  │
Cof(B) = │ -13 -18  14  │
         │   1   1  -1  │
         └              ┘
Transpose the matrix of cofactors to obtain the adjugate matrix.
         ┌              ┐
         │   3 -13   1  │
Adj(B) = │   4 -18   1  │
         │  -3  14  -1  │
         └              ┘
Divide each entry of the adjugate matrix by the determinant to obtain the inverse. Since det(B) = 1, the inverse equals the adjugate.
         ┌              ┐
         │   3 -13   1  │
Inv(B) = │   4 -18   1  │
         │  -3  14  -1  │
         └              ┘
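The cofactor/adjugate construction above can be sketched in NumPy and compared against the library inverse (a cross-check, not part of the original solution):

```python
import numpy as np

B = np.array([[ 4,  1,  5],
              [ 1,  0,  1],
              [ 2, -3, -2]], dtype=float)

det = np.linalg.det(B)  # 1, up to floating-point rounding

# Cofactor matrix: entry (i, j) is (-1)^(i+j) times the minor
# obtained by deleting row i and column j.
cof = np.zeros_like(B)
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(B, i, axis=0), j, axis=1)
        cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

adj = cof.T       # adjugate = transpose of the cofactor matrix
inv = adj / det   # equals adj here, since det(B) = 1

assert np.allclose(inv, np.linalg.inv(B))
```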
Multiply the matrix U, whose columns are the images of the basis vectors, by Inv(B) to obtain the standard matrix of the transformation.

             ┌               ┐
             │   1   -4   1  │
U•Inv(B) =   │  -8   31  -2  │
             │  63 -279  18  │
             │   4  -24   1  │
             └               ┘
f(x, y, z) = (x-4y+z; -8x+31y-2z; 63x-279y+18z; 4x-24y+z)
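Finally, the whole construction can be checked end to end: build the standard matrix as U•Inv(B) and confirm it maps every vector of the original V to its image (a sketch, assuming NumPy; names are illustrative):

```python
import numpy as np

B = np.array([[ 4,  1,  5],
              [ 1,  0,  1],
              [ 2, -3, -2]], dtype=float)
U = np.array([[ 2, -2, -1],
              [-5, -2, -5],
              [ 9,  9,  0],
              [-6,  1, -6]], dtype=float)  # images of the basis as columns

M = U @ np.linalg.inv(B)  # standard matrix of f

def f(x, y, z):
    return M @ np.array([x, y, z], dtype=float)

# f must map every vector of the original V to its image in the original U.
assert np.allclose(f( 4, 1,  2), [ 2, -5,   9, -6])
assert np.allclose(f( 1, 0, -3), [-2, -2,   9,  1])
assert np.allclose(f( 5, 1, -2), [-1, -5,   0, -6])
assert np.allclose(f(-1, 0,  2), [ 1,  4, -27, -2])
print(np.round(M).astype(int))  # matches the entries of U•Inv(B) above
```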