Let \(A\) be an \(n\times k\) matrix with \(r\) pivots (to anticipate a future topic, we will call the number of pivots in \(A\) its rank). Let \(P\) be the \(n\times r\) matrix whose columns are the pivot columns of \(A\). Let \(R\) be the reduced row echelon form of \(A\), and let \({\tilde R}\) be the \(r\times k\) matrix obtained from \(R\) by dropping its all-zero rows.
The observation that \(A = P{\tilde R}\) captures the information the reduced row echelon form provides: each column of \(A\) is a combination of the pivot columns, with the coefficients read off from the corresponding column of \({\tilde R}\).
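Here is a minimal sketch verifying the identity numerically, assuming SymPy is available; the matrix is an arbitrary rank-2 example chosen for illustration.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 7],
            [3, 6, 10]])          # an example 3x3 matrix with rank 2

R, pivot_cols = A.rref()          # reduced row echelon form and pivot column indices
r = len(pivot_cols)               # number of pivots, i.e. the rank

P = A[:, list(pivot_cols)]        # n x r: the pivot columns of A
R_tilde = R[:r, :]                # r x k: R with its all-zero rows dropped

assert P * R_tilde == A           # A = P * R_tilde holds exactly
```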
Writing \(A = P{\tilde R}\) is an example of matrix factorization. Suppose \(A\) is a matrix with only non-negative entries. Many applications, in astronomy, computer vision, bioinformatics, recommender systems, signal processing, and natural language processing, require factors of \(A\) whose entries are also all non-negative and whose rank is as small as possible. The above factorization into rank \(r\) matrices will not cut it, since the entries of \(\tilde R\) can be negative even if all the entries of \(A\) are non-negative. But \(r\) remains a lower bound on the rank of the factors in any exact non-negative factorization.
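As a sketch of what non-negative factorization looks like in practice, assuming scikit-learn is available: the snippet below computes an approximate non-negative factorization of the same illustrative matrix, which happens to admit an exact non-negative rank-2 factorization, so the residual should be small. The parameters shown are illustrative choices, not the only ones.

```python
import numpy as np
from sklearn.decomposition import NMF

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 7.0],
              [3.0, 6.0, 10.0]])  # non-negative entries, rank 2

model = NMF(n_components=2, init='random', random_state=0, max_iter=1000)
W = model.fit_transform(A)        # n x 2 factor, all entries >= 0
H = model.components_             # 2 x k factor, all entries >= 0

print(np.abs(A - W @ H).max())    # small residual: A is approximately W H
```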
Pick \(\bf v\) to be any vector with 3 coordinates. For your vector, perform elimination on \(I_3 + {\bf v}{\bf v}^T\); you will find three pivots. Verify that the product of the pivots equals \(1 + ||{\bf v}||^2\). The equality holds no matter what \(\bf v\) you take, and in fact it generalizes to vectors with any number of coordinates: the product of the pivots of \(I_n + {\bf v}{\bf v}^T\) is \(1 + ||{\bf v}||^2\).
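A short sketch of this check, with an assumed example vector \({\bf v} = (1, 2, 3)\); since \(I_3 + {\bf v}{\bf v}^T\) is positive definite, elimination succeeds without row exchanges.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])     # any 3-vector works; this one is illustrative
A = np.eye(3) + np.outer(v, v)    # I_3 + v v^T

U = A.copy()
n = len(v)
for j in range(n):                # forward elimination, no row swaps needed
    for i in range(j + 1, n):
        U[i] -= (U[i, j] / U[j, j]) * U[j]

pivots = np.diag(U)
print(pivots.prod(), 1 + v @ v)   # both print 15.0, since ||v||^2 = 14
```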
As you can see from the problems above, there is far more depth to the Gaussian elimination algorithm than first appears.