I've been going through the MIT OpenCourseWare Linear Algebra course, and I was following it well until early in the Singular Value Decomposition video.
Starting at the 3:12 mark of the video, the professor says (and I'm paraphrasing): "You remember the picture of a linear transformation. A typical vector $v_1$ in the rowspace gets mapped to some vector in the column space, say $u_1 = Av_1$. What I'm looking for in an SVD is an orthogonal basis in the row space that gets mapped to an orthogonal basis in the column space."
I don't understand the "$v_1$ is a vector in the rowspace" part. For example, if $A$ is the $3 \times 3$ matrix of all 1's, then the rowspace is just the line through $\langle 1, 1, 1 \rangle$, while $v_1$ can be any vector. In that case $v_1$ is certainly not in the rowspace.
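To make my example concrete, here's a quick numerical check with NumPy (the code is just my own sanity check, not from the lecture), computing the SVD of the all-ones matrix:

```python
import numpy as np

# The example from my question: the 3x3 matrix of all ones.
A = np.ones((3, 3))

# Full SVD: A = U @ diag(s) @ Vt, with the rows of Vt being
# the right singular vectors v_1, v_2, v_3.
U, s, Vt = np.linalg.svd(A)

print(np.round(s, 6))      # singular values: only the first is nonzero
print(np.round(Vt[0], 6))  # v_1, a unit multiple of (1, 1, 1) (up to sign)
```

So NumPy's $v_1$ (the right singular vector for the one nonzero singular value) is a multiple of $(1,1,1)$, which does lie on that line, and the remaining singular values are zero. That makes me suspect the professor means something more specific by "a typical vector $v_1$ in the rowspace" than I'm taking from it.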
Can someone provide better intuition for what SVD is doing geometrically?