Consider a matrix $A\in\mathbb{R}^{2\times3}$. If I perform its singular value decomposition, I get the following factorization:
$$A=U\Lambda V^T:U\in\mathbb{R}^{2\times2},~\Lambda\in\mathbb{R}^{2\times3},~V\in\mathbb{R}^{3\times3}$$
The non-zero entries of the matrix $\Lambda$ are its $(1,1)$ and $(2,2)$ elements (assuming $A$ has full rank). Since the 3rd column of $\Lambda$ is entirely $0$, the 3rd row of $V^T$ (i.e. the 3rd column of $V$, which is also the 3rd eigenvector of $A^TA$) contributes nothing to the product $U\Lambda V^T$. I could in theory replace that vector with anything, and the product $U\Lambda V^T=A$ would remain unchanged.
My question is: how is the 3rd eigenvector uniquely determined in this case, if it contributes nothing to the product and changing it doesn't change the product (I checked numerically; it doesn't)? Extending this argument, how are the eigenvectors corresponding to zero eigenvalues/singular values uniquely pinned down? Where does that degree of freedom go?
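The claim in the question can be verified numerically. Below is a small NumPy sketch (the matrix entries are arbitrary example values, not from the question): it builds the SVD of a full-rank $2\times3$ matrix, then overwrites the 3rd row of $V^T$ with an arbitrary vector and checks that $U\Lambda V^T$ is unchanged.

```python
import numpy as np

# A full-rank 2x3 matrix (hypothetical example values)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

U, s, Vt = np.linalg.svd(A)   # Vt is V^T, shape (3, 3)
Lam = np.zeros((2, 3))
Lam[:2, :2] = np.diag(s)      # only the (1,1) and (2,2) entries are non-zero

# The factorization reconstructs A
assert np.allclose(U @ Lam @ Vt, A)

# Replace the 3rd row of V^T (3rd column of V) with an arbitrary vector
Vt_mod = Vt.copy()
Vt_mod[2] = np.array([7.0, -1.0, 0.5])

# The product is unchanged: that row is only ever multiplied
# by the all-zero 3rd column of Lam
assert np.allclose(U @ Lam @ Vt_mod, A)
```

Note that $U\,\Lambda\,V_{\text{mod}}^T$ is no longer a valid SVD, since the modified $V$ is not orthogonal; the product just happens to coincide with $A$.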

In the singular value decomposition, the matrices $U$ and $V$ are orthogonal. That means the columns of $V$ and $U$ are orthonormal bases of the source space and the target space of the map $A$, respectively.
The columns of $V$ that correspond to singular value $0$ (equivalently, eigenvectors of $A^TA$ with eigenvalue $0$) are an orthonormal basis of $\ker(A)$. Those are the directions $x$ that contribute nothing to $Ax$. So these columns are not pinned down individually: any orthonormal basis of $\ker(A)$ will do. In your example the kernel is one-dimensional, so the 3rd column of $V$ is determined only up to sign — and that is exactly where the degree of freedom goes. It cannot be replaced by an *arbitrary* vector, though, because $V$ must stay orthogonal.
The columns of $U$ that correspond to singular value $0$ are an orthonormal basis of $\operatorname{image}(A)^\perp$. They correspond to the directions that $Ax$ can never reach.
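Both statements can be checked numerically. A minimal sketch, assuming a rank-deficient $3\times3$ example matrix (third column equal to the sum of the first two, so one singular value is zero): the last column of $V$ spans $\ker(A)$, and the last column of $U$ is orthogonal to $\operatorname{image}(A)$.

```python
import numpy as np

# Rank-2 example: third column = first column + second column
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

U, s, Vt = np.linalg.svd(A)   # s is sorted descending; s[-1] is ~0 here

# The column of V for the zero singular value spans ker(A):  A v = 0
v_kernel = Vt[-1]             # last row of V^T = last column of V
assert np.allclose(A @ v_kernel, 0)

# The column of U for the zero singular value lies in image(A)^perp:
# u^T A = 0 means u is orthogonal to every column of A, hence to image(A)
u_perp = U[:, -1]
assert np.allclose(u_perp @ A, 0)
```

As with the kernel basis in $V$, these columns of $U$ are determined only up to an orthonormal change of basis of $\operatorname{image}(A)^\perp$ (here, up to sign).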