Suppose I want to solve a simple linear inverse problem $\mathbf{y} = \mathbf{A} \cdot \mathbf{c}$, where $\mathbf{A}$ is an $M \times K$ matrix and I want to solve for $\mathbf{c}$ ($M$ = measurements, $K$ = unknowns). Generally, $\mathbf{y}$ is contaminated by noise, and with $M > K$ we use least squares (the pseudoinverse).
Common wisdom says that the condition number describes the fidelity of the reconstruction, i.e., the higher the condition number, the better the reconstruction. So I drew random matrices and plotted the condition number against the achieved NMSE, expecting a rough relationship between the two. But what I see is anything but correlated:

I hope this scatter plot clearly shows that a low condition number does not imply a low reconstruction error (and vice versa). I varied the parameters but saw no significant changes.
This seems counter-intuitive and must be wrong somewhere. What am I doing wrong? Which of my assumptions are mistaken, if any?
PS: The (short&simple) MATLAB code can be found here if it helps: http://pastebin.com/dhQ5PaiS
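For reference, the experiment can be sketched in Python/NumPy roughly as follows (dimensions, noise level, and trial count are illustrative guesses, not the exact values from the pastebin script):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 50, 20            # measurements, unknowns (illustrative)
n_trials = 200
noise_std = 1e-2

conds, nmses = [], []
for _ in range(n_trials):
    A = rng.standard_normal((M, K))
    c = rng.standard_normal(K)
    y = A @ c + noise_std * rng.standard_normal(M)
    c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares solution
    conds.append(np.linalg.cond(A))
    nmses.append(np.sum((c_hat - c) ** 2) / np.sum(c ** 2))
# scatter-plotting conds against nmses reproduces the figure above
```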
I think you have misunderstood something. The condition number is a proportionality factor in the error: the higher the condition number, i.e., the closer to singular or rank-deficient the system matrix $A$ is, the more the noise in $y$ gets amplified in the solution.
Thinking in terms of the SVD: when solving the system, or equivalently constructing the pseudoinverse, you multiply by the inverses of the singular values. The presence of relatively small singular values means that the corresponding singular vectors receive rather large factors in the pseudoinverse.
That is, if $A=USV^T$ and $y=Ax+\varepsilon$, then $$ A^+y=x+\sum_{k=1}^m \sigma_k^{-1}\,(u_k^T\varepsilon)\, v_k =x+\sigma_1^{-1}\sum_{k=1}^m (\sigma_1\sigma_k^{-1})\,(u_k^T\varepsilon)\, v_k, $$ and if $\varepsilon$ is i.i.d. normal, then the $(u_k^T\varepsilon)$ are also i.i.d. normal, and the largest error component is the last one, with the condition number $\sigma_1/\sigma_m$ of $A$ as its relative size.
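This decomposition of the error is easy to verify numerically. Here is a small Python/NumPy check (with made-up dimensions) that the pseudoinverse solution equals $x$ plus the sum above:

```python
import numpy as np

rng = np.random.default_rng(1)
m, N = 6, 10                       # unknowns, measurements (illustrative)
A = rng.standard_normal((N, m))
x = rng.standard_normal(m)
eps = 1e-3 * rng.standard_normal(N)
y = A @ x + eps

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U S V^T
x_pinv = np.linalg.pinv(A) @ y
# x + sum_k sigma_k^{-1} (u_k^T eps) v_k -- the formula above
x_svd = x + sum((U[:, k] @ eps) / s[k] * Vt[k] for k in range(m))
print(np.allclose(x_pinv, x_svd))   # True
```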
**Update:** If you construct the matrix $A$ by filling it with random numbers that are independent, identically distributed, and centered at $0$, then with high probability $A$ will end up having columns of almost equal magnitude that are close to orthogonal; this is just the law of large numbers at work. Such a matrix automatically has a rather small condition number.
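A quick sanity check of this claim (a Python/NumPy sketch with arbitrarily chosen sizes): even the worst of many i.i.d. Gaussian matrices stays well-conditioned.

```python
import numpy as np

rng = np.random.default_rng(2)
# condition numbers of 100 random 50x20 Gaussian matrices
conds = [np.linalg.cond(rng.standard_normal((50, 20))) for _ in range(100)]
print(min(conds), max(conds))   # typically all in the single digits
```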
To get "bad" matrices, construct them via the SVD. Fill $U\in\Bbb R^{N\times m}$ and $V\in\Bbb R^{m\times m}$ randomly, as before, and set $S=diag(1,q,q^2,...,q^{m-1})$ for some $q<1$. $q^{1-m}$ will then be the magnitude of the condition number of $A=USV^T$.
If desired, one can extract the "truly" orthogonal content of the randomly generated $U$ and $V$ by performing QR decompositions and replacing them with the Q factor.
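Putting the construction together (a Python/NumPy sketch; $N$, $m$, and $q$ are arbitrary choices): with QR-orthogonalized factors, $S$ holds the exact singular values, so the condition number comes out as $q^{1-m}$.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, q = 50, 10, 0.5
# orthonormalize the random factors so S contains the exact singular values
U, _ = np.linalg.qr(rng.standard_normal((N, m)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
S = np.diag(q ** np.arange(m))     # singular values 1, q, ..., q^(m-1)
A = U @ S @ V.T
print(np.linalg.cond(A), q ** (1 - m))   # both ~ 512
```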