Suppose I have a control system described by:
$$ \left\{ \begin{array}{c} \dot{x}(t)=Ax(t)+Bu(t)\\ y(t)=Cx(t)+Du(t) \end{array} \right. $$
and I know it is controllable. I apply the state feedback $u(t) = -Kx(t)+r(t)$, which gives:
$$\dot{x}(t)=(A-BK)x(t)+Br(t)$$
From what I understand, even if $A$ is not stable (meaning not all of its eigenvalues have negative real parts), state feedback can always make the system stable because it is controllable. Why is this true, and how can I see it for $x\in\mathbb{R}^n$? (No full proof is needed; a general direction is good enough.)
I understand that the requirement for stability is that $\hat{A}=A-BK$ have only eigenvalues with negative real parts, but why is this always achievable?
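As a concrete sanity check of what is being asked (a toy example of my own, with a hand-picked gain), here is an unstable but controllable system where a suitable $K$ moves every eigenvalue of $A-BK$ into the left half-plane:

```python
import numpy as np

# Toy unstable system: the eigenvalues of A are 1 and 2 (right half-plane)
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
B = np.array([[0.0],
              [1.0]])

# The controllability matrix [B, AB] has full rank, so the system is controllable
ctrb = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(ctrb) == 2

# Gain found by matching the characteristic polynomial of A - BK
# to (s+1)(s+2) = s^2 + 3s + 2
K = np.array([[6.0, 6.0]])

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(np.sort(closed_loop_eigs.real))  # both closed-loop eigenvalues are now negative
```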

For single-input systems, if the system is controllable, you can directly calculate the vector $k^T \in \mathbb{R}^n$ such that $A-bk^T$ has a desired characteristic polynomial, using Ackermann's formula. To see why it works, there is a nice derivation here: http://www.cambridge.org/us/features/chau/webnotes/chap9acker.pdf
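A minimal NumPy sketch of Ackermann's formula (my own illustration; `ackermann` is a hypothetical helper, not a library function): the gain is the last row of the inverse controllability matrix times the desired characteristic polynomial evaluated at $A$.

```python
import numpy as np

def ackermann(A, b, poles):
    """Single-input pole placement via Ackermann's formula:
    k^T = [0 ... 0 1] C^{-1} p(A), where C = [b, Ab, ..., A^{n-1}b]
    is the controllability matrix and p is the desired characteristic
    polynomial. Requires (A, b) controllable, i.e. C invertible."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    coeffs = np.poly(poles)        # coefficients of p(s), highest power first
    pA = np.zeros_like(A)          # evaluate p(A) by Horner's method
    for c in coeffs:
        pA = pA @ A + c * np.eye(n)
    return np.linalg.inv(C)[-1, :] @ pA   # last row of C^{-1}, times p(A)

# Example: A is unstable (one eigenvalue in the right half-plane)
A = np.array([[0.0, 1.0],
              [2.0, 3.0]])
b = np.array([[0.0],
              [1.0]])
kT = ackermann(A, b, [-1.0, -2.0])
print(kT)                                                    # -> [4. 6.]
print(np.sort(np.linalg.eigvals(A - b @ kT[None, :]).real))  # ≈ [-2. -1.]
```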
For multiple-input systems you can select a fan-out vector $f \in \mathbb{R}^m$ and calculate $k^T$ for the system $(A, Bf)$, which is now a single-input system (assuming $(A, Bf)$ is itself controllable for the chosen $f$). This is called the dyadic approach, and $f$ can be selected to meet additional design criteria, such as robustness. The state feedback gain is then $K = fk^T$.
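A sketch of the dyadic approach under those assumptions (my own illustration; the fan-out vector `f` below is hand-picked so that $(A, Bf)$ stays controllable, and Ackermann's formula is inlined as a helper):

```python
import numpy as np

def ackermann(A, b, poles):
    # Ackermann's formula for the single-input system (A, b):
    # k^T = [0 ... 0 1] C^{-1} p(A), with C the controllability matrix
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    pA = np.zeros_like(A)
    for c in np.poly(poles):           # Horner evaluation of p(A)
        pA = pA @ A + c * np.eye(n)
    return np.linalg.inv(C)[-1, :] @ pA

# Two-input example: a double integrator with B = I
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.eye(2)

f = np.array([[0.0],
              [1.0]])        # fan-out vector, chosen so (A, Bf) is controllable
kT = ackermann(A, B @ f, [-1.0, -2.0])
K = f @ kT[None, :]          # dyadic gain K = f k^T (rank one)
print(np.sort(np.linalg.eigvals(A - B @ K).real))   # ≈ [-2. -1.]
```

Note that $K$ has rank one, so this only exercises a single "direction" of the input space; other choices of $f$ trade this off against robustness or effort criteria.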
There are other approaches for selecting $K$ in multi-input systems as well.