Suppose I have the discrete-time linear system $x_{k+1}=A\,x_k+B\,u_k$ and the cost to be minimized
\begin{equation} J(x_0) = x_N^\top\,Q_N\,x_N + \sum_{k=0}^{N-1} (x_k^\top\,Q\,x_k + u_k^\top\,R\,u_k) \end{equation}
where $Q,Q_N \geq 0, R>0$. When I apply the controller $u_k=K_k\,x_k$, the Riccati equation becomes
\begin{equation} (A+B\,K_k)^\top P_{k+1}(A+B\,K_k)-P_{k}+ K_k^\top\,R\,K_k+Q = 0 \end{equation}
How can it be proved that $K_{k}$ is unique?
I know that $P_{k}$ is uniquely determined by dynamic programming, running the backward iterations starting from $P_N=Q_N$, and thus $K_k$ should also be unique and optimal.
Is there a way to relate uniqueness for these two equivalent problem formulations?
The function $J_k=x_k^\top P_k\,x_k$ can be seen as the cost to go from $k$ to $N$. Using only the last terms of your cost function yields a cost from $N-1$ to $N$ of
\begin{align} J_{N-1} &= x_N^\top\,Q_N\,x_N + x_{N-1}^\top Q\,x_{N-1} + x_{N-1}^\top K_{N-1}^\top R\,K_{N-1}\,x_{N-1}, \\ &= x_{N-1}^\top (A+B\,K_{N-1})^\top P_N\,(A+B\,K_{N-1})\,x_{N-1} + x_{N-1}^\top Q\,x_{N-1} + x_{N-1}^\top K_{N-1}^\top R\,K_{N-1}\,x_{N-1}, \tag{1} \\ &= x_{N-1}^\top \left((A+B\,K_{N-1})^\top P_N\,(A+B\,K_{N-1}) + Q + K_{N-1}^\top R\,K_{N-1}\right)x_{N-1}. \end{align}
This is equivalent to using $J_{N-1}=x_{N-1}^\top P_{N-1}\,x_{N-1}$ with
$$ P_{N-1} = (A+B\,K_{N-1})^\top P_N\,(A+B\,K_{N-1}) + Q + K_{N-1}^\top R\,K_{N-1}. \tag{2} $$
We now know the cost to go from $N-1$ to $N$; extending this one step further is equivalent to adding the cost at $N-2$ to it. This yields an equation similar to the one obtained when calculating $J_{N-1}$, but now using $P_{N-1}$ instead of $Q_N=P_N$, which gives
$$ P_{N-2} = (A+B\,K_{N-2})^\top P_{N-1}\,(A+B\,K_{N-2}) + Q + K_{N-2}^\top R\,K_{N-2}. \tag{3} $$
This can be repeated for all $k$ using the generalized equation
$$ P_k = (A+B\,K_k)^\top P_{k+1}\,(A+B\,K_k) + Q + K_k^\top R\,K_k, \tag{4} $$
but this is just your Riccati difference equation. So the backward iterations indeed yield the same Riccati difference equation, given a known control policy $K_k$.
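As a quick numerical sanity check (a hypothetical 2-state example; the particular $A$, $B$, $K$ below are arbitrary choices, not from the question), iterating $(4)$ backward and simulating the closed loop forward give the same cost, for any fixed gain, optimal or not:

```python
import numpy as np

# Hypothetical 2-state example: the backward recursion (4) with a fixed
# gain K should reproduce the closed-loop cost computed by simulation.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); QN = np.eye(2); R = np.array([[1.0]])
K = np.array([[-1.0, -2.0]])   # an arbitrary (not necessarily optimal) gain
N = 20

# Backward iterations of (4), starting from P_N = Q_N
P = QN
for _ in range(N):
    Acl = A + B @ K
    P = Acl.T @ P @ Acl + Q + K.T @ R @ K

# Forward simulation of x_{k+1} = (A + B K) x_k, accumulating the cost J(x_0)
x0 = np.array([[1.0], [-0.5]])
x, cost = x0, 0.0
for _ in range(N):
    u = K @ x
    cost += float(x.T @ Q @ x + u.T @ R @ u)
    x = (A + B @ K) @ x
cost += float(x.T @ QN @ x)

print(cost, float(x0.T @ P @ x0))   # the two agree
```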
The optimal $K_k$ can be found by minimizing $J_k$. From dynamic programming it follows that the optimal control policy at the last time step remains optimal when combined with the time step before it. This can then be used to find the optimal control policy at the second-to-last time step, and the argument can be repeated until you arrive at $k=0$. So minimizing $J_0$ is equivalent to minimizing $P_k$ at every time step $k$, in the positive semidefinite sense, i.e.\ minimizing $x^\top P_k\,x$ for every $x$. By completing the square it can be shown that $(4)$ is equivalent to
$$ P_k = (K^* - K_k)^\top M (K^* - K_k) + S, \tag{5} $$
with
\begin{align} M &= B^\top P_{k+1}\,B + R, \\ K^* &= -M^{-1} B^\top P_{k+1}\,A, \\ S &= Q + A^\top P_{k+1}\,A - {K^*}^\top M\,K^*. \end{align}
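To verify this equivalence, expand the square in $(5)$ and use $M\,K^* = -B^\top P_{k+1}\,A$ together with the definition of $S$:
\begin{align}
(K^* - K_k)^\top M\,(K^* - K_k) + S
&= K_k^\top M\,K_k - K_k^\top M\,K^* - {K^*}^\top M\,K_k + {K^*}^\top M\,K^* + S \\
&= K_k^\top (B^\top P_{k+1}\,B + R)\,K_k + K_k^\top B^\top P_{k+1}\,A + A^\top P_{k+1}\,B\,K_k + Q + A^\top P_{k+1}\,A \\
&= (A+B\,K_k)^\top P_{k+1}\,(A+B\,K_k) + Q + K_k^\top R\,K_k,
\end{align}
which is exactly the right-hand side of $(4)$.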
The matrix $P_k$ is always minimized by choosing $K_k = K^*$: since $M$ is positive definite, the term $(K^* - K_k)^\top M\,(K^* - K_k)$ is positive semidefinite and vanishes if and only if $K_k = K^*$, so any deviation from $K^*$ can only increase $P_k$. Therefore $K_k = K^*$ is the unique optimal control policy.
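This can also be checked numerically (again a hypothetical example; $A$, $B$, $P_{k+1}$ and the perturbation are arbitrary choices): the difference $P_k(K^* + \Delta K) - P_k(K^*)$ equals $\Delta K^\top M\,\Delta K$, which is positive semidefinite and nonzero for any $\Delta K \neq 0$.

```python
import numpy as np

# Hypothetical example: any deviation from K* increases P_k in the
# positive semidefinite sense, as equation (5) predicts.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]])
P_next = np.eye(2)                       # some P_{k+1} >= 0

M = B.T @ P_next @ B + R                 # positive definite since R > 0
K_star = -np.linalg.solve(M, B.T @ P_next @ A)

def P_of(K):
    """Right-hand side of the Riccati difference equation (4)."""
    Acl = A + B @ K
    return Acl.T @ P_next @ Acl + Q + K.T @ R @ K

dK = rng.standard_normal(K_star.shape)   # arbitrary nonzero deviation
diff = P_of(K_star + dK) - P_of(K_star)

# diff should equal dK^T M dK: PSD, and nonzero since dK != 0
print(np.linalg.eigvalsh(diff))
```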