I want to prove that:
If the density matrix $$\gamma(t):=\sum_{j=1}^N|u_j(t)\rangle\langle u_j(t)|=\sum_{j=1}^N\langle u_j(t),\cdot\rangle u_j(t)$$ solves $$i\partial_t\gamma(t)=[-\Delta+w*\rho,\gamma],$$ then the $u_1,...,u_N$ solve the system $$i\partial _tu_j=(-\Delta+w*\rho)u_j,\qquad j=1,...,N,$$ where $\rho(t,x)=\sum_{j=1}^N|u_j(t,x)|^2$.
My attempt at proof:
Suppose the density matrix $\gamma(t):=\sum_{j=1}^N|u_j(t)\rangle\langle u_j(t)|=\sum_{j=1}^N\langle u_j(t),\cdot\rangle u_j(t)$ solves $$i\partial_t\gamma(t)=[-\Delta+w*\rho,\gamma].$$ Then, denoting $H=-\Delta+w*\rho$ and assuming the $u_j(t)$ are orthonormal,
\begin{align*} (i\partial_t\gamma)u_i=i\sum_{j=1}^N\big(\langle{\partial_tu_j(t),u_i(t)}\rangle u_j(t)+\langle{u_j(t),u_i(t)}\rangle\,\partial_tu_j(t)\big)&=i\sum_{j=1}^N\langle{\partial_tu_j(t),u_i(t)}\rangle u_j(t)+i\partial_tu_i(t)\\ &=-i\sum_{j=1}^N \langle{u_j(t),\partial_tu_i(t)}\rangle u_j(t)+i\partial_tu_i(t)\\ &=-\sum_{j=1}^N \langle {u_j(t),i\partial_tu_i(t)}\rangle u_j(t)+i\partial_tu_i(t)\\ &=-\gamma(i\partial_tu_i)+i\partial_t(\gamma u_i)\\ &=[{i\partial_t,\gamma}]u_i, \end{align*} where we used orthonormality, $\langle u_j(t),u_i(t)\rangle=\delta_{ij}$ (so in particular $\gamma u_i=u_i$), together with its time derivative $\langle\partial_tu_j,u_i\rangle+\langle u_j,\partial_tu_i\rangle=0$.
So because $i\partial_t\gamma=[{H,\gamma}]$, we deduce \begin{align*} 0=[{i\partial_t-H,\gamma}]u_i=(i\partial_t-H)u_i-\gamma\big((i\partial_t-H)u_i\big). \end{align*} But this is not enough: it only shows that $(i\partial_t-H)u_i$ lies in the range of $\gamma$, i.e. in $\operatorname{span}\{u_1(t),\ldots,u_N(t)\}$, not that it vanishes...
Thanks so much in advance!
Let me slightly reformulate and extend your question: if $\gamma(t)$ is the solution to a (time-dependent) Liouville-von Neumann equation $$ i\dot\gamma(t)=[H(t),\gamma(t)]\tag{1} $$ and if this solution diagonalizes as $\gamma(t)=\sum_j|u_j(t)\rangle\langle u_j(t)|$ with $u_j(t)$ pairwise orthogonal, then you're asking whether $$ i\dot u_j(t)=H(t)u_j(t)\tag{2} $$ holds for all $j$. In other words, your question in its current form asks: does every such diagonalization of a solution of (1) satisfy (2)?
The answer to this question is no (even for pure states); roughly speaking, the problem is that you have to be careful when diagonalizing your state $\gamma(t)$. For a simple example consider $H$ to be the Pauli matrix $\sigma_z$ and $$ \gamma(t)=\frac12\begin{pmatrix}1&e^{-2it}\\e^{2it}&1\end{pmatrix}=|u(t)\rangle\langle u(t)|\qquad\text{ for }u(t):=\frac1{\sqrt2}\begin{pmatrix} e^{-2it}\\1\end{pmatrix}\,.\tag{3*} $$ One readily verifies that $\gamma(t)$ satisfies (1) when choosing $H(t)\equiv H$. However, $$ i\dot u(t)=\sqrt2\begin{pmatrix}e^{-2it}\\0\end{pmatrix}\neq \frac1{\sqrt2}\begin{pmatrix} e^{-2it}\\-1\end{pmatrix} =Hu(t)\,. $$

The problem here is that each eigenstate $u_j$ can be replaced by $e^{it}u_j$ without changing $\gamma$: note that $|e^{it}u_j\rangle\langle e^{it}u_j|=e^{it}e^{-it}|u_j\rangle\langle u_j|=|u_j\rangle\langle u_j|$. But -- while the projector $|u_j\rangle\langle u_j|$ is unaffected by such a time-dependent phase -- the derivative of $e^{it}u_j$ may look very different from $\dot u_j$. Indeed, decomposing $$ \gamma(t)=\frac12\begin{pmatrix}1&e^{-2it}\\e^{2it}&1\end{pmatrix}=|v(t)\rangle\langle v(t)|\qquad\text{ where now }v(t):=\frac1{\sqrt2}\begin{pmatrix} e^{-it}\\e^{it}\end{pmatrix} $$ (i.e. $v(t)=e^{it}u(t)$) we find $$ i\dot v(t)=\frac1{\sqrt2}\begin{pmatrix} e^{-it}\\-e^{it}\end{pmatrix}=Hv(t)\tag{3} $$ as desired.

[NB: This is also the reason why your proof attempt did not lead anywhere: you assumed the $u_j(t)$ to be an arbitrary diagonalization of $\gamma(t)$, which, as we just saw, does not necessarily reduce to the Schrödinger equation.]
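As a quick numerical sanity check (my own sketch, not part of the original argument; the sample time $t=0.3$ and the tolerances are arbitrary choices), one can verify in plain Python that the eigenvector $u(t)$ from (3*) fails the Schrödinger equation while its rephasing $v(t)=e^{it}u(t)$ satisfies it:

```python
# Sanity check of the counterexample: with H = sigma_z, u(t) diagonalizes
# gamma(t) but violates i u' = H u, while v(t) = e^{it} u(t) satisfies it.
import cmath

def mat_vec(A, x):  # 2x2 matrix times 2-vector
    return [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]

H = [[1, 0], [0, -1]]  # Pauli matrix sigma_z

t = 0.3                    # arbitrary sample time
s = 1 / cmath.sqrt(2)
u  = [s * cmath.exp(-2j*t), s]                      # eigenvector from (3*)
du = [s * (-2j) * cmath.exp(-2j*t), 0]              # analytic du/dt
v  = [s * cmath.exp(-1j*t), s * cmath.exp(1j*t)]    # rephased eigenvector
dv = [s * (-1j) * cmath.exp(-1j*t), s * 1j * cmath.exp(1j*t)]

lhs_u = [1j*c for c in du]; rhs_u = mat_vec(H, u)
lhs_v = [1j*c for c in dv]; rhs_v = mat_vec(H, v)

err_u = max(abs(a - b) for a, b in zip(lhs_u, rhs_u))
err_v = max(abs(a - b) for a, b in zip(lhs_v, rhs_v))
print(err_u > 1e-6)   # u violates i u' = H u
print(err_v < 1e-12)  # v satisfies it
```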
Either way, the statement we should aim for is the following: every solution $\gamma(t)$ of (1) admits some diagonalization $\gamma(t)=\sum_j|u_j(t)\rangle\langle u_j(t)|$ whose eigenvectors all satisfy (2).
This statement turns out to be correct, and the key to proving it is to consider the operator lift of (2): $$ i\dot U(t)=H(t)U(t)\tag{4} $$ Given some initial state $\gamma_0$ and any unitary diagonalization $\gamma_0=U_0D_0U_0^*$, with $U_0$ unitary and $D_0=\operatorname{diag}(r_1,\ldots,r_n)$ diagonal (with $r_j\geq0$, as $\gamma_0$ is positive semi-definite), this shift in perspective yields a solution to (1): if $U(t)$ solves (4) with initial condition $U(0)=U_0$, then $\gamma(t):=U(t)D_0U(t)^*$ solves (1) with $\gamma(0)=\gamma_0$.
While this is straightforward, let us quickly verify it anyway. The initial condition is met because $\gamma(0)=U(0)D_0U(0)^*=U_0D_0U_0^*=\gamma_0$. More importantly, the product rule yields \begin{align*} i\dot\gamma(t)&=i\dot U(t)D_0U(t)^*+iU(t)D_0\dot U(t)^*\\ &=i\dot U(t)D_0U(t)^*-U(t)D_0(i\dot U(t))^*\\ &\overset{(4)}= H(t)U(t)D_0U(t)^*-U(t)D_0(H(t)U(t))^*\\ &=H(t)U(t)D_0U(t)^*-U(t)D_0U(t)^*H(t)\\ &=[H(t),U(t)D_0U(t)^*]=[H(t),\gamma(t)], \end{align*} where the fourth step uses that $H(t)$ is self-adjoint. So, indeed, (1) holds. $\square$
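For concreteness, here is a small plain-Python sketch of this lemma (my addition; the choices $H=\sigma_z$, a Hadamard-type $U_0$, $D_0=\operatorname{diag}(1,0)$, the sample time and step size are all illustrative). Since $H$ is constant and diagonal, $U(t)=e^{-iHt}U_0$ solves (4) in closed form, and we can compare a finite-difference derivative of $\gamma(t)=U(t)D_0U(t)^*$ with the commutator $[H,\gamma(t)]$:

```python
# Numerical check of the lemma: gamma(t) = U(t) D_0 U(t)^* solves
# i d/dt gamma = [H, gamma] when i U' = H U, U(0) = U_0.
import cmath

def mm(A, B):  # 2x2 complex matrix product
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dag(A):    # conjugate transpose
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

s = 1 / cmath.sqrt(2)
H  = [[1, 0], [0, -1]]      # sigma_z (constant, so U(t) = e^{-iHt} U_0)
U0 = [[s, s], [s, -s]]      # unitary diagonalization of gamma_0
D0 = [[1, 0], [0, 0]]       # eigenvalues r_1 = 1, r_2 = 0

def U(t):  # solution of i U' = H U with U(0) = U0; H is diagonal
    E = [[cmath.exp(-1j*t), 0], [0, cmath.exp(1j*t)]]
    return mm(E, U0)

def gamma(t):
    return mm(mm(U(t), D0), dag(U(t)))

t, h = 0.7, 1e-6
G_plus, G_minus, G = gamma(t + h), gamma(t - h), gamma(t)
# i * (central finite difference of gamma) vs the commutator [H, gamma(t)]
lhs  = [[1j*(G_plus[i][j] - G_minus[i][j])/(2*h) for j in range(2)] for i in range(2)]
comm = [[mm(H, G)[i][j] - mm(G, H)[i][j] for j in range(2)] for i in range(2)]
err = max(abs(lhs[i][j] - comm[i][j]) for i in range(2) for j in range(2))
print(err < 1e-5)
```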
Now these propagators $U(t)$ "contain" the diagonalization you are looking for: setting $u_j(t):=\sqrt{r_j}\,U(t)e_j$, where $(e_j)_j$ is the standard basis, one has $\gamma(t)=\sum_j|u_j(t)\rangle\langle u_j(t)|$ and each $u_j$ satisfies (2).
Again, this is straightforward: on the one hand \begin{align*} \sum_j|u_j(t)\rangle\langle u_j(t)|&=\sum_j r_j|U(t)e_j\rangle\langle U(t)e_j|\\ &=U(t)\Big(\sum_j r_j|e_j\rangle\langle e_j|\Big)U(t)^*=U(t)D_0U(t)^*=\gamma(t) \end{align*} and on the other hand $$ i\dot u_j(t)=\sqrt{r_j}i\dot U(t)e_j\overset{(4)}= \sqrt{r_j}H(t)U(t)e_j=H(t)\big( \sqrt{r_j}U(t)e_j \big)=H(t)u_j(t)\,.\tag*{$\square$} $$ Finally, let us conduct a quick sanity check by verifying that this result reproduces what we found in the above example (3). There \begin{align*} \gamma_0&=\frac12\begin{pmatrix}1&1\\1&1\end{pmatrix}=\underbrace{\Big|\frac1{\sqrt2}\begin{pmatrix}1\\1\end{pmatrix} \Big\rangle}_{=:u_0}\Big\langle \frac1{\sqrt2}\begin{pmatrix}1\\1\end{pmatrix} \Big|\\ &=\underbrace{\frac1{\sqrt2}\begin{pmatrix}1&1\\1&-1\end{pmatrix}}_{=:U_0}\underbrace{\begin{pmatrix}1&0\\0&0\end{pmatrix}}_{=:D_0}\Big(\frac1{\sqrt2}\begin{pmatrix}1&1\\1&-1\end{pmatrix}\Big)^* \end{align*} so our result says that the (non-trivial) eigenvector of $\gamma(t)$ which satisfies the Schrödinger equation is given by \begin{align*} u_1(t)=\sqrt1 U(t)e_1=e^{-i\sigma_zt}U_0e_1=e^{-i\sigma_zt}u_0=\begin{pmatrix}e^{-it}&0\\0&e^{it}\end{pmatrix}\frac1{\sqrt2}\begin{pmatrix}1\\1\end{pmatrix}=\frac1{\sqrt2}\begin{pmatrix} e^{-it}\\e^{it}\end{pmatrix} \end{align*} which reproduces (3), as expected.
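This final sanity check can also be run numerically (my own sketch; the sample time $t=0.4$ and tolerances are arbitrary): $u_1(t)=e^{-i\sigma_z t}U_0e_1$ should coincide with the "good" eigenvector $v(t)$ from (3) and reproduce $\gamma(t)$ from (3*) as a rank-one projector:

```python
# Closing the loop on the example: u_1(t) = e^{-i sigma_z t} U_0 e_1 equals
# v(t) from (3) and satisfies |u_1><u_1| = gamma(t) from (3*).
import cmath

s = 1 / cmath.sqrt(2)
t = 0.4                    # arbitrary sample time

# u_1(t) = e^{-i sigma_z t} U_0 e_1, where U_0 e_1 = (1, 1)/sqrt(2)
u1 = [cmath.exp(-1j*t) * s, cmath.exp(1j*t) * s]
v  = [s * cmath.exp(-1j*t), s * cmath.exp(1j*t)]   # v(t) from (3)
dev_vec = max(abs(a - b) for a, b in zip(u1, v))

# |u_1><u_1| should equal gamma(t) = [[1, e^{-2it}], [e^{2it}, 1]] / 2
proj  = [[u1[i] * u1[j].conjugate() for j in range(2)] for i in range(2)]
gamma = [[0.5, 0.5*cmath.exp(-2j*t)], [0.5*cmath.exp(2j*t), 0.5]]
dev_mat = max(abs(proj[i][j] - gamma[i][j]) for i in range(2) for j in range(2))

print(dev_vec < 1e-12, dev_mat < 1e-12)
```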