The orthogonal Procrustes problem can be stated as finding the orthogonal matrix $\Omega$ that maps $A$ most closely to $B$
$$\arg\min_{\Omega}\|A\Omega - B\|_F \quad\mathrm{subject\ to}\quad \Omega^T \Omega=I$$
The solution is well known: compute $M = A^T B$ and its SVD $M = U \Sigma V^T$, from which $\Omega = UV^T$.
See: http://nemo.nic.uoregon.edu/wiki/images/0/07/Psychometrika_1966_Sch%C3%B6nemann_A_generalized_solution_of_the.pdf or https://en.wikipedia.org/wiki/Orthogonal_Procrustes_problem (for a slightly different variant)
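For reference, a minimal NumPy sketch of the real-case recipe above (the matrices, noise level and tolerance are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
# Build B as a known orthogonal transform of A, plus a little noise.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = A @ Q + 0.01 * rng.standard_normal((10, 3))

# Procrustes recipe: M = A^T B, M = U Sigma V^T, Omega = U V^T.
U, s, Vt = np.linalg.svd(A.T @ B)
Omega = U @ Vt

print(np.linalg.norm(A @ Omega - B, "fro"))  # small residual
print(np.allclose(Omega, Q, atol=0.05))      # roughly recovers the planted Q
```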
As far as I understand, this holds for real matrices. I cannot find much information about the same problem for complex matrices, so my question is whether the same solution is valid in the complex case.
In other words, for the problem
$$\arg\min_{\Omega}\|A\Omega - B\|_F \quad\mathrm{subject\ to}\quad \Omega^* \Omega=I$$
is $\Omega=UV^*$, obtained from $M=A^*B$ and the SVD $M=U \Sigma V^*$, a solution for complex matrices $A$ and $B$, with $^*$ denoting the conjugate transpose?
Update
Following the comments, I tried (numerically, on a small example) using the standard real representation of a complex number. I used just a complex vector $a \in \mathbb{C}^{n \times 1}$, which I transformed into a real-valued matrix $A \in \mathbb{R}^{2n \times 2}$. Using $a$ or $A$ gave exactly the same results. However, I'd still like to find a proof or a reference explicitly stating that complex matrices can be used.
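For concreteness, here is a minimal NumPy sketch of that experiment, assuming the standard block embedding $x+iy \mapsto \begin{pmatrix}x&-y\\y&x\end{pmatrix}$ for each entry (the data is random and only illustrative):

```python
import numpy as np

def embed(z):
    """Real 2x2 representation of a complex scalar x+iy -> [[x,-y],[y,x]]."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

rng = np.random.default_rng(0)
n = 5
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Complex 1x1 Procrustes: M = a* b, omega = U V* (here just the phase of a* b).
M = np.conj(a) @ b
omega = M / abs(M)

# Real 2n x 2 embedding: stack the 2x2 blocks of the entries.
A = np.vstack([embed(z) for z in a])
B = np.vstack([embed(z) for z in b])
U, s, Vt = np.linalg.svd(A.T @ B)
Omega = U @ Vt

# The 2x2 orthogonal solution is the rotation matrix representing omega.
print(np.allclose(Omega, embed(omega)))  # True (up to floating-point error)
```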
Update 2
I've borrowed the book Procrustes Problems by J. C. Gower and G. B. Dijksterhuis in which they write in chapter 14 (page 188):
Two-dimensional configurations may be represented in complex space ... Some find the complex representation 'elegant' but it has its counterpart in terms of $N \times 2$ and $2 \times 2$ matrices. Complex variable representations do not generalise to three or more dimensions ...
I take this to mean that complex vectors can be handled with the equations above, since they can be represented as $N \times 2$ matrices (should it not be $2N \times 2$?), but that complex matrices cannot.
Yes, and the proof is easy. Since $\|A\Omega-B\|_F^2$ equals a constant minus $2\Re\operatorname{tr}(B^\ast A\Omega)$ (the constant is $\|A\|_F^2+\|B\|_F^2$, because $\Omega$ is unitary), if $A^\ast B=U\Sigma V^\ast$ is an SVD, you are essentially maximising $\Re\operatorname{tr}(V\Sigma U^\ast\Omega)=\Re\operatorname{tr}(\Sigma (U^\ast\Omega V))$. Let $\Sigma=S\oplus0$, where $S$ is a positive diagonal submatrix of size $k$. Since all diagonal entries of the unitary matrix $U^\ast\Omega V$ have moduli $\le1$, $\Re\operatorname{tr}(\Sigma (U^\ast\Omega V))$ is maximised if and only if the first $k$ diagonal entries of $U^\ast\Omega V$ are equal to $1$, i.e. the set of all maximisers is given by $U^\ast\Omega V=I_k\oplus W$, i.e. $\Omega=U(I_k\oplus W)V^\ast$ for some unitary matrix $W$. In particular, $\Omega=UV^\ast$ is always a global maximiser of the trace (hence a minimiser of the original objective), and it is the unique one if and only if all singular values of $A^\ast B$ are nonzero.
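A minimal numerical sketch of this claim (random data, NumPy, purely illustrative): $\Omega=UV^\ast$ is not beaten by any of a batch of randomly drawn unitary matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
B = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))

# Candidate solution: M = A* B, M = U Sigma V*, Omega = U V*.
U, s, Vh = np.linalg.svd(A.conj().T @ B)
Omega = U @ Vh

def objective(W):
    return np.linalg.norm(A @ W - B, "fro")

# No randomly drawn unitary matrix should beat Omega.
best = objective(Omega)
for _ in range(1000):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
    assert objective(Q) >= best - 1e-10
print("smallest objective found:", best)
```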
E.g. when $U=V=I$ and $\Sigma=\operatorname{diag}(\sigma_1,\ldots,\sigma_{n-1},0)$, every $\Omega=\operatorname{diag}(1,\ldots,1,e^{i\theta})$ with $\theta\in\mathbb R$ gives the same (maximum possible) objective function value.
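A sketch of this non-uniqueness in the rank-deficient case, here forced by zeroing a column of $B$ (the rest of the data is random and illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((6, n)) + 1j * rng.standard_normal((6, n))
B = rng.standard_normal((6, n)) + 1j * rng.standard_normal((6, n))
B[:, -1] = 0  # forces A* B to have a zero singular value

U, s, Vh = np.linalg.svd(A.conj().T @ B)
print(s)      # last singular value is (numerically) zero

# A whole family of minimisers: only the direction paired with the zero
# singular value is free, here a single phase e^{i theta}.
for theta in np.linspace(0.0, 2 * np.pi, 5):
    D = np.diag(np.concatenate([np.ones(n - 1), [np.exp(1j * theta)]]))
    Omega_theta = U @ D @ Vh
    print(np.linalg.norm(A @ Omega_theta - B, "fro"))  # same value for every theta
```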