Given a positive $n$-dimensional vector $\mathbf{z}$ (all of its elements are positive), my goal is to project it onto the unit hypercube $[0,1]^n$. However, my projection is defined with respect to the (generalized) KL divergence instead of the Euclidean distance. That is, define
$f(\mathbf{x}) = D_{KL}(\mathbf{x},\mathbf{z}) = \sum_{i=1}^n \left( x_i \log (x_i/z_i) - x_i \right)$
This is a convex function of $\mathbf{x}$, so the following projection problem has a well-defined solution:
$\mathbf{x}^* = \operatorname{argmin}_{\mathbf{x} \in [0,1]^n} f(\mathbf{x})$
Solving this problem analytically, I arrive at the solution
$x_i^* = \min(1, z_i)$, where $x_i^*$ denotes the $i$-th element of $\mathbf{x}^*$.
This surprised me, because the projection with respect to the Euclidean distance ($\min \|\mathbf{x}-\mathbf{z}\|_2$) leads to exactly the same solution.
Did I make a mistake in my derivation, so that the answer is not $x_i^* = \min(1, z_i)$? Or was this obvious from the beginning, or is there a deeper concept here that I missed? (Or is it just a coincidence?)
I think you are right. Here is my explanation.
Both the objective function and the constraints are separable, so you can reason coordinate-wise in terms of a single pair $x, z \in \mathbb{R}_{>0}$. For fixed $z$, $D(x,z)$ is strictly convex in $x$, with
$$D'(x,z) = \log(x) - \log(z),$$
so its unconstrained minimum is attained at $x = z$. It follows that either
$z \in (0,1]$, and thus $x^\star = z$ because $z$ is feasible, or
$z > 1$, in which case $D(\cdot,z)$ is decreasing on $(0,z)$ by strict convexity, so the constrained minimum is attained at the boundary point of $[0,1]$ closest to $z$, i.e. $x^\star = 1$.
To summarize, $x^\star = \min(1, z)$, and applying this to each coordinate recovers your solution. Note that the same argument applies verbatim to the Euclidean distance: for any separable objective that is strictly convex in each coordinate and minimized at $x_i = z_i$, projecting onto a box amounts to clipping each coordinate, so the agreement is not a coincidence.
Of course, you can repeat the reasoning using Lagrange multipliers, which is perhaps more rigorous; it serves as an alternative derivation and a double check.
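If you want a numerical double check, here is a quick sketch (assuming NumPy/SciPy; `kl_term` and `kl_project` are just illustrative names). It minimizes each coordinate's KL term over $[0,1]$ with `scipy.optimize.minimize_scalar` and compares the result against the closed form $\min(1, z_i)$ and against Euclidean clipping:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_term(x, z):
    # Per-coordinate generalized KL term: x*log(x/z) - x
    return x * np.log(x / z) - x

def kl_project(z, eps=1e-12):
    # Minimize each coordinate's KL term over [eps, 1] numerically;
    # eps keeps log() finite (the limit as x -> 0+ is 0 anyway).
    return np.array([
        minimize_scalar(kl_term, bounds=(eps, 1.0), args=(zi,),
                        method='bounded').x
        for zi in z
    ])

z = np.array([0.3, 0.9, 1.5, 4.0])
print(kl_project(z))          # numerical KL projection
print(np.minimum(1.0, z))     # closed-form min(1, z_i)
print(np.clip(z, 0.0, 1.0))   # Euclidean projection of a positive z
```

All three agree up to the solver's tolerance, consistent with the coordinate-wise argument above.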