Consider a random variable $X$ that is uniform on $[0,1]^2$.
I want to send its value to a receiver, using $2k$ bits, via an encoding $Y$.
The receiver knows the prior distribution and, upon receiving $Y$, estimates the original value as $\widehat{X}$.
If we aim to minimize the expected error $\|X-\widehat{X}\|$ (say, for the $L_2$ norm), what would be the right encoding and decoding schemes?
A simple solution would be to encode each coordinate of $X$ using $k$ bits as $Y=(X_1\cdot 2^k, X_2\cdot 2^k)$ (rounding deterministically or randomly) and estimating $\widehat{X}=(Y_1/2^k, Y_2/2^k)$.
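To make the baseline concrete, here is a minimal Monte Carlo sketch of the per-coordinate scheme (assumptions: $k=4$ bits per coordinate, deterministic floor rounding, and decoding to the cell midpoint $(Y+\tfrac12)/2^k$, which is the $L_2$-optimal estimate given $Y$ under the uniform prior):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4                       # bits per coordinate (assumed for the demo)
n = 200_000                 # Monte Carlo sample size

X = rng.random((n, 2))      # uniform on [0,1]^2

# Encode: floor each coordinate to k bits (integer values in 0 .. 2^k - 1).
Y = np.floor(X * 2**k).astype(int)

# Decode: midpoint of the quantization cell.
Xhat = (Y + 0.5) / 2**k

# Empirical expected L2 error; theory gives
# (sqrt(2) + ln(1 + sqrt(2))) / 6 / 2^k ≈ 0.3826 / 2^k.
err = np.linalg.norm(X - Xhat, axis=1).mean()
print(err)
```

The $0.3826/2^k$ constant is just the mean norm of a uniform error over a square cell of side $2^{-k}$; it is a useful yardstick when comparing against other coordinate systems.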
However, some simulations suggest that by switching to a polar representation (i.e., quantizing $(r,\theta)$ of $X$ and decoding it accordingly) we can get a lower expected error (in terms of both the $L_1$ and $L_2$ norms).
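For reference, one plausible version of the polar scheme looks like the sketch below. It assumes an even $k$/$k$ bit split between $r\in[0,\sqrt{2}]$ and $\theta\in[0,\pi/2]$, midpoint decoding, and clipping the decoded point back into the square; other bit allocations or nonuniform quantizers for $r$ could behave differently, so this is only one point in the design space, not the scheme whose simulations are cited above:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4                       # bits per polar coordinate (assumed)
n = 200_000

X = rng.random((n, 2))
r = np.hypot(X[:, 0], X[:, 1])      # r in [0, sqrt(2))
t = np.arctan2(X[:, 1], X[:, 0])    # theta in (0, pi/2)

# Encode: k bits each for r and theta, scaled to their ranges.
Yr = np.minimum(np.floor(r / np.sqrt(2) * 2**k).astype(int), 2**k - 1)
Yt = np.minimum(np.floor(t / (np.pi / 2) * 2**k).astype(int), 2**k - 1)

# Decode: cell midpoints, converted back to Cartesian, clipped to the square.
rhat = (Yr + 0.5) / 2**k * np.sqrt(2)
that = (Yt + 0.5) / 2**k * (np.pi / 2)
Xhat = np.clip(np.stack([rhat * np.cos(that), rhat * np.sin(that)], axis=1), 0, 1)

err_polar = np.linalg.norm(X - Xhat, axis=1).mean()
print(err_polar)
```

Running it alongside the Cartesian baseline with the same total budget of $2k$ bits makes the comparison in the question reproducible.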
What is the right generalization to $[0,1]^d$ variables for larger $d$?