Question
Show that there exists a set of unitary matrices $\{U_i\}$ and a probability distribution $\{p_i\}$ such that for any $n \times n$ matrix $A$ \begin{equation} \tag{1} \sum_{i} p_i U_i A U^{\dagger}_i = \text{tr}(A) \frac{I}{n} \end{equation}
Attempts
For $n=2$, it is easy to show \begin{equation} \frac{1}{4} ( \sigma^x A \sigma^x + \sigma^y A \sigma^y + \sigma^z A \sigma^z + I A I ) = \text{tr}(A) I / 2 \end{equation} where $\sigma^{x,y,z}$ are the Pauli matrices. The idea comes from the Kraus operator-sum representation.
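A quick numerical sanity check of the $n=2$ identity (a sketch using NumPy; the variable names are mine):

```python
import numpy as np

# The identity and the three Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Twirl A over {I, sx, sy, sz} with equal weights p_i = 1/4
twirled = sum(U @ A @ U.conj().T for U in (I2, sx, sy, sz)) / 4

assert np.allclose(twirled, np.trace(A) * I2 / 2)
```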
We can then generalize to dimension $n = 2^m$, where the $U_i$ can be taken as $m$-fold tensor products of these matrices, but this does not cover arbitrary dimension.
In index notation, Eq. (1) is equivalent to \begin{equation} \sum_i p_i (U_i)_{ab} (U_i^*)_{dc} = \delta_{bc} \delta_{ad} / n \end{equation} This looks like the orthogonality relation for finite-dimensional irreducible unitary representations of a finite group; see the Peter-Weyl theorem. But again this only works when the group $G$ has an irreducible representation of dimension $n$, and in that case all the $p_i$ are equal.
I feel that the "right proof" should not rely on this additional structure.
An attempted proof of existence that doesn't actually construct the spanning set and distribution.
First, we note that the set of unitary matrices spans $\Bbb C^{n \times n}$; we could prove this nicely using polar decomposition. From there, we note that there must exist a basis of $\Bbb C^{n \times n}$ $\{U_1,U_2,\dots,U_{n^2}\}$ consisting of unitary matrices.
It follows that the vectors $\operatorname{vec}(U_1),\dots,\operatorname{vec}(U_{n^2})$ span $\Bbb C^{n^2}$.
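The spanning claim can be checked numerically (a sketch; it draws $n^2$ random unitaries, which generically form a basis of $\Bbb C^{n \times n}$, so their vectorizations have full rank):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)

def random_unitary(n, rng):
    """Random unitary via QR of a complex Gaussian matrix, with phases fixed."""
    Q, R = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    d = np.diag(R)
    return Q * (d / np.abs(d))

# Stack vec(U_i) as rows; n**2 generic unitaries span C^{n x n}, so rank is n**2
M = np.stack([random_unitary(n, rng).reshape(-1) for _ in range(n**2)])
assert np.linalg.matrix_rank(M) == n**2
```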
The argument below is incorrect
(Thus, there necessarily exist (positive) $p_k$ such that $$ \frac 1n I_{n^2} = \sum_{i} p_i \operatorname{vec}(U_i)\operatorname{vec}(U_i)^\dagger $$ We correspondingly find that these $U_i$ satisfy $\sum_{i} p_i U_iA U_i^\dagger = \frac 1n \operatorname{tr}(A) I$, as desired.)
Some clarification:
First of all, the linear span bit. Let $\langle \cdot, \cdot \rangle$ denote the Frobenius (Hilbert-Schmidt) inner product. Suppose that $A$ lies in the orthogonal complement of the span of the unitary matrices. Let $A = UP$ be a polar decomposition. Then we have $$ 0 = \langle U, A \rangle = \operatorname{trace}(U^\dagger A) = \operatorname{trace}(U^\dagger UP) = \operatorname{trace}(P) $$ but $P$ is positive semidefinite, so $\operatorname{trace}(P) = 0$ implies that $P = 0$. Thus, $A$ must be zero.
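The key computation in this argument, $\operatorname{trace}(U^\dagger A) = \operatorname{trace}(P)$, is easy to check numerically (a sketch; the polar decomposition is obtained from the SVD, since NumPy has no direct polar routine):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Polar decomposition A = U P via SVD: A = W S V^dag  =>  U = W V^dag, P = V S V^dag
W, S, Vh = np.linalg.svd(A)
U = W @ Vh
P = Vh.conj().T @ np.diag(S) @ Vh

assert np.allclose(U.conj().T @ U, np.eye(n))   # U is unitary
assert np.allclose(U @ P, A)                    # A = U P
# The pairing used above: <U, A> = trace(U^dag A) = trace(P) = sum of singular values
assert np.isclose(np.trace(U.conj().T @ A), S.sum())
```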
So, the span of the unitary matrices is all $\Bbb C^{n \times n}$.
Another observation:
Let $\mathcal C_U$ denote the convex cone generated by the set $\{uu^* : u = \operatorname{vec}(U) \text{ for some unitary } U \}$. Showing that $\sum_{i} p_i \operatorname{vec}(U_i)\operatorname{vec}(U_i)^\dagger = \frac 1n I$ can be achieved with non-negative coefficients $p_i$ amounts to showing that $I \in \mathcal C_U$ (the cone is closed under positive scaling, so the factor $\frac 1n$ is immaterial).
One orthogonal basis for $\Bbb C^{n \times n}$ consisting of unitary matrices is as follows: let $$ X = \pmatrix{0&&&&1\\1&0\\&1&0\\&&\ddots\\&&&1}, Z = \pmatrix{1\\ & \omega \\ && \ddots \\ &&& \omega^{n-1}} $$ where $\omega = e^{2\pi i/n}$. Then the matrices $\{Z^j X^k : 0 \leq j,k \leq n-1\}$ (the clock and shift matrices) form our orthogonal basis. In fact, the uniform distribution $p_{jk} = 1/n^2$ over this basis satisfies Eq. (1), since $X$ and $Z$ generate an irreducible group action on $\Bbb C^n$.
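A numerical check of both claims for a general $n$ (a sketch; it verifies the Hilbert-Schmidt orthogonality $\operatorname{tr}(U_a^\dagger U_b) = n\,\delta_{ab}$ and the twirl identity Eq. (1) with uniform weights):

```python
import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)
X = np.roll(np.eye(n), 1, axis=0)    # cyclic shift: X e_k = e_{(k+1) mod n}
Z = np.diag(omega ** np.arange(n))   # clock matrix

U = [np.linalg.matrix_power(Z, j) @ np.linalg.matrix_power(X, k)
     for j in range(n) for k in range(n)]

# Hilbert-Schmidt orthogonality: tr(U_a^dag U_b) = n * delta_{ab}
G = np.array([[np.trace(Ua.conj().T @ Ub) for Ub in U] for Ua in U])
assert np.allclose(G, n * np.eye(n**2))

# Uniform twirl over these n**2 unitaries realizes Eq. (1)
rng = np.random.default_rng(2)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
twirled = sum(V @ A @ V.conj().T for V in U) / n**2
assert np.allclose(twirled, np.trace(A) * np.eye(n) / n)
```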