Let $\gamma > 0$ and let $x, w, z \in \mathbb R^n$. Compute the following function:
$$ f(x) = \sup_{z}\left[ \inf_{w} \left( \| w\|_0 + \frac{\gamma}{2}\| w- z\|^2_2 \right) - \frac{\gamma}{2}\| x - z\|^2_2 \right] $$ where $\| \cdot \|_0$ is the $\ell_0$ "norm", i.e., the number of nonzero entries of the input vector.
So far, my idea is to compute the result for every component $x_i$ of the vector $x$, so that I only have to analyze the 1-D case. I have the following. Defining: $$ g_i(z_i) = \inf_{w_i} \left( \| w_i\|_0 + \frac{\gamma}{2}|w_i-z_i|^2 \right) $$ and noting that $\| w_i\|_0 = 0 \iff w_i = 0$ and $\| w_i\|_0 = 1 \iff w_i \neq 0$, I get $$ g_i(z_i)= \begin{cases} 1, & |z_i| \geq \sqrt{\frac{2}{\gamma}} \\ \frac{\gamma}{2}|z_i|^2, & |z_i| < \sqrt{\frac{2}{\gamma}} \end{cases} $$ and from here it should be easy to see that $$ f_i(x_i)= \begin{cases} 1, & |x_i| \leq \sqrt{\frac{2}{\gamma}} \\ \frac{\gamma}{2}|x_i|^2, & |x_i| > \sqrt{\frac{2}{\gamma}} \end{cases} $$ So the result should be $f(x) = [f_1(x_1), f_2(x_2), \dots, f_n(x_n)]^T$
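As a quick numerical sanity check (just a rough grid-search sketch; the value of $\gamma$, the grid, and the tolerance are arbitrary choices of mine), my closed form for $g_i$ seems to match a brute-force minimisation over $w_i$:

```python
import numpy as np

gamma = 3.0                      # arbitrary test value
t = np.sqrt(2.0 / gamma)         # the threshold sqrt(2/gamma)

def g_closed(z):
    # claimed closed form: min(1, (gamma/2) z^2)
    return min(1.0, 0.5 * gamma * z**2)

def g_brute(z):
    # brute-force infimum over w on a fine grid; ||w||_0 is 0 only at w = 0
    w = np.linspace(-10.0, 10.0, 200001)
    return ((w != 0).astype(float) + 0.5 * gamma * (w - z)**2).min()

for z in [-2.0, -t, -0.3, 0.0, 0.5, t, 4.0]:
    assert abs(g_closed(z) - g_brute(z)) < 1e-4
```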
Is this correct?
Not completely, but you are on the right track.
The $1$-D analysis and the calculation of $g_i(z_i)$ are correct. But your $f_i$ results are not, mostly because you give no justification for them: you seem to assume that the supremum is achieved at $z = x$, so that the penalty $\frac{\gamma}{2}\| x - z\|^2_2$ vanishes.
Finally, note that $f(x)$ is a real-valued function, so the result has to be a single real number, not a vector!
Guideline for a solution:
The key to solving the general $n$-dimensional problem is that it can indeed be easily turned into $n$ 1-dimensional problems that are easy to solve! For example, the term $\left( \| w\|_0 + \frac{\gamma}{2}\| w- z\|^2_2 \right)$, over which we have to take the infimum, separates as
$$\| w\|_0 + \frac{\gamma}{2}\| w- z\|^2_2 = \sum_{i=1}^n\|w_i\|_0 + \frac{\gamma}{2} \sum_{i=1}^n(w_i-z_i)^2 = \sum_{i=1}^n \left(\|w_i\|_0 + \frac{\gamma}{2}(w_i-z_i)^2\right),$$
a sum of $n$ terms, each of which depends on a different component of the vector $w$, so each summand can be minimized independently of the others. That is exactly the work you did, and we get
$$\inf_w \left(\| w\|_0 + \frac{\gamma}{2}\| w- z\|^2_2\right) = \sum_{i=1}^n g_i(z_i).$$
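A quick numerical illustration of this separability (only a sketch for $n = 2$ on a coarse grid, with an arbitrary $\gamma$ and test point, not part of the argument): the joint infimum over $w \in \mathbb R^2$ matches $\sum_i g_i(z_i)$.

```python
import numpy as np

gamma = 3.0
z = np.array([0.4, 1.5])

def g(zi):
    # the 1-D infimum computed above: min(1, (gamma/2) z_i^2)
    return np.minimum(1.0, 0.5 * gamma * zi**2)

# joint brute-force infimum over w in R^2 on a grid
ax = np.linspace(-5.0, 5.0, 2001)
W1, W2 = np.meshgrid(ax, ax)
joint = (W1 != 0) + (W2 != 0) + 0.5 * gamma * ((W1 - z[0])**2 + (W2 - z[1])**2)

print(joint.min())   # ~1.24
print(g(z).sum())    # 0.24 + 1.0 = 1.24
```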
Now we continue in the same way to simplify the supremum! Again the formula shows that finding the $n$-dimensional supremum can be split into finding $n$ 1-dimensional suprema:
$$\begin{align} f(x) &= \sup_{z}\left[ \inf_{w} \left( \| w\|_0 + \frac{\gamma}{2}\| w- z\|^2_2 \right) - \frac{\gamma}{2}\| x - z\|^2_2 \right] = \sup_z\left[\sum_{i=1}^n g_i(z_i) - \frac{\gamma}{2}\| x - z\|^2_2 \right] \\ &= \sup_z\left[\sum_{i=1}^n g_i(z_i) - \frac{\gamma}{2}\sum_{i=1}^n (x_i-z_i)^2 \right] = \sup_z\sum_{i=1}^n \left[g_i(z_i) - \frac{\gamma}{2}(x_i-z_i)^2\right]. \end{align}$$
Again, each summand under the supremum depends on only one component of the vector $z$, so each summand can be maximised independently.
EDIT/ADDED:
If we define
$$f_i(x_i)=\sup_{z_i} \left[g_i(z_i) - \frac{\gamma}{2}(x_i-z_i)^2\right],$$
we get $f(x) = \sum_{i=1}^n f_i(x_i)$, and we now have $n$ 1-dimensional optimisation problems to solve.
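This also shows concretely why the choice $z = x$ from the question undershoots the supremum for small $|x_i|$. A rough 1-D grid experiment (only a sketch; $\gamma$ and $x_i$ are arbitrary test values):

```python
import numpy as np

gamma = 3.0
x_i = 0.3                          # below the threshold sqrt(2/gamma) ~ 0.816

def g(zi):
    # 1-D infimum from before: min(1, (gamma/2) z_i^2)
    return np.minimum(1.0, 0.5 * gamma * zi**2)

z = np.linspace(-5.0, 5.0, 100001)
objective = g(z) - 0.5 * gamma * (x_i - z)**2

print(objective.max())   # f_i(x_i), the true 1-D supremum (~0.600)
print(g(x_i))            # value at z_i = x_i (~0.135), strictly smaller
```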
We have $g_i(z_i) \le 1$ and $-\frac\gamma2(x_i-z_i)^2 \le 0$, which implies $g_i(z_i) - \frac\gamma2(x_i-z_i)^2 \le 1$, so $f_i(x_i) \le 1$. For $|x_i| \ge \sqrt{\frac2\gamma}$ we actually have equality: choosing $z_i=x_i$ makes both starting inequalities hold with equality!
For $|x_i| < \sqrt{\frac2\gamma}$, the first insight is that we can restrict our search for the maximising $z_i$ to the interval $[-\sqrt{\frac2\gamma}, \sqrt{\frac2\gamma}]$.
That's because for $z_i > \sqrt{\frac2\gamma}$ we have $g_i(z_i)=g_i\left(\sqrt{\frac2\gamma}\right)=1$ and $-\frac\gamma2(x_i-z_i)^2 < -\frac\gamma2\left(x_i-\sqrt{\frac2\gamma}\right)^2$. The latter holds because $-\frac\gamma2(x_i-z_i)^2$, considered as a function of $z_i$ with $x_i$ a fixed parameter, is a downward-opening parabola with its maximum at $(x_i,0)$, so it is decreasing for $z_i > x_i$ and hence in particular for $z_i > \sqrt{\frac2\gamma}$ (recall $x_i < \sqrt{\frac2\gamma}$ in this case). For $z_i < -\sqrt{\frac2\gamma}$ the argument works out the same way.
Since we now know that $|z_i| \le \sqrt{\frac2\gamma}$, we can substitute the quadratic branch of the formula for $g_i(z_i)$ into the term we need to maximise (at $|z_i| = \sqrt{\frac2\gamma}$ both branches give the value $1$, so the quadratic branch is valid on the whole closed interval):
$$g_i(z_i) - \frac\gamma2(x_i-z_i)^2 =\frac\gamma2z_i^2 - \frac\gamma2(x_i-z_i)^2 = -\frac\gamma2x_i^2 + \gamma x_i z_i.$$
This is a linear function of $z_i$, with coefficient $\gamma x_i$ in front of $z_i$. So its maximal value is attained at the largest admissible $z_i$, namely $z_i=\sqrt{\frac2\gamma}$, if $x_i > 0$; at the smallest admissible $z_i$, namely $z_i=-\sqrt{\frac2\gamma}$, if $x_i < 0$; and for $x_i=0$ any value of $z_i$ works.
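If desired, the simplification above is easy to confirm symbolically (a throwaway check, not part of the argument):

```python
import sympy as sp

x, z = sp.symbols('x z', real=True)
gamma = sp.Symbol('gamma', positive=True)

# (gamma/2) z^2 - (gamma/2)(x - z)^2, the term being maximised
expr = gamma/2 * z**2 - gamma/2 * (x - z)**2
print(sp.expand(expr))   # -gamma*x**2/2 + gamma*x*z, i.e. linear in z with slope gamma*x
```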
So we get that for $|x_i| < \sqrt{\frac2\gamma}$ we have
$$ f_i(x_i) = \begin{cases} -\frac\gamma2 x_i^2+\gamma x_i\sqrt{\frac2\gamma}, & \text{if } x_i \ge 0, \\ -\frac\gamma2 x_i^2-\gamma x_i\sqrt{\frac2\gamma}, & \text{if } x_i < 0, \end{cases} $$
and using the absolute value of $x_i$ we can describe this in one formula: $-\frac\gamma2 x_i^2+\gamma |x_i|\sqrt{\frac2\gamma}$.
So we can combine the cases $|x_i| < \sqrt{\frac2\gamma}$ and $|x_i| \ge \sqrt{\frac2\gamma}$ to get our final formula for $f_i(x_i)$:
$$ f_i(x_i) = \begin{cases} 1, & \text{if } |x_i| \ge \sqrt{\frac2\gamma}, \\ -\frac\gamma2 x_i^2+\gamma |x_i|\sqrt{\frac2\gamma}, & \text{if } |x_i| < \sqrt{\frac2\gamma}. \end{cases} $$
This solves the problem, using the previously established
$$f(x)=\sum_{i=1}^n f_i(x_i).$$
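Finally, as an overall sanity check, the closed form can be compared against a brute-force evaluation of the sup-inf (again only a sketch; $\gamma$, the grids, and the test vector are arbitrary choices):

```python
import numpy as np

gamma = 3.0
t = np.sqrt(2.0 / gamma)

def f_closed(x):
    # sum of the piecewise 1-D formulas f_i derived above
    a = np.abs(np.asarray(x, dtype=float))
    return np.where(a >= t, 1.0, -0.5 * gamma * a**2 + gamma * a * t).sum()

def f_brute(x):
    # inner infimum g_i on a grid, then the outer supremum component by component
    z = np.linspace(-10.0, 10.0, 100001)
    g = np.minimum(1.0, 0.5 * gamma * z**2)
    return sum((g - 0.5 * gamma * (xi - z)**2).max() for xi in x)

x = np.array([0.0, 0.3, -0.8, 1.2, -5.0])
print(f_closed(x), f_brute(x))   # the two values agree up to grid resolution
```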