This question asks about
\begin{equation} \begin{aligned} &\max_{\mathbf x \in \mathbb C^n} && \|\mathbf B\mathbf x\|_{\infty} \\ &\ \ \text{s.t.} && \|\mathbf x\|_2\leq 1 \end{aligned} \end{equation}
and I understand the answer: it is the largest $2$-norm among the rows of $\mathbf{B}$. Is there a better way to compute it than exhaustively computing the $2$-norm of every row? I am especially interested in the case where $\mathbf{B}$ is too large to form explicitly and $\mathbf{B}\mathbf{x}$ is implemented as a function. (It can also be assumed that $\mathbf{B}^\mathrm{H}\mathbf{y}$ is computable with a function.)
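To make the setting concrete, here is a minimal sketch. The callables `matvec` and `rmatvec` are hypothetical stand-ins for the black-box products $\mathbf{Bx}$ and $\mathbf{B}^\mathrm{H}\mathbf{y}$; the small explicit $\mathbf B$ exists only so the sketch is self-contained. It shows the closed-form answer (the largest row $2$-norm, attained at the normalized conjugate of that row) and the naive matrix-free computation, which needs one `rmatvec` per row:

```python
import numpy as np

# Hypothetical black-box access to B (the explicit B is only for this demo).
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
matvec = lambda x: B @ x            # x -> B x
rmatvec = lambda y: B.conj().T @ y  # y -> B^H y
m, n = B.shape

# Closed-form optimum: max ||Bx||_inf over ||x||_2 <= 1 is the largest row
# 2-norm, attained at x = conj(b_i) / ||b_i||_2 (Cauchy-Schwarz is tight there).
row_norms = np.linalg.norm(B, axis=1)
i = np.argmax(row_norms)
x_opt = B[i].conj() / row_norms[i]

# Matrix-free row norms: row i's 2-norm equals ||B^H e_i||_2, so all row
# norms can be recovered with m applications of rmatvec, never forming B.
e = np.eye(m)
row_norms_mf = np.array([np.linalg.norm(rmatvec(e[:, j])) for j in range(m)])
```

This is exactly the "exhaustive" approach the question asks to beat: it costs $m$ adjoint applications, i.e. as much work as reading all of $\mathbf B$.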
I believe the difficulty is that the objective is convex but not differentiable (it is piecewise linear, with kinks wherever two rows' responses tie), so something along the lines of projected gradient ascent is not guaranteed to work. I know that proximal gradient algorithms handle nonsmooth terms of this kind when minimizing, but it is not clear to me how to apply them here, where we are maximizing a convex function.
Better in what sense? From a computational-complexity perspective you cannot do better: computing every row's $2$-norm is essentially no more work than reading every element of $B$, which any exact method must do anyway (unless you are talking about massive sizes and/or randomized methods).
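In the randomized regime the comment alludes to, one standard idea (a sketch under stated assumptions, not a recommendation from the original post) is Gaussian probing: for a standard Gaussian vector $g$, $\mathbb{E}\,|b_i^\mathrm{H} g|^2 = \|b_i\|_2^2$, so averaging $|Bg|^2$ entrywise over $k$ random probes estimates every row norm with $k$ calls to `matvec` instead of $m$ calls to `rmatvec`. The names `matvec`, `k`, etc. are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 200, 100, 400           # k probes; relative error shrinks like 1/sqrt(k)
B = rng.standard_normal((m, n))
matvec = lambda X: B @ X          # stand-in for the black-box product

# Scale columns by 1/sqrt(k) so that row-wise sums of |BG|^2 estimate ||b_i||_2^2.
G = rng.standard_normal((n, k)) / np.sqrt(k)
est = np.sqrt(np.sum(np.abs(matvec(G)) ** 2, axis=1))  # estimated row norms

true = np.linalg.norm(B, axis=1)
max_rel_err = np.max(np.abs(est - true) / true)        # Monte-Carlo error
```

This trades exactness for matvec count: with $k \ll m$ probes the estimates identify the dominant row(s) with high probability, after which the winning row norm can be verified exactly with a single `rmatvec`.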