Computationally efficient method for determining the complex norm of a vector times a diagonal matrix


I am trying to reproduce the algorithm from the paper: Kunis, S., & Potts, D. (2007). Stability results for scattered data interpolation by trigonometric polynomials. SIAM Journal on Scientific Computing, 29(4), 1403-1419, in which a series of trigonometric polynomials is used to reconstruct an approximation of irregularly sampled data using non-uniform discrete Fourier transforms.

The algorithm is based on the Conjugate Gradient method (p. 5, Algorithm 1: CGNE), and uses the following relationship to estimate the step size at each iteration:

$$ \alpha_l = \frac{r_l^Hr_l}{p_l^H\hat{W}p_l} $$

where $r_l$ is the $M \times 1$ vector of residuals in the time domain (in the 1-dimensional case) at iteration $l$, $\alpha_l$ is the calculated step size, $p_l$ is the $N \times 1$ predicted step direction for the frequencies at iteration $l$, and $\hat{W}$ is the diagonal matrix of weights, calculated from a priori knowledge of the smooth decay of the Fourier coefficients via a kernel such as the Fejér kernel. $H$ denotes the Hermitian transpose.

Knowing that for some complex vector $x$, the square of the complex norm $x^Hx$ can be calculated efficiently as the sum of $|x_i|^2$ (e.g. `np.sum(np.abs(x) ** 2)`), since each element of $x$ is multiplied by its own conjugate - what would a similarly efficient way of calculating $p_l^H\hat{W}p_l$ look like?
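For reference, here is a minimal NumPy sketch of the efficient squared-norm computation described above; the array `x` and its size are illustrative, not from the paper:

```python
import numpy as np

# Illustrative complex vector (any M x 1 complex vector would do).
rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# Direct computation of x^H x: np.vdot conjugates its first argument.
direct = np.vdot(x, x)

# Efficient equivalent: sum of |x_i|^2, with no explicit inner product
# over conjugated entries.
efficient = np.sum(np.abs(x) ** 2)

assert np.allclose(direct, efficient)
```

The result is real and non-negative, since each term pairs an element with its own conjugate.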