For regularization purposes in a convex optimization problem, I aim to compute a discretized numerical version of the $\mathcal{H}^{-k}$ norm, in a similar fashion to what is done in the following article.
The following propositions are extracted from the article:
Given $f$, we can compute its $\mathcal{H}^{-k}$ norm as: $$\|f\|_{\mathcal{H}^{-k}} = \|(1+|\xi|^2)^{-k/2}|\hat{f}(\xi)|\|_{L^2}$$
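For reference, here is how I evaluate this Fourier definition numerically — a minimal sketch assuming the sampled function can be treated as periodic on $(0,1)$ (the FFT imposes that), with the function name and normalization being my own choices:

```python
import numpy as np

def h_neg_k_norm_fourier(f, k):
    """Approximate ||f||_{H^{-k}} from the Fourier-side definition
    (1 + |xi|^2)^{-k/2} |f_hat(xi)| in L^2, treating f as periodic."""
    N = len(f)
    h = 1.0 / N                              # periodic grid spacing
    f_hat = np.fft.fft(f) * h                # approximate Fourier coefficients
    xi = 2 * np.pi * np.fft.fftfreq(N, d=h)  # angular frequencies
    weights = (1 + xi**2) ** (-k / 2)
    # discrete Parseval: L^2 norm of the weighted coefficients
    return np.sqrt(np.sum((weights * np.abs(f_hat)) ** 2))
```

For the constant function $f \equiv 1$ this returns $1$ for any $k$, since only the zero frequency contributes.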
Another definition is: $$\|f\|_{\mathcal{H}^{-k}} = \|u\|_{\mathcal H^{k}}$$ where $\mathcal{L}u = f$, with $\mathcal{L} = \sum_{r=0}^k (-1)^r\Delta^r$.
From that, \begin{align} \|f\|_{\mathcal{H}^{-k}} & = \|u\|_{\mathcal H^{k}} \\ & = \|\mathcal{L}^{-1}f\|_{\mathcal H^{k}} \end{align}
In a discretized space of functions on $(0,1)$ with $N$ points and step $h = 1/(N-1)$, we can discretize our functions as $f_h = (f(kh))_{0 \le k \le N-1} \in \mathbb{R}^N$ and the Laplacian operator as the tridiagonal matrix: $$\Delta_h = \frac{1}{h^2}\begin{pmatrix} -1 & 1 & 0 & \cdots & 0 \\ 1 & -2 & 1 & \ddots & \vdots \\ 0 & 1 & -2 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & 1 \\ 0 & \cdots & 0 & 1 & -1 \\ \end{pmatrix}$$
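As a sanity check on this discretization (my own check, not from the article), the interior rows of $\Delta_h$ should reproduce $f''$ for a smooth sample to second order in $h$:

```python
import numpy as np

N = 201
h = 1.0 / (N - 1)
x = np.linspace(0.0, 1.0, N)

# Tridiagonal Delta_h exactly as written above
Delta_h = (np.diag(-2.0 * np.ones(N))
           + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / h**2
Delta_h[0, 0] = Delta_h[-1, -1] = -1.0 / h**2  # the -1 corner entries

# Interior rows applied to sin(pi x) should approximate -pi^2 sin(pi x)
f = np.sin(np.pi * x)
err = np.abs((Delta_h @ f)[1:-1] + np.pi**2 * f[1:-1]).max()
```

The matrix is symmetric, and `err` is $O(h^2)$ as expected for the centered second difference.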
The norm for the cases $k = 1$ and $2$ is given explicitly (I am not sure using which properties, since I cannot find the details in the article) as: $$\|f_h\|_{\mathcal{H}^{-k}} = \sqrt{h}\|(\mathcal{L}^{-1})^{1/2} f_h\|_{2}$$ (with a slight abuse of notation, since the $\mathcal{H}^{-k}$ norm is not properly defined for vectors).
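This is how I implement that formula, as I understand it: build $\mathcal{L} = \sum_{r=0}^k (-1)^r \Delta_h^r$ from the tridiagonal matrix, form $(\mathcal{L}^{-1})^{1/2}$ through a symmetric eigendecomposition (relying on $\mathcal{L}$ being symmetric positive definite, which holds for this $\Delta_h$), and evaluate $\sqrt{h}\|(\mathcal{L}^{-1})^{1/2} f_h\|_2$. The function name is mine:

```python
import numpy as np

def h_neg_k_norm_discrete(f_h, k):
    """Evaluate sqrt(h) * ||L^{-1/2} f_h||_2 with L = sum_{r=0}^k (-1)^r Delta_h^r."""
    N = len(f_h)
    h = 1.0 / (N - 1)
    # Discrete Laplacian Delta_h (the tridiagonal matrix above)
    Delta = (np.diag(-2.0 * np.ones(N))
             + np.diag(np.ones(N - 1), 1)
             + np.diag(np.ones(N - 1), -1)) / h**2
    Delta[0, 0] = Delta[-1, -1] = -1.0 / h**2
    # L = sum_{r=0}^{k} (-1)^r Delta^r, symmetric positive definite
    L = sum((-1)**r * np.linalg.matrix_power(Delta, r) for r in range(k + 1))
    w, V = np.linalg.eigh(L)                 # L = V diag(w) V^T, w > 0
    L_inv_sqrt = V @ np.diag(w**-0.5) @ V.T  # (L^{-1})^{1/2}
    return np.sqrt(h) * np.linalg.norm(L_inv_sqrt @ f_h)
```

For $f_h = \mathbf{1}$ (so $\Delta_h f_h = 0$ and $\mathcal{L} f_h = f_h$), this gives $\sqrt{h}\sqrt{N} = \sqrt{N/(N-1)} \approx 1$ for any $k$, consistent with the Fourier value for the constant function.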
My question is how this derivation is done (I suspect the discretisation is only needed to invert the operator, not for the derivation itself). Is there a relation like $\|\mathcal{L}^{-1}f\|_{\mathcal H^{k}} = \|(\mathcal{L}^{-1})^{1/2}f\|_{L^2}$ that can be obtained?
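For what it's worth, in finite dimensions the suspected identity does seem to check out numerically when the discrete $\mathcal{H}^k$ norm is taken as the norm induced by $\mathcal{L}$, i.e. $\|u\|_{\mathcal H^k}^2 = u^T \mathcal{L} u$ (that definition is my assumption, not something stated in the article):

```python
import numpy as np

# Sanity check of ||L^{-1} f||_{H^k} ?= ||L^{-1/2} f||_2 for an
# arbitrary symmetric positive definite L, with the (assumed)
# discrete H^k norm ||u||^2 = u^T L u.
rng = np.random.default_rng(0)
N = 50
A = rng.standard_normal((N, N))
L = A @ A.T + N * np.eye(N)        # arbitrary SPD stand-in for the operator
f = rng.standard_normal(N)

w, V = np.linalg.eigh(L)           # L = V diag(w) V^T
L_inv = V @ np.diag(1 / w) @ V.T
L_inv_sqrt = V @ np.diag(w**-0.5) @ V.T

u = L_inv @ f                      # solve L u = f
lhs = np.sqrt(u @ (L @ u))         # ||u||_{H^k} with the assumed norm
rhs = np.linalg.norm(L_inv_sqrt @ f)
# lhs and rhs agree to rounding error, since u^T L u = f^T L^{-1} f
```

So the relation looks like a plain consequence of $u^T \mathcal{L} u = f^T \mathcal{L}^{-1} f$ when $\mathcal{L}$ is symmetric positive definite, but I would like confirmation that this is the intended argument.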
Also, are there requirements on the functions for this relation to hold? I am struggling to get matching numerical results between the Fourier method and the discretisation method above.
Thanks.