How to numerically approximate multi-dimensional Hilbert transform?


The one-dimensional Hilbert transform $\mathcal{H}f(x) = \frac{1}{\pi}\, \mathrm{p.v.} \int_{\mathbb{R}}\frac{f(y)}{x-y}\,dy$ can be efficiently approximated by sinc methods if $f$ belongs to the Wiener space of entire functions of exponential type, namely $\mathcal{H}f(x) \approx \sum_{k=-\infty}^{\infty}f(kh)\frac{1-\cos(\pi(x-kh)/h)}{\pi(x-kh)/h}$. Is there a similarly efficient approximation for the multidimensional Hilbert transform, say the two-dimensional transform defined as $\mathcal{H}_{xy}(f(x,y))(u,v) = \frac{1}{\pi^2}\, \mathrm{p.v.} \iint_{\mathbb{R}^2}\frac{f(x,y)}{(u-x)(v-y)}\,dx\,dy$? I don't know whether standard numerical methods for Riemann integrals can be applied to principal value integrals.
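For concreteness, the one-dimensional sinc formula above can be sketched numerically. The function name, step size $h$, and truncation $K$ below are illustrative choices of mine; the sanity check uses the known pair $f(x)=\sin x/x$, $\mathcal{H}f(x)=(1-\cos x)/x$:

```python
import numpy as np

def hilbert_sinc_1d(f, x, h=0.5, K=2000):
    """Truncated sinc approximation (illustrative sketch):
    (Hf)(x) ~ sum_{|k|<=K} f(kh) (1 - cos(pi(x - kh)/h)) / (pi(x - kh)/h)."""
    k = np.arange(-K, K + 1)
    t = (np.asarray(x, dtype=float)[..., None] - k * h) * (np.pi / h)
    # (1 - cos t)/t has a removable singularity at t = 0 (value 0)
    kernel = (1.0 - np.cos(t)) / np.where(t == 0.0, 1.0, t)
    return np.sum(f(k * h) * kernel, axis=-1)

# Known pair: f(x) = sin(x)/x  has  (Hf)(x) = (1 - cos x)/x.
f = lambda x: np.sinc(x / np.pi)        # np.sinc(y) = sin(pi y)/(pi y)
x = np.array([0.7, 1.3, 2.9])
exact = (1.0 - np.cos(x)) / x
approx = hilbert_sinc_1d(f, x)
```

Since $\sin x/x$ is entire of exponential type $1 \leq \pi/h$, the dominant error here is the truncation of the series, which decays like $1/K$ because $f$ decays only like $1/|x|$.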

On BEST ANSWER

Tensor products of functions

If $f_i$, $i\in\{1,2\}$ are functions of one variable, we define their tensor product to be the function of two variables
$$ (f_1\otimes f_2)(x_1,x_2)=f_1(x_1)f_2(x_2) $$ Don't be afraid of the word tensor; nothing difficult at all is going on here. You might even already know this operation from probability theory, where $f_1\otimes f_2$ would be the joint probability density function of two independent random variables with probability density functions $f_1$ and $f_2$.
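In discrete terms, sampling a tensor product on a grid is just an outer product of the two one-dimensional sample vectors. A small sketch (the example functions and grids are arbitrary choices of mine):

```python
import numpy as np

# Sampling f1 (x) f2 on a grid gives the outer product of the
# one-dimensional sample vectors (example functions chosen arbitrarily).
f1 = lambda x: np.exp(-x**2)
f2 = lambda x: 1.0 / (1.0 + x**2)
x1 = np.linspace(-1.0, 1.0, 5)
x2 = np.linspace(-2.0, 2.0, 7)
table = np.outer(f1(x1), f2(x2))   # table[i, j] = f1(x1[i]) * f2(x2[j])
```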

Tensor products of operators

If $A$ is a linear operator that maps functions of one variable to functions of one variable (as the univariate Hilbert transform does), then $A\otimes A$, the tensor product of $A$ with itself, is defined by

$$ (A\otimes A)(f_1\otimes f_2)=Af_1\otimes Af_2. $$ This is not very helpful yet, as we have only defined $A\otimes A$ on tensor product functions, and it is not hard to come up with functions of two variables that cannot be written as tensor products. For example, $f(x_1,x_2):=x_1^2+x_2^2$ is not a tensor product. To fix this, we extend the definition of $A\otimes A$ by linearity. This means that we say we want $A\otimes A$ to be a linear operator, which implies

$$ (A\otimes A)(\sum_{j=1}^{N} f^{(j)}_{1}\otimes f^{(j)}_{2}):= \sum_{j=1}^{N} (A\otimes A)(f^{(j)}_{1}\otimes f^{(j)}_{2}). $$ Thus, we know how to apply $A\otimes A$ to any finite sum of tensor product functions. (Note that we now know how to apply $A\otimes A$ to $x_1^2+x_2^2$). Finally, we can use continuity arguments to extend this definition even to infinite sums of tensor product functions, and after we do so it is really hard to come up with any functions for which $A\otimes A$ is undefined (specifics depend on norms used etc. but are not essential).
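A finite-dimensional analogue may make this concrete: if $A$ is represented by a matrix acting on sample vectors, then $A\otimes A$ acts on a two-dimensional sample table by applying $A$ along each axis, i.e. $F\mapsto AFA^{T}$. This matrix setup is an illustrative assumption of mine, not the Hilbert transform itself; it checks the extension by linearity on $x_1^2+x_2^2=(x_1^2)\otimes 1+1\otimes(x_2^2)$:

```python
import numpy as np

# Discrete analogue (illustrative): a matrix A acting on sample vectors
# tensorizes to F -> A @ F @ A.T on 2-D sample tables.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = np.linspace(0.0, 1.0, 4)
ones = np.ones(4)

# f(x1, x2) = x1^2 + x2^2 = (x^2) (x) 1 + 1 (x) (x^2) as a sample table
F = np.add.outer(x**2, x**2)

# Apply A (x) A via linearity, one tensor product term at a time
lhs = A @ F @ A.T
rhs = np.outer(A @ x**2, A @ ones) + np.outer(A @ ones, A @ x**2)
```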

Important note Assume there is another linear and continuous operator $B$ that agrees with $A\otimes A$ on all tensor product functions. Then you may easily check that $B$ must actually be equal to $A\otimes A$!

To approximate tensor products, you may tensorize approximations

Assume we have operators $A_{n}$ that converge to $A$ in the operator norm, $\|A-A_{n}\|\to 0$.

Claim: $A_{n}\otimes A_{n}\to A\otimes A$.

Proof: $$ \|A_{n}\otimes A_{n}-A\otimes A\|=\|(A_{n}-A)\otimes A + A\otimes (A_{n}-A)+(A_{n}-A)\otimes (A_{n}-A)\|\\ \leq \|(A_{n}-A)\otimes A\|+ \|A\otimes (A_{n}-A)\|+\|(A_{n}-A)\otimes (A_{n}-A)\|\\ \leq \|A_{n}-A\|\,\|A\|+ \|A\|\,\|A_{n}-A\|+\|A_{n}-A\|^2\to 0. $$ Remark You might have noticed that for the last inequality we used a property of tensor product operators that we haven't proved, namely the cross-norm estimate $\|B\otimes C\|\leq\|B\|\,\|C\|$.
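The claim can be sanity-checked in finite dimensions, where the tensor product of matrices is the Kronecker product and the operator norm is the spectral norm. The random matrices below are an arbitrary test case of mine, with $\|A_n-A\| = O(1/n)$ by construction:

```python
import numpy as np

# Finite-dimensional check: ||A_n (x) A_n - A (x) A|| -> 0
# whenever ||A_n - A|| -> 0 (here A_n = A + E/n for a fixed E).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
E = rng.standard_normal((5, 5))          # fixed perturbation direction
errs = [np.linalg.norm(np.kron(A + E / n, A + E / n) - np.kron(A, A), 2)
        for n in (1, 2, 4, 8, 16)]
# errs shrinks roughly like 1/n, as the proof predicts
```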

Two-dimensional Hilbert transform is tensor product of univariate transforms

I claim that the definition of the two-dimensional Hilbert transform that you used, let's name the operator $H_2$, equals the tensor product of the univariate Hilbert transform, $H_1\otimes H_1$. By the important note at the end of the second section, it suffices to verify this for tensor product functions. And indeed, we have (using formal calculations, since we weren't being rigorous about continuity anyway) $$ (H_2(f_1\otimes f_2))(x_1,x_2)=\frac{1}{\pi^2}\,\mathrm{p.v.}\int_{\mathbb{R}^2} \frac{f_1(y_1)f_2(y_2)}{(x_1-y_1)(x_2-y_2)}\, dy=\left(\frac{1}{\pi}\,\mathrm{p.v.}\int_{\mathbb{R}}\frac{f_1(y_1)}{x_1-y_1}\, dy_1\right)\left(\frac{1}{\pi}\,\mathrm{p.v.}\int_{\mathbb{R}}\frac{f_2(y_2)}{x_2-y_2}\,dy_2\right)=(H_1f_1)(x_1)\,(H_1f_2)(x_2)=((H_1\otimes H_1)(f_1\otimes f_2))(x_1,x_2). $$

Conclusion

You have an operator $H_2=H_1\otimes H_1$ and you have approximations $H_{1,K}\to H_{1}$. Hence, you may approximate $H_2$ by $H_{1,K}\otimes H_{1,K}$ (here $K$ is where you cut off your infinite series at both ends, i.e. let's assume you use the indices $|k|\leq K$).

Question What is $H_{1,K}\otimes H_{1,K}$?

Answer

$$ ((H_{1,K}\otimes H_{1,K})f)(x_1,x_2)=\sum_{|k_1|\leq K, |k_2|\leq K} f(k_1h,k_2h)\frac{1-\cos(\pi(x_1-k_1h)/h)}{\pi(x_1-k_1h)/h}\frac{1-\cos(\pi(x_2-k_2h)/h)}{\pi(x_2-k_2h)/h}. $$
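This tensorized formula can be sketched in code. The function names, step size, and truncation below are illustrative assumptions; the sanity check uses $f = \mathrm{sinc}\otimes\mathrm{sinc}$, whose two-dimensional transform factors into $\frac{1-\cos x_1}{x_1}\cdot\frac{1-\cos x_2}{x_2}$:

```python
import numpy as np

def sinc_hilbert_weights(x, h, K):
    """Weights (1 - cos(pi(x - kh)/h)) / (pi(x - kh)/h) for |k| <= K."""
    k = np.arange(-K, K + 1)
    t = (x - k * h) * (np.pi / h)
    return (1.0 - np.cos(t)) / np.where(t == 0.0, 1.0, t)

def hilbert_sinc_2d(f, x1, x2, h=0.5, K=800):
    """Tensorized sinc approximation: a separable double sum over the
    sample table f(k1 h, k2 h)."""
    k = np.arange(-K, K + 1) * h
    F = f(k[:, None], k[None, :])
    return sinc_hilbert_weights(x1, h, K) @ F @ sinc_hilbert_weights(x2, h, K)

# Sanity check on f = sinc (x) sinc, where the transform factors.
f = lambda a, b: np.sinc(a / np.pi) * np.sinc(b / np.pi)
x1, x2 = 0.9, 1.7
exact = ((1.0 - np.cos(x1)) / x1) * ((1.0 - np.cos(x2)) / x2)
approx = hilbert_sinc_2d(f, x1, x2)
```

Note the double sum is evaluated as a vector-matrix-vector product, so the separable structure keeps the cost at $O(K^2)$ for the sample table plus $O(K^2)$ for the contraction, rather than a generic two-dimensional quadrature.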

Proof Again, you only need to verify that this formula is correct for tensor product functions. This is easy.

Remark Strictly speaking, the approximation formula you have given only converges if you let both $K$ and $h$ go to their limits, $\infty$ and $0$ respectively. Accordingly, the two-dimensional approximation formula could also simply be written as

$$ \sum_{(k_1,k_2)\in\mathbb{Z}^2} f(k_1h_1,k_2h_2)\frac{1-\cos(\pi(x_1-k_1h_1)/h_1)}{\pi(x_1-k_1h_1)/h_1}\,\frac{1-\cos(\pi(x_2-k_2h_2)/h_2)}{\pi(x_2-k_2h_2)/h_2} $$

with the understanding that the index set of the $(k_1,k_2)$'s has to be truncated appropriately and that both $h_1$ and $h_2$ have to be appropriately small. To make these choices rigorous, you'd have to make assumptions on the decay properties of $f$ itself and of its Fourier transform.