The one-dimensional Fourier transform has a famous relationship with the derivative of a function:
$$\mathcal{F}\left\{\frac{d f}{d t}\right\} = iw\mathcal{F}\left\{f\right\}$$
Among other things, this property is what makes the Fourier transform and its numerical algorithms so attractive as a tool when posing differential equations in various fields of science and engineering: it effectively turns a linear differential equation of some order into a polynomial in the frequency variable.
Can we derive a similar result for the multidimensional Fourier transform acting on a scalar function of several variables?
Own work: Suppose we have a scalar-valued function $f: \mathbb R^n \to \mathbb R$, $\mathbf x \mapsto f(\mathbf x)$. If it is sufficiently smooth it will have a gradient $(\nabla f)(\mathbf x) \in \mathbb R^n$, each component being a partial derivative with respect to one spatial variable.
By separation of variables, I suppose that the partial differentiation that happens in the gradient will inherit the property of the one-dimensional case. In other words:
Using the somewhat sloppy notation from calculus, $f'_{x_k} = \frac{\partial f}{\partial x_k}$:
$$\mathcal F\{f'_{x_k}\} = iw_k \,\mathcal F\{f\},$$ where $w_k$ is the frequency variable in the same dimension as $x_k$.
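As a quick numerical sanity check of this per-axis property (a minimal sketch of my own; the grid size, domain length and Gaussian test function are arbitrary choices):

```python
import numpy as np

# Numerical check of F{ df/dx_k } = i*w_k * F{ f } on a 2-D grid.
# The Gaussian decays to ~0 well inside the domain, so it is effectively
# periodic and band-limited, and the discrete FFT tracks the continuous
# identity closely.
n = 256
L = 20.0                                 # domain [-L/2, L/2) per dimension
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")  # x_1 varies along axis 0
f = np.exp(-(X**2 + Y**2))

# Analytic partial derivative along x_1
dfdx = -2 * X * f

# Angular frequencies in numpy's FFT ordering
w = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

lhs = np.fft.fft2(dfdx)
rhs = 1j * w[:, None] * np.fft.fft2(f)   # i*w_1, broadcast along axis 0
print(np.max(np.abs(lhs - rhs)))         # essentially zero
```

The agreement degrades for functions that are not smooth or not negligible at the domain boundary, which is exactly the "special requirements" question below.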
Is this reasoning sound? Are there any special requirements to be aware of regarding when this holds and when it does not?
In case it is of interest: I am looking at inverse gradient problems, where sparse gradient fields can be isolated, and it would be beneficial to have a systematic way of handling the residual.
Here are some experiments of mine. The original 2D function is a disc with value 1 inside and 0 outside.
- Calculated the gradient of the disc and nulled it for $x<x_0$.
- Calculated the FFT of the cropped gradient along the x dimension.
- Divided by $iw$.
- Set the 0-frequency component to 0, as it is undefined after the division.
- Computed the inverse FFT.
- Adjusted with the known DC component of the original image's FFT.
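The steps above can be sketched as follows (a minimal sketch; the grid size, disc radius, crop position $x_0$ and the per-row DC adjustment are my own placeholder choices, with `np.gradient` standing in for the gradient computation):

```python
import numpy as np

# Disc with value 1 inside and 0 outside (grid size and radius are
# arbitrary choices of mine)
n = 256
y, x = np.mgrid[0:n, 0:n]
disc = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 4) ** 2).astype(float)

# 1. Gradient of the disc, nulled for x < x0
gx = np.gradient(disc, axis=1)         # finite-difference d/dx
x0 = n // 2
gx[:, :x0] = 0.0

# 2. FFT of the cropped gradient along the x dimension
G = np.fft.fft(gx, axis=1)

# 3. Divide by i*w ...
w = 2 * np.pi * np.fft.fftfreq(n)
with np.errstate(divide="ignore", invalid="ignore"):
    F = G / (1j * w)

# 4. ... and zero the 0-frequency component, undefined after division
F[:, 0] = 0.0

# 5. Inverse FFT
rec = np.real(np.fft.ifft(F, axis=1))

# 6. Adjust with the known DC component of the original image
#    (here per row, as a stand-in for the known DC of the original FFT)
rec += disc.mean(axis=1, keepdims=True) - rec.mean(axis=1, keepdims=True)
```

Because the DC bin is discarded, the absolute level of each reconstructed row is arbitrary until step 6 restores it.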
We can see that we do get rid of the left edge of the disc, as we should.
Also, since this experiment only takes the x component of the gradient into account, we introduce new y-gradients that should not be there. The real challenge, I suppose, lies in solving the vector equations simultaneously, so that both the x and y partial derivatives are taken into account.
Remembering the circular convolution property of the FFT, we need to compensate at the edge for any non-zero integral of the partial derivative, or the algorithm will assume connectivity between the left and right edges, as can be seen quite clearly. If such compensation is made and we renormalize, we can get a more intuitive solution:
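One way to do that compensation (a sketch with my own naming and choices: I concentrate the compensating jump in the last column, so each row integrates to zero and the wrap-around discontinuity sits at the domain edge, and I renormalize by shifting each row's minimum to zero):

```python
import numpy as np

def integrate_x(gx):
    """Spectral antiderivative along x with edge compensation.

    Forces each row of gx to integrate to zero by absorbing the excess
    into the last column, so the periodic FFT inversion does not smear a
    ramp across the whole row.
    """
    n = gx.shape[1]
    g = gx.copy()
    g[:, -1] -= g.sum(axis=1)          # zero circular integral per row
    w = 2 * np.pi * np.fft.fftfreq(n)
    G = np.fft.fft(g, axis=1)
    with np.errstate(divide="ignore", invalid="ignore"):
        F = G / (1j * w)
    F[:, 0] = 0.0                      # DC is undefined after division
    rec = np.real(np.fft.ifft(F, axis=1))
    # Renormalize: shift each row so its minimum is 0 (my own choice)
    return rec - rec.min(axis=1, keepdims=True)
```

Applied to the cropped gradient above, this removes the spurious connection between the left and right edges; only the last column carries the (known) wrap-around artifact.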



