Justifying taking Fourier transform of derivatives


I am preparing for my final exams and came across this question in the archives. I link to it to highlight that it is a past exam problem, not homework.


[image: statement of the exam problem]


Attempt

Taking the Fourier transform of $$- \Delta u+ u=f$$ yields $$4 \pi^2 |z|^2\hat{u}(z)+\hat{u}(z)=\hat{f}(z),$$ thus giving us $(a)$. But how do I justify taking the Fourier transform of derivatives?
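For reference, here is the derivative rule I am applying, written out componentwise (with the convention $\hat{u}(z) = \int u(x)\, e^{-2\pi i x \cdot z}\, dx$, and assuming for the moment that all the transforms in question exist):

```latex
\widehat{\partial_j u}(z) = 2\pi i z_j\, \hat{u}(z)
\quad\Longrightarrow\quad
\widehat{-\Delta u}(z) = -\sum_{j=1}^{n} (2\pi i z_j)^2\, \hat{u}(z) = 4\pi^2 |z|^2\, \hat{u}(z).
```

It is exactly the validity of the first identity for my $u$ that I do not know how to justify.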

My reference book is Folland. Theorem $8.22(e)$ says that for $f \in L^1$,

[image: statement of Folland's Theorem 8.22(e)]

Clearly the hypotheses in $(e)$ are not met. How do I proceed from here?