Fix $t>0$ and consider the following Principal Value integral: $$ \mathscr{P}\int_{-\infty}^{\infty}d\omega \frac{e^{-i \omega t}}{\sqrt{\omega^2}} = - \log(t^2) $$
This is the function that Mathematica spits out (and this also matches a result that I'm finding in Lighthill's "An Introduction to Fourier Analysis and Generalized Functions", up to a constant).
I'm posting because when I check this calculation myself explicitly, I'm getting something strange. This is my method of computation: $$ \mathscr{P} \int_{-\infty}^{\infty} \frac{d\omega}{\sqrt{\omega^2}} e^{ - i \omega t } = \lim\limits_{\eta \to 0^{+}} \left\{ \int_{-\infty}^{-\eta} \frac{d\omega}{-\omega} e^{ - i \omega t } \ + \ \int_{\eta}^{\infty} \frac{d\omega}{\omega} e^{ - i \omega t } \right\} $$
After switching the variable $\omega \mapsto -\omega$ in the first integral, the above simplifies to: $$ \ldots = \lim\limits_{\eta \to 0^{+}} \left\{ 2 \int_{\eta}^{\infty} \frac{d\omega}{\omega} \cos\left( \omega t \right) \right\} = \lim\limits_{\eta \to 0^{+}} \left\{ - 2 \mathrm{Ci}\left( \eta t \right) \right\} $$
Here $\mathrm{Ci}$ is the cosine integral function. The problem is that I cannot take the limit $\eta \to 0^{+}$. Upon an expansion about $\eta =0$, I find that the above looks like: $$ \ldots = \lim\limits_{\eta \to 0^{+}} \left\{ - 2 \gamma - \log(\eta^2) - \log(t^2) + \mathscr{O}(\eta^2) \right\} $$
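As a quick numerical sanity check (my own sketch, using Python's mpmath; the values of $t$ and $\eta$ are arbitrary choices), the expansion above can be verified directly against $\mathrm{Ci}$:

```python
from mpmath import mp, ci, log, euler

mp.dps = 30  # working precision

t = mp.mpf(3)
eta = mp.mpf('1e-6')

# Left side: -2 Ci(eta t); right side: the claimed expansion without the O(eta^2) remainder
lhs = -2*ci(eta*t)
rhs = -2*euler - log(eta**2) - log(t**2)

# The difference is the O(eta^2) remainder, (eta t)^2 / 2 to leading order
print(lhs - rhs)
```

This confirms that the divergence as $\eta \to 0^+$ sits entirely in the $-\log(\eta^2)$ term.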
So it seems to me like the integral has the value $- 2\gamma - \lim\limits_{\eta\to 0^+} \log(\eta^2) - \log(t^2)$. I get the right functional form in $t$, but two extra terms appear, one of which is infinite!
The $\eta$ seems to be a 'regulator' for the otherwise divergent integral. How does one get rid of it? Am I using the wrong definition of the Cauchy principal value in my computation? Mathematica and Lighthill seem to be throwing the $-2 \gamma - \log(\eta^2)$ away somehow.
P.S. $\gamma$ is the Euler-Mascheroni constant.
The Cauchy principal value of this integral does not exist: the limit of the integral over $(-\infty, -\eta] \cup [\eta, \infty)$ is $\infty$, which is hardly surprising, since the integrand behaves like $|w|^{-1}$.
Here a different regularization is needed: the finite part of the integral. Just as the principal value integral can be differentiated to obtain the Hadamard finite part, the principal value itself can be constructed by differentiating the integral of $\phi(t) \ln |t + x|$. In the same way, differentiating the integral of $\phi(t) \ln |t + x| \operatorname{sgn} (t + x)$ gives a regularization of the integral of $\phi(t) |t + x|^{-1}$.
This is done very naturally in terms of distributions, by taking the distributional derivative of the regular functional $\ln |w| \operatorname{sgn} w$. The resulting distribution, denoted $|w|^{-1}$, can then be applied directly to $e^{-i t w}$, yielding $(|w|^{-1}, e^{-i t w}) = -2 \ln |t| - 2 \gamma.$
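Concretely, integrating the pairing by parts gives $(|w|^{-1}, e^{-i t w}) = -(\ln|w| \operatorname{sgn} w,\, -i t\, e^{-i t w}) = 2t \int_0^\infty \ln w \, \sin(t w)\, dw$, since the even part of $e^{-itw}$ cancels against the odd factor $\operatorname{sgn} w$. This can be checked numerically (a sketch of mine in Python's mpmath, with an arbitrary choice of $t$; `quadosc` handles the oscillatory tail):

```python
from mpmath import mp, quadosc, log, sin, pi, euler

mp.dps = 25
t = mp.mpf(2)  # any t > 0

# Pairing obtained by parts from the derivative of ln|w| sgn(w):
# (|w|^{-1}, e^{-i t w}) = 2t * int_0^inf ln(w) sin(t w) dw
pairing = 2*t*quadosc(lambda w: log(w)*sin(t*w), [0, mp.inf], period=2*pi/t)

expected = -2*log(t) - 2*euler
print(pairing, expected)
```

The two printed values agree, reproducing the $-2\ln|t| - 2\gamma$ above.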
Note that Lighthill defines $|x|^{-1}$ as "any generalized function $f(x)$ such that $x f(x) = \operatorname{sgn} x$", which I think completely defeats the purpose of using distributions: it says that the $\delta(x)$ term can be arbitrary, hence produces an arbitrary constant in the Fourier transform.
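To see that indeterminacy concretely (a small sketch of mine in Python's mpmath; the coefficient $c$ and the Gaussian width are illustrative choices): modelling an added $c\,\delta(w)$ term by a narrow Gaussian shifts the pairing with $e^{-i t w}$ by (essentially) the arbitrary constant $c$:

```python
from mpmath import mp, quad, exp, cos, sqrt, pi

mp.dps = 20
t = mp.mpf(2)
c = mp.mpf('0.7')      # arbitrary coefficient of the delta term
eps = mp.mpf('1e-3')   # width of the nascent (Gaussian) delta

# (c*delta_eps, e^{-i t w}): the Gaussian is even, so only the cosine part survives
gaussian = lambda w: exp(-w**2/(2*eps**2))/(eps*sqrt(2*pi))
shift = quad(lambda w: c*gaussian(w)*cos(t*w), [-mp.inf, 0, mp.inf])

# shift = c * exp(-t^2 eps^2 / 2), which tends to c as eps -> 0
print(shift)
```

So two admissible choices of $f$ in Lighthill's sense give Fourier transforms differing by an arbitrary additive constant.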
Lighthill shows that a linear change of variables introduces a $\delta(x)$ term. He concludes that "[t]he only satisfactory definition is one which admits the indeterminacy". That is certainly not true: one can choose a particular regularization ($|w|^{-1}$ above is the natural choice), and then, provided changes of variables and differentiation are carried out correctly, all the nice properties of the Fourier transform hold (scaling, differentiation, convolution, Plancherel's theorem for the inner product, not to mention uniqueness).