The four-dimensional Fourier transform, at least in a physics context, is usually (e.g. here) defined as follows. Given a function $G(\mathbf{x},t)$, its Fourier transform is defined as
$$ g(\mathbf{k},\omega) = \int e^{-i(\mathbf{k}\cdot\mathbf{x}-\omega t)}G(\mathbf{x},t) \mathrm{d}^3x \,\mathrm{d}t $$
and the inverse Fourier transform (omitting the usual normalization factors of $2\pi$) as $$ G(\mathbf{x},t) = \int e^{i(\mathbf{k}\cdot\mathbf{x}-\omega t)}g(\mathbf{k},\omega) \mathrm{d}^3k \,\mathrm{d}\omega $$
I was wondering: is there a definite mathematical reason for the sign difference between $\mathbf{k}\cdot\mathbf{x}$ and $\omega t$ in the exponent? Could we not define it with both terms having the same sign (I mean $\mathbf{k}\cdot\mathbf{x}+\omega t$)? It seems to me that the only reason for the sign difference is that one would like to expand in "plane waves" for physical reasons ($\mathbf{k}$ may then be taken to be the actual wave vector of the plane wave and $\omega$ the actual angular frequency). Another reason seems to be that the expression in the exponent $$ \mathbf{k}\cdot \mathbf{x} -\omega t = x_\mu k^\mu$$ is invariant under Lorentz transformations (with the mostly-plus metric signature; with mostly-minus it picks up an overall sign, which does not affect the invariance). Could someone confirm or refute these ideas, and tell me whether it would be admissible to flip the signs in the exponent in the definition of the Fourier transform?
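As a numerical sanity check of my own (a 1D sketch, not from any reference): whichever sign one puts in the forward kernel, the transform pair still reconstructs the original function, as long as the inverse uses the opposite sign. Here both conventions are tried on a Gaussian via direct Riemann sums:

```python
import numpy as np

# 1D grids for time t and angular frequency w (my own illustrative choice)
t = np.linspace(-10, 10, 1001)
w = np.linspace(-10, 10, 1001)
dt = t[1] - t[0]
dw = w[1] - w[0]

G = np.exp(-t**2)  # test function: a Gaussian, well localized on the grid

def round_trip(sign):
    """Forward kernel e^{sign*i*w*t}; inverse uses the opposite sign.
    The 1/(2*pi) normalization sits in the inverse, as is common in physics."""
    g = (np.exp(sign * 1j * np.outer(w, t)) * G).sum(axis=1) * dt
    G_back = (np.exp(-sign * 1j * np.outer(t, w)) * g).sum(axis=1) * dw / (2 * np.pi)
    return G_back.real

# Both sign conventions recover the original function equally well
for sign in (+1, -1):
    assert np.allclose(round_trip(sign), G, atol=1e-5)
```

So numerically the overall sign is pure convention; the question is whether the *relative* sign between $\mathbf{k}\cdot\mathbf{x}$ and $\omega t$ carries any deeper mathematical significance.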