I know that the following theorem holds
Fourier's Integral Theorem. Suppose $f:\mathbb{R}\to\mathbb{R}$ is piecewise continuous on $\mathbb{R}$ and at every point $x$ the right and left derivatives exist. Also, assume that $f$ satisfies $\int_{-\infty}^{+\infty}|f(t)|dt<+\infty$. Then the following holds \begin{align*} \frac{1}{2}\big(f(t^+)+f(t^-)\big)&=\int_{0}^{+\infty}(A(\omega)\cos(\omega t)+B(\omega)\sin(\omega t))d\omega \\ A(\omega)&=\frac{1}{\pi}\int_{-\infty}^{+\infty}f(t)\cos(\omega t)dt \\ B(\omega)&=\frac{1}{\pi}\int_{-\infty}^{+\infty}f(t)\sin(\omega t)dt \end{align*}
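For a function that does satisfy the hypotheses, the reconstruction can be checked numerically. Here is a sketch in Python with my own choice of example, $f(t)=e^{-|t|}$, whose coefficients work out to $A(\omega)=\frac{2}{\pi(1+\omega^2)}$ and $B(\omega)=0$ (since $f$ is even):

```python
import numpy as np
from scipy.integrate import quad

# Sanity check of the Fourier integral theorem for the integrable
# function f(t) = exp(-|t|): here B(w) = 0 (f is even) and
# A(w) = 2 / (pi * (1 + w^2)), and the integral should rebuild f.
A = lambda w: 2.0 / (np.pi * (1.0 + w ** 2))

# verify the closed form of A against its defining integral at one frequency
w0 = 1.5
A_num, _ = quad(lambda t: np.exp(-abs(t)) * np.cos(w0 * t) / np.pi,
                -np.inf, np.inf)

# reconstruct f at t0 = 1 via int_0^inf A(w) cos(w t0) dw
# (weight='cos' invokes SciPy's Fourier quadrature for oscillatory integrals)
t0 = 1.0
recon, _ = quad(A, 0, np.inf, weight='cos', wvar=t0)
print(A_num, A(w0))        # the two coefficient values should match
print(recon, np.exp(-t0))  # reconstruction vs. f(1) = e^{-1}
```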
Assume that $H(t)$ is the usual Heaviside or unit-step function. I saw in some text that the Fourier integral representation of
$$f(t):=H(t)-\frac{1}{2}= \begin{cases} +\frac{1}{2},\quad t\gt 0 \\ -\frac{1}{2},\quad t\lt 0 \end{cases}$$
can be given by
$$f(t)=\frac{1}{\pi}\int_{0}^{+\infty} \frac{\sin \omega t}{\omega} d\omega \tag{1}$$
however, the function $f(t)$ does not satisfy the condition $\int_{-\infty}^{+\infty}|f(t)|dt<+\infty$ of the Fourier integral theorem, and when one tries to compute its Fourier integral representation one finds that $A(\omega)=0$ (the integrand is odd) but $B(\omega)=\frac{1}{\pi}\lim_{R\to+\infty}\frac{1-\cos(\omega R)}{\omega}$, which does not exist! So, here are my questions
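To see the failure concretely, one can evaluate the truncated integral numerically; a small Python sketch (the cutoff values are my own choice, spaced by $\pi$ to expose the oscillation):

```python
import numpy as np
from scipy.integrate import quad

# Truncated coefficient for f = H - 1/2:
#   B_R(w) = (1/pi) * int_{-R}^{R} f(t) sin(w t) dt.
# The values keep oscillating as R grows, so B(w) = lim_{R->inf} B_R(w)
# does not exist -- consistent with f failing the integrability hypothesis.
f = lambda t: 0.5 if t > 0 else -0.5
w = 1.0
vals = []
for R in (20.0, 20.0 + np.pi, 20.0 + 2 * np.pi):
    B_R, _ = quad(lambda t: f(t) * np.sin(w * t), -R, R,
                  points=[0.0], limit=200)
    vals.append(B_R / np.pi)
    print(f"R = {R:8.4f}   B_R = {B_R / np.pi:+.6f}")
```

The value swings by roughly $\frac{2}{\pi\omega}$ between cutoffs half a period apart and returns to its starting value after a full period.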
How can I derive $(1)$? I heard that this can be derived using generalized functions, but I have no idea what this really means. Can someone explain the main idea of this for me? Please keep it simple for someone with little knowledge of elementary analysis! :)
$\newcommand{\inner}[1]{\langle #1 \rangle}$
You have to view (1) in terms of Schwartz's distribution theory; see http://web.abo.fi/fak/mnf/mate/kurser/fourieranalys/chap3.pdf for a brief overview. The key points needed here are:
1) The Dirac delta distribution $\delta$ (usually called a function, but it is not one) is a distribution such that for all test functions $\phi\in \mathcal{S}$ (the space of Schwartz functions) we have $$\inner{\delta, \phi}=\int_\mathbb{R} \delta(t)\phi(t)dt=\phi(0),$$ where the middle integral is purely symbolic notation.
2) We say a tempered distribution $F$ is induced by a function $f$ (which has to satisfy some suitability conditions that we won't get into here) when we have for all test functions $\phi$: $$F(\phi)=\inner{f, \phi} := \int_\mathbb{R} f(t) \phi(t)dt.$$
3) We define the derivative of $F$ via the following: $$F'(\phi):=\inner{f',\phi}=-\inner{f, \phi'}.$$ The second equality can be derived using integration by parts; the boundary terms vanish because $\phi$ decays rapidly.
4) The Fourier transform of a tempered distribution $F$ is defined by $$\widehat{F}(\phi)=\inner{\widehat{f}, \phi}:=\inner{f, \widehat{\phi}}.$$
5) The Fourier transform derivative formulas hold; in the convention used below, $\widehat{F'}(\gamma)=2\pi i\gamma\,\widehat{F}(\gamma)$.
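To get a feel for point 1), here is a small numerical sketch (the Gaussian approximants and the test function are my own choices, not from the linked notes): narrow Gaussians $g_\epsilon$ act like $\delta$, in the sense that $\inner{g_\epsilon,\phi}\to\phi(0)$ as $\epsilon\to 0$.

```python
import numpy as np
from scipy.integrate import quad

# Narrow Gaussians g_eps approximate the delta distribution:
#   <g_eps, phi> = int g_eps(t) phi(t) dt  ->  phi(0)  as eps -> 0.
phi = lambda t: np.exp(-t ** 2) * np.cos(t)   # a Schwartz test function; phi(0) = 1

vals = []
for eps in (1.0, 0.3, 0.1):
    g = lambda t: np.exp(-t ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
    # points=[0.0] tells quad to refine around the Gaussian's spike
    val, _ = quad(lambda t: g(t) * phi(t), -12, 12, points=[0.0], limit=200)
    vals.append(val)
    print(eps, val)   # climbs toward phi(0) = 1 as eps shrinks
```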
Viewing the Heaviside function as a distribution, we have that $$\inner{H', \phi} = -\inner{H,\phi'} = -\int_\mathbb{R} H(t)\phi'(t)dt = -\int_0^\infty \phi'(t)dt = \phi(0) = \inner{\delta, \phi}.$$ Hence, the delta distribution is the derivative of the Heaviside function. More importantly in this context, it is also the derivative of $H-1/2$, since the constant $1/2$ differentiates to zero.
If we compute the Fourier transform of $\delta$ (using my favorite convention, and I would argue the correct convention) we have $$\widehat{\delta}(\gamma)=\int_\mathbb{R} \delta(t)e^{-2\pi i \gamma t}dt=\inner{\delta, e^{-2\pi i\gamma t}}=e^{-2\pi i \gamma \cdot 0}=1.$$ Using the Fourier derivative formulas for this convention, we then have $$(H-1/2)\widehat{\;}(\gamma) = ((H-1/2)')\widehat{\;}(\gamma)\frac{1}{2\pi i \gamma} = \widehat{H'}(\gamma)\frac{1}{2\pi i \gamma}=\frac{\widehat{\delta}(\gamma)}{2\pi i \gamma}=\frac{1}{2\pi i \gamma}.$$
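If you want a classical cross-check of this transform without distribution machinery, one standard trick is regularization (my own sketch, not part of the distributional argument): damp $f=H-1/2$ by $e^{-\epsilon t}$ on $t>0$ so the ordinary Fourier integral converges, then let $\epsilon\to 0$.

```python
import numpy as np
from scipy.integrate import quad

# f = H - 1/2 is odd and real, so its transform is purely imaginary:
#   fhat(gamma) = -i * int_0^inf exp(-eps*t) sin(2 pi gamma t) dt  (regularized)
# which should tend to 1/(2*pi*i*gamma); that number's imaginary part is
# -1/(2*pi*gamma).  gamma and the eps values are arbitrary choices.
gamma = 0.7
target = -1.0 / (2 * np.pi * gamma)
for eps in (1.0, 0.1, 0.001):
    # weight='sin' uses SciPy's Fourier quadrature for the oscillatory factor
    imag, _ = quad(lambda t: -np.exp(-eps * t), 0, np.inf,
                   weight='sin', wvar=2 * np.pi * gamma)
    print(eps, imag, target)   # imag approaches target as eps -> 0
```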
Applying the Fourier inversion formula leaves us with $$H(t)-1/2 = \int_\mathbb{R} \frac{e^{2\pi i \gamma t}}{2\pi i \gamma} d\gamma=\int_0^\infty \frac{e^{2\pi i \gamma t}-e^{-2\pi i \gamma t}}{2\pi i \gamma} d\gamma = \int_0^\infty \frac{\sin(2\pi \gamma t)}{\pi \gamma} d\gamma.$$ Substituting $\omega=2\pi \gamma$ yields your formula (1). To actually compute the integral in (1) one can use contour integration, or the classical Dirichlet integral $\int_0^\infty\frac{\sin u}{u}\,du=\frac{\pi}{2}$.
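One can also confirm (1) numerically: substituting $u=\omega t$ (for $t>0$) turns the truncated integral into the sine integral, $\frac{1}{\pi}\int_0^R\frac{\sin\omega t}{\omega}d\omega=\frac{1}{\pi}\mathrm{Si}(Rt)$, which SciPy provides. A quick sketch:

```python
import numpy as np
from scipy.special import sici

# For t > 0,  (1/pi) * int_0^R sin(w t)/w dw = Si(R*t)/pi,  and Si(x) -> pi/2
# as x -> +inf, so the limit is +1/2.  The integral is odd in t, giving -1/2
# for t < 0, i.e. exactly the function H(t) - 1/2.
R = 1e6
for t in (2.0, 0.5):
    si = sici(R * t)[0]        # sici returns (Si(x), Ci(x))
    print(t, si / np.pi)       # ~ +0.5; by oddness the value at -t is ~ -0.5
```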