I am trying to learn about piecewise (case-defined) functions, and how to represent certain expressions.
Say we are given a function and a domain on which it is valid. For example:
\begin{equation} f(x)=1/x, (x>0) \end{equation}
Furthermore, suppose we want to extend it to the entire real line: $$f(x)= \begin{cases} -\dfrac{1}{x},& x<0;\\ 0,& x=0;\\ \dfrac{1}{x},& x>0;\\ \end{cases} $$ where we set the function equal to zero at $x=0$.
Now, it should be possible to sum multiple shifted versions of this. For example, suppose we had: $$f_1(x)=f(x-h_1)= \begin{cases} -\dfrac{1}{x-h_1},& x<h_1;\\ 0,& x=h_1;\\ \dfrac{1}{x-h_1},& x>h_1;\\ \end{cases} $$ $$f_2(x)=f(x-h_2)= \begin{cases} -\dfrac{1}{x-h_2},& x<h_2;\\ 0,& x=h_2;\\ \dfrac{1}{x-h_2},& x>h_2;\\ \end{cases} $$ where $f_1$ is shifted by $h_1$ and $f_2$ is shifted by $h_2$. An expression such as $f(x-h_1)+f(x-h_2)$ should then be valid on the entire real line.
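To make the finite case concrete, here is a small sketch of my own (not part of the original question) implementing the case-defined $f$ and a finite sum of shifted copies:

```python
def f(x):
    """The case-defined extension of 1/x to the whole real line:
    -1/x for x < 0, 0 at x = 0, and 1/x for x > 0."""
    if x < 0:
        return -1.0 / x
    elif x == 0:
        return 0.0
    else:
        return 1.0 / x

def shifted_sum(x, shifts):
    """f(x - h_1) + f(x - h_2) + ... for a finite list of shifts h_i."""
    return sum(f(x - h) for h in shifts)

# f(x - 1) + f(x - 2) at x = 0.5:
# f(-0.5) = -1/(-0.5) = 2 and f(-1.5) = -1/(-1.5) = 2/3, so the sum is 2 + 2/3.
print(shifted_sum(0.5, [1.0, 2.0]))
```

Every term is finite everywhere, because the singular point of each copy was patched with the value $0$, so any finite sum of shifts is defined on all of $\Bbb{R}$.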
I am having difficulty understanding how we can express a continuous version of this. For example, if we wanted to integrate (edit: by this I mean continually sum up the shifted versions of these functions over an interval, not integrate the function per se) over all such $f(x-h)$'s, where $h$ runs over a continuous interval, how would we express that integral?
EDIT: (This edited/updated question is based on the discussion in the comments.) If we define $$g(x)=\int_a^bf(x-h)\,dh$$ we may run into a problem, because $f(x)$ behaves like $1/x$ and is not Riemann integrable near its singularity. Is it then possible to make sense of the concept as a distribution/generalized function? So, for a test function $\phi(x)$, $$g(x)=\langle \phi(x),\int f(x-h)\,dh\rangle $$
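To see numerically why $g$ is problematic, here is a rough sketch of my own (not from the post). With the extension above, $f(u)=1/|u|$ for $u\neq 0$, so truncating the integral an $\epsilon$-distance away from $h=x$ gives a quantity that grows like $2\log(1/\epsilon)$, i.e. the naive integral diverges whenever $x$ lies inside $(a,b)$:

```python
import math

def f(u):
    """The case-defined extension above; note f(u) = 1/|u| for u != 0."""
    return 0.0 if u == 0 else 1.0 / abs(u)

def truncated_g(x, a, b, eps, n=100000):
    """Midpoint-rule approximation of the integral of f(x-h) over [a, b],
    skipping the cells where |x-h| < eps."""
    dh = (b - a) / n
    total = 0.0
    for i in range(n):
        h = a + (i + 0.5) * dh
        if abs(x - h) > eps:
            total += f(x - h) * dh
    return total

# Shrinking eps makes the truncated integral blow up like 2*log(1/eps):
for eps in (1e-2, 1e-3, 1e-4):
    print(eps, truncated_g(0.0, -1.0, 1.0, eps), 2 * math.log(1.0 / eps))
```

Since $f\ge 0$, there is no cancellation between the two sides of the singularity; this is why the answer below works instead with the odd kernel $1/x$, where symmetric truncation does produce cancellation.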
Again, the symbol $\int f(x-h)\,dh$ makes no sense without being more careful. Perhaps what you're after is the (Cauchy) principal-value distribution: for any Schwartz function $\phi\in \mathcal{S}(\Bbb{R})$, we define \begin{align} \left\langle \text{p.v.}\left(\frac{1}{(\cdot)}\right), \phi\right\rangle:=\lim\limits_{\epsilon\to 0^+}\int_{|x|>\epsilon}\frac{\phi(x)}{x}\,dx. \end{align} It is of course tradition to denote the distribution as $\text{p.v.}\frac{1}{x}$... the letter $x$ there is just an arbitrary place-holder, which is why for the sake of definition, I used the $(\cdot)$ notation. Now, one can consider the convolution of this principal value distribution with any Schwartz function $\phi$ to get the (smooth) function whose value at a point $y\in\Bbb{R}$ is \begin{align} \left[\left(\text{p.v.}\frac{1}{x} \right)*\phi \right](y)&:= \left\langle \text{p.v.}\left(\frac{1}{x}\right), \phi(y-\cdot)\right\rangle\\ &:=\lim_{\epsilon\to 0^+}\int_{|t|>\epsilon}\frac{\phi(y-t)}{t}\,dt\\ &=\lim_{\epsilon\to 0^+}\int_{|y-s|>\epsilon}\frac{\phi(s)}{y-s}\,ds. \end{align} (Up to some constant factors of $i,\pi$, etc., this is the Riesz transform of $\phi$.) Note that no matter how much extra machinery we try to invoke, we still cannot escape the singularity of $\frac{1}{x}$ at the origin. That is why in everything above, to ensure everything is well-defined, we have to integrate an $\epsilon$-distance away from the singularity and then take the limit $\epsilon\to 0^+$.
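As a quick numerical sanity check of this definition (my own sketch, not part of the answer), one can approximate the $\epsilon$-truncated pairing with a midpoint rule. Taking $\phi(x)=x e^{-x^2}$ (a Schwartz function) gives $\phi(x)/x=e^{-x^2}$, so the pairing should converge to $\sqrt{\pi}$, while an even $\phi$ pairs to $0$ by odd symmetry:

```python
import math

def pv_pairing(phi, eps, R=10.0, n=200000):
    """Midpoint-rule approximation of the integral of phi(x)/x
    over eps < |x| < R (R large enough that the Gaussian tail is negligible)."""
    dx = 2.0 * R / n
    total = 0.0
    for i in range(n):
        x = -R + (i + 0.5) * dx
        if abs(x) > eps:
            total += phi(x) / x * dx
    return total

# phi(x) = x*exp(-x^2): the pairing tends to the integral of exp(-x^2), i.e. sqrt(pi)
print(pv_pairing(lambda x: x * math.exp(-x * x), 1e-3))

# phi(x) = exp(-x^2) (even): the symmetric truncation cancels to 0
print(pv_pairing(lambda x: math.exp(-x * x), 1e-3))
```

The second value illustrates the cancellation that makes the principal value finite even though $\int |\phi(x)/x|\,dx=\infty$ for a generic Schwartz $\phi$ with $\phi(0)\neq 0$.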
More generally, if $\Omega\in L^1(S^{n-1})$ and $\int_{S^{n-1}}\Omega\,d\sigma=0$ (integral with respect to surface measure on the unit sphere in $n$-dimensions), then one can define the principal value distribution $\text{p.v.}\frac{\Omega(\hat{x})}{|x|^n}$, where $\hat{x}=\frac{x}{|x|}$ is the normalized vector, by defining for each Schwartz function $\phi\in \mathcal{S}(\Bbb{R}^n)$, \begin{align} \left\langle\text{p.v.}\frac{\Omega(\hat{x})}{|x|^n}, \phi\right\rangle&:= \lim_{\epsilon\to 0^+}\int_{|x|>\epsilon}\frac{\Omega(\hat{x})}{|x|^n}\phi(x)\,dx. \end{align} One can verify this is actually continuous with respect to the topology on the space of Schwartz functions, so this defines a tempered distribution. Note that this generalizes what we talked about before, because when $n=1$, $S^{n-1}=S^0=\{-1,1\}$ consists of two points, and integrating over these points just means adding up the two function values; so in the above example, we had $\Omega(u)=\text{sgn}(u)$, i.e. $\Omega(-1)=-1, \Omega(1)=1$, so it's just the identity function on $\{-1,1\}$, and thus for $x\neq 0$, we have $\frac{\Omega(\hat{x})}{|x|}=\frac{1}{x}$.
Now, we can once again consider the convolution of this distribution with a Schwartz function $\phi$ to get a smooth function \begin{align} \left[\left(\text{p.v.}\frac{\Omega(\hat{x})}{|x|^n} \right)*\phi \right](y)&:= \left\langle \text{p.v.}\left(\frac{\Omega(\hat{x})}{|x|^n}\right), \phi(y-\cdot)\right\rangle\\ &:=\lim_{\epsilon\to 0^+}\int_{|t|>\epsilon}\frac{\Omega(\hat{t})}{|t|^n}\phi(y-t)\,dt\\ &=\lim_{\epsilon\to 0^+}\int_{|y-s|>\epsilon}\frac{\Omega(\widehat{y-s})}{|y-s|^n}\phi(s)\,ds. \end{align} Of course, here the $dt,ds$ mean integrals with respect to the $n$-dimensional Lebesgue measure.
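Specializing back to $n=1$, the truncated convolution can be sketched numerically as well (again my own illustration, not part of the answer). At $y=0$ with $\phi(x)=xe^{-x^2}$, the integrand is $\phi(-t)/t=-e^{-t^2}$, so the convolution value should be $-\sqrt{\pi}$:

```python
import math

def pv_convolve(phi, y, eps=1e-4, R=10.0, n=200000):
    """Midpoint-rule approximation of the integral of phi(y-t)/t over
    eps < |t| < R, i.e. the eps-truncated convolution [p.v.(1/x) * phi](y)."""
    dt = 2.0 * R / n
    total = 0.0
    for i in range(n):
        t = -R + (i + 0.5) * dt
        if abs(t) > eps:
            total += phi(y - t) / t * dt
    return total

phi = lambda x: x * math.exp(-x * x)
# At y = 0 the integrand is -exp(-t^2), so the value is close to -sqrt(pi)
print(pv_convolve(phi, 0.0))
```

Evaluating `pv_convolve` at a grid of points $y$ would give a numerical approximation of the smooth function $(\text{p.v.}\frac{1}{x})*\phi$ described above.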