See the figure for reference:

$f_i$ and $f_{i+1}$ are unobserved times of events, while $y_i$ and $y_{i+1}$ are observed times of subsequent events. We'd like to recover the distribution of $f_{i+1} - f_i \sim G(\tau)$, but we've only observed $y_{i+1} - y_i \sim S(\tau)$, and $y_i - f_i \sim H(\tau)$. Notably, we have parametric definitions of $S(\tau)$ and $H(\tau)$, not actual observations of $y_i$, $f_i$, etc.
Is this the correct way to recover $G(\tau)$?
$$\begin{aligned} y_{i+1} - y_i &\sim \big(f_{i+1} + H(\tau)\big) - \big(f_i + H(\tau)\big)\\ &\sim \big(f_i + G(\tau) + H(\tau)\big) - \big(f_i + H(\tau)\big)\\ &\sim G(\tau) + H(\tau) - H(\tau) \end{aligned}$$

Since the PDF of a sum of independent random variables is the convolution of their PDFs, then:

$$\begin{aligned} S(\tau) &= G(\tau) * H(\tau) * H(-\tau)\\ G(\tau) &= \big( S(\tau) *^{-1} H(\tau) \big) *^{-1} H(-\tau) \end{aligned}$$

where $*$ is convolution and $*^{-1}$ is deconvolution. I don't need an exact solution, so kindly disregard any required deconvolution sorcery.
In particular, I negated $\tau$ in the second $H$ by intuition, but is that correct?
Thanks.
$\def\d{\mathrm{d}}\def\R{\mathbb{R}}$For any random variable $X$ with a p.d.f. $f_X$, since$$ P(-X \leqslant a) = P(X \geqslant -a) = \int_{-a}^{+∞} f_X(x)\,\d x = \int_{-∞}^a f_X(-y) \,\d y,\quad\forall a \in \R $$ then $f_{-X}(a) = f_X(-a)$ for $a \in \R$.
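This reflection identity is easy to check numerically. In the sketch below, $X \sim \text{Exp}(1)$ is an arbitrary choice of mine (so $f_X(x) = e^{-x}$ for $x \geqslant 0$, and the identity predicts $f_{-X}(a) = e^{a}$ for $a < 0$):

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ Exponential(1), an arbitrary example: f_X(x) = exp(-x) for x >= 0.
x = rng.exponential(1.0, size=1_000_000)

# Empirical density of -X at a few points a < 0, via narrow histogram bins.
a = np.linspace(-3.0, -0.5, 6)
width = 0.1
emp = np.array([np.mean(np.abs(-x - ai) < width / 2) / width for ai in a])

# The identity predicts f_{-X}(a) = f_X(-a) = exp(a) for a < 0.
pred = np.exp(a)
assert np.allclose(emp, pred, atol=0.02)  # agree up to sampling error
```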
Now, note that$$ y_{i + 1} - y_i = (f_{i + 1} - f_i) + (y_{i + 1} - f_{i + 1}) - (y_i - f_i). $$ If $f_{i + 1} - f_i, y_{i + 1} - f_{i + 1}, y_i - f_i$ are independent, then\begin{align*} S(a) &= f_{y_{i + 1} - y_i}(a) = f_{f_{i + 1} - f_i}(a) * f_{y_{i + 1} - f_{i + 1}}(a) * f_{-(y_i - f_i)}(a)\\ &= f_{f_{i + 1} - f_i}(a) * f_{y_{i + 1} - f_{i + 1}}(a) * f_{y_i - f_i}(-a)\\ &= G(a) * H(a) * H(-a), \end{align*} and$$ G(a) = (S(a) *^{-1} H(-a)) *^{-1} H(a) = (S(a) *^{-1} H(a)) *^{-1} H(-a). $$
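As a numerical sanity check of $S(a) = G(a) * H(a) * H(-a)$, one can compare a Monte Carlo sample of $y_{i+1} - y_i$ against the convolution computed on a grid. The concrete choices $G = \text{Gamma}(3, 1)$ and $H = \text{Exp}(1)$ below are arbitrary stand-ins of mine, not distributions from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Arbitrary stand-ins: G = Gamma(shape=3, scale=1), H = Exp(1).
g  = rng.gamma(3.0, 1.0, n)       # f_{i+1} - f_i
h1 = rng.exponential(1.0, n)      # y_{i+1} - f_{i+1}
h2 = rng.exponential(1.0, n)      # y_i - f_i, independent of h1
d  = g + h1 - h2                  # y_{i+1} - y_i

# S = G * H * H(-.) by direct numerical convolution on a symmetric grid.
# Odd N keeps np.convolve(..., 'same') aligned with the grid itself.
L, N = 30.0, 3001
tau = np.linspace(-L, L, N)
dt = tau[1] - tau[0]
G = np.where(tau >= 0, tau**2 * np.exp(-np.abs(tau)) / 2, 0.0)  # Gamma(3,1) pdf
H = np.where(tau >= 0, np.exp(-np.abs(tau)), 0.0)               # Exp(1) pdf
S = np.convolve(np.convolve(G, H, 'same'), H[::-1], 'same') * dt**2

# The empirical density of y_{i+1} - y_i should match S pointwise.
for a in (-1.0, 0.0, 2.0, 5.0):
    emp = np.mean(np.abs(d - a) < 0.05) / 0.1
    assert abs(emp - S[np.argmin(np.abs(tau - a))]) < 0.02
```

Here `H[::-1]` is $H(-\tau)$: reversing the samples on a symmetric grid reflects the argument, matching the $f_{-X}(a) = f_X(-a)$ step above.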