I am looking for a generalization of the result which states that the density of the sum of two random variables is the convolution of their densities.
Specifically, if I have $Z=f(X,Y)$, where $p_{X,Y}(x,y)=p_X(x)p_Y(y)$, then how does one talk about $p_Z(z)$?
For example, consider the model of a pendulum where one is trying to quantify the uncertainty in the period $T=2\pi\sqrt{L/G}$. One could model $L\sim \text{lognormal}(l|l_0,\sigma_l)$ and $G\sim \text{lognormal}(g|g_0,\sigma_g)$, where $l_0,\sigma_l$ and $g_0,\sigma_g$ are the means and standard deviations of $\log L$ and $\log G$. What can be said about the period?
Since this mapping is not one-to-one (it sends $\mathbb{R}^2$ to $\mathbb{R}$), the usual change-of-variables (Jacobian) formula for transforming densities does not apply directly.
In some generality: try to extend your mapping $T$ so that it becomes invertible. In your example, one choice would be
$$ T(L,G) = (2\pi \sqrt{L/G}, G). $$
Now the mapping is invertible with
$$ T^{-1}(T_1, T_2) = \left( T_2\left(\frac{T_1}{2\pi}\right)^2,T_2 \right). $$
Next, find the joint density of $(T_1,T_2)$ via the Jacobian of $T^{-1}$ and marginalize out $T_2$. In some cases it is possible to choose the extension of $T$ such that $T_1$ and $T_2$ are independent, which makes the marginalization much easier.
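The extend-and-marginalize recipe can be checked numerically. A sketch, with illustrative parameter values (`l0`, `g0`, `sl`, `sg` are my assumptions, not part of the question): the joint density of $(T_1,T_2)$ is $p_L\!\big(t_2(t_1/2\pi)^2\big)\,p_G(t_2)\,|\det J|$ with $|\det J| = t_1 t_2/(2\pi^2)$, and integrating out $t_2$ recovers the lognormal density of the period.

```python
# Numerical sketch of the extend-and-marginalize recipe.
# Parameters l0, g0, sl, sg are illustrative assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.stats import lognorm

l0, g0 = 0.0, np.log(9.81)   # means of log L and log G
sl, sg = 0.10, 0.05          # std devs of log L and log G

pL = lognorm(s=sl, scale=np.exp(l0)).pdf   # density of L
pG = lognorm(s=sg, scale=np.exp(g0)).pdf   # density of G

def p_T1(t1):
    """Marginal density of T1 = 2*pi*sqrt(L/G): integrate the joint
    density of (T1, T2) over t2; |det J| of T^{-1} is t1*t2/(2*pi**2)."""
    integrand = lambda t2: (pL(t2 * (t1 / (2 * np.pi))**2) * pG(t2)
                            * t1 * t2 / (2 * np.pi**2))
    val, _ = quad(integrand, 0.0, 50.0, limit=200)
    return val

# Closed form from the log-transform approach:
# log T ~ N(log 2*pi + (l0 - g0)/2, (sl**2 + sg**2)/4)
mu_T = np.log(2 * np.pi) + 0.5 * (l0 - g0)
s_T = 0.5 * np.sqrt(sl**2 + sg**2)
pT = lognorm(s=s_T, scale=np.exp(mu_T)).pdf

t = 2.0
print(p_T1(t), pT(t))  # the two values should agree closely
```

The integration limits are finite because the integrand is sharply peaked near $t_2 \approx g_0$; an infinite upper limit works too but is slower.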
In somewhat less generality, for this particular example, just find the density of $\log T$:
$$ \log T = \log 2\pi+\frac{1}{2}\left( \log L - \log G\right) \sim \mathcal{N}\left(\log 2\pi+\frac{1}{2}(l_0-g_0), \frac{\sigma_l^2+\sigma_g^2}{4}\right), $$
so $T$ itself is lognormally distributed.
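A quick Monte Carlo check of this result (the parameter values are again illustrative assumptions):

```python
# Monte Carlo check that log T is normal with the stated mean and variance.
# l0, g0, sl, sg are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
l0, g0, sl, sg = 0.0, np.log(9.81), 0.10, 0.05

L = rng.lognormal(l0, sl, size=1_000_000)
G = rng.lognormal(g0, sg, size=1_000_000)
logT = np.log(2 * np.pi * np.sqrt(L / G))

print(logT.mean(), np.log(2 * np.pi) + 0.5 * (l0 - g0))  # sample vs. theoretical mean
print(logT.var(), (sl**2 + sg**2) / 4)                   # sample vs. theoretical variance
```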