I'm working on an application of Bayesian inference. I'm familiar with Bayes' theorem expressed in terms of the probabilities of events, i.e. $$P(A|B) = \frac{P(B|A)P(A)}{P(B)} \propto P(B|A)P(A)$$ for some events $A$ and $B$. I also understand that in Bayesian inference, this is typically expressed in terms of probability density functions, i.e. for some unknown parameter $\theta$ and some data vector $x$, $$f(\theta|x) = \frac{f(x|\theta)f(\theta)}{f(x)} \propto f(x|\theta)f(\theta).$$
I can just use the formula above and do what I need to do with it, but I'd like to understand where it comes from, particularly since that will help me avoid mistakes in applying it. I assume that the latter equation is derived from the former (since the former is simple and can be readily derived from first principles), but I cannot for the life of me figure out how to go from one to the other. Google has been entirely unhelpful. Would someone be willing to explain where the latter equation comes from, or point me toward a resource?
Thank you to @lulu who commented on my question -- apparently I just needed to google the definition of conditional probability density functions. Per Wikipedia, the conditional density is defined as $$f_{Y|X}(y|x) = \frac{f_{X,Y}(x, y)}{f_X(x)}$$ and since $f_{X,Y}(x, y) = f_{Y,X}(y, x)$, it follows that $$f_{Y|X}(y|x)f_X(x) = f_{X,Y}(x, y) = f_{X|Y}(x|y)f_Y(y).$$ Dividing through by $f_X(x)$ gives exactly the density form of Bayes' theorem: $$f_{Y|X}(y|x) = \frac{f_{X|Y}(x|y)f_Y(y)}{f_X(x)}.$$
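For anyone who wants to see the identity in action, here is a small numerical sketch (my own made-up example, not from the original question): a Beta prior on a coin's bias $\theta$ with binomial data, where the grid-normalized product $f(x|\theta)f(\theta)$ is compared against the known conjugate Beta posterior. The hyperparameters `a, b` and the data `n, k` are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import beta, binom

# Made-up example: Beta(a, b) prior on a coin's bias theta,
# with k heads observed in n flips (binomial likelihood).
a, b = 2.0, 3.0   # prior hyperparameters (arbitrary)
n, k = 20, 13     # data: n flips, k heads (arbitrary)

# Grid over the parameter space (avoiding the endpoints).
theta = np.linspace(0.001, 0.999, 5000)
dtheta = theta[1] - theta[0]

prior = beta.pdf(theta, a, b)        # f(theta)
likelihood = binom.pmf(k, n, theta)  # f(x | theta)

# Bayes' theorem for densities: posterior ∝ likelihood * prior,
# with f(x) recovered as the normalizing integral over theta.
unnorm = likelihood * prior
posterior = unnorm / (unnorm.sum() * dtheta)

# Conjugacy gives the posterior in closed form: Beta(a + k, b + n - k).
exact = beta.pdf(theta, a + k, b + n - k)
print(np.max(np.abs(posterior - exact)))
```

The printed maximum discrepancy should be small (limited only by the grid resolution), confirming that normalizing $f(x|\theta)f(\theta)$ reproduces the posterior density.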