Consider a coin. The coin has probability $Q$ of showing heads when tossed. The coin is tossed $N$ times and results in $H$ heads. First, what is the distribution of $H$ given $N$ and $Q$? This is just the Binomial distribution:
$$P(H | Q, N) = \frac{N!}{H!(N-H)!}Q^H(1-Q)^{N-H} \tag{1}$$
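As a quick sanity check of equation (1) (with hypothetical values $N = 10$, $Q = 0.3$), the expression does sum to 1 over $H = 0, \dots, N$:

```python
from math import comb

def binom_pmf(h, n, q):
    # Equation (1): N! / (H! (N-H)!) * Q^H * (1-Q)^(N-H)
    return comb(n, h) * q**h * (1 - q)**(n - h)

n, q = 10, 0.3  # hypothetical toss count and heads probability
pmf = [binom_pmf(h, n, q) for h in range(n + 1)]
print(sum(pmf))  # ~1.0, as a probability mass function must
```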
Next, what is the distribution of $Q$ given $N$ and $H$? This is a Beta distribution, and we have (replacing the Gamma functions in the Beta function with factorials, since $N$ and $H$ are integers here):
$$P(Q | N, H) = \frac{N!}{H!(N-H)!}Q^H(1-Q)^{N-H} \tag{2}$$
And we get the exact same expression as in equation (1). We can mechanically prove this with Bayes' theorem, sure (though I don't actually know how to do even that).
But the two expressions are different kinds of object: the first is a probability mass function and the second is a probability density function. It is very strange that they look the same. Is this suggestive of some general result for distributions when we hold some parameters fixed and derive the distributions of others? Or is it just a fluke in this particular case with the Binomial and Beta?
This is close to a classic example of what is called conjugacy in Bayesian statistics. In your case, you did not give a distribution for the random variable $Q \in (0, 1)$; that would be called a prior. In fact, it is customary to give $Q$ a beta prior, i.e. suppose $Q \sim \text{Beta}(\alpha, \beta)$ for known $\alpha, \beta > 0$. Now suppose $N \in \mathbb Z_+$ and $H \in \mathbb N$ is a random variable such that

$$ H \ | \ Q = q \sim \text{Binomial}(N, q). $$

Then something nice happens when you try to calculate $Q \ | \ H = h$: for $q \in (0, 1)$,

\begin{align*} p_{Q}(q \ | \ h) &= \frac{p_Q(q)\,p_H(h \ | \ q)}{p_H(h)}\\ &\propto_q p_Q(q)\,p_H(h \ | \ q) \\ &\propto_q q^{\alpha - 1}(1 - q)^{\beta - 1}\, q^h (1 - q)^{N - h} \\ &= q^{(\alpha + h) - 1}(1 -q)^{(\beta + N - h) - 1} \end{align*}

and this is the kernel of a $\text{Beta}(\alpha + h, \beta + N - h)$ density. So $[Q \ | \ H = h] \sim \text{Beta}(\alpha + h, \beta + N - h)$. Note that in each step of the derivation, I ignored constants that do not depend on $q$: you don't always need the exact constants to recognize the form (aka kernel) of a distribution.
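The derivation above can be checked numerically. Here is a sketch (standard library only, with hypothetical values for $\alpha$, $\beta$, $N$, $h$): normalizing prior $\times$ likelihood on a grid reproduces the $\text{Beta}(\alpha + h, \beta + N - h)$ density, even though the derivation never computed the normalizing constant.

```python
from math import gamma, comb

def beta_pdf(q, a, b):
    """Density of Beta(a, b) at q."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * q**(a - 1) * (1 - q)**(b - 1)

# Hypothetical numbers: prior Beta(alpha, beta), data h heads in N tosses.
alpha, beta, N, h = 2.0, 3.0, 10, 4

# Unnormalized posterior on a grid: prior(q) * likelihood(q).
qs = [i / 1000 for i in range(1, 1000)]
unnorm = [beta_pdf(q, alpha, beta) * comb(N, h) * q**h * (1 - q)**(N - h) for q in qs]
Z = sum(unnorm) / 1000  # crude Riemann-sum estimate of the normalizing constant

# The normalized grid values should match the Beta(alpha + h, beta + N - h) density.
post = [u / Z for u in unnorm]
claimed = [beta_pdf(q, alpha + h, beta + N - h) for q in qs]
print(max(abs(p - c) for p, c in zip(post, claimed)))  # small
```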
In other words, starting with a beta prior on $Q$ and binomial data, you have ended up with a beta posterior on $Q$. In the special case $\alpha = \beta = 1$, the prior on $Q$ is uniform, and the posterior is $\text{Beta}(h + 1, N - h + 1)$, whose density is $\frac{(N+1)!}{h!(N-h)!}q^h(1-q)^{N-h}$. So, strictly, your equation (2) is missing a factor of $N + 1$: the kernel (the $q$-dependence) in (1) and (2) is the same, but the normalizing constants differ, because (1) is a pmf in $H$ and (2) is a density in $Q$.
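To make the comparison with the question's two equations concrete, here is a sketch at hypothetical values of $N$, $h$, $q$, taking the uniform prior so the posterior is $\text{Beta}(h + 1, N - h + 1)$: the posterior density and the binomial pmf share the same $q$-kernel and differ only by a constant.

```python
from math import comb, factorial

N, h, q = 10, 4, 0.35  # hypothetical values
pmf = comb(N, h) * q**h * (1 - q)**(N - h)  # equation (1), binomial pmf in h
# Posterior density under the uniform prior: Beta(h + 1, N - h + 1)
pdf = factorial(N + 1) / (factorial(h) * factorial(N - h)) * q**h * (1 - q)**(N - h)
print(pdf / pmf)  # ratio is N + 1: same kernel, different normalizing constant
```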