A family $\mathcal{F}$ of sampling densities and a family $\mathcal{P}$ of prior distributions are said to be conjugate if for every $f \in \mathcal{F}$ and every $\pi(\theta) \in \mathcal{P}$, the posterior satisfies $\pi(\theta\mid x) \in \mathcal{P}$.
In practice, to determine whether two families are conjugate, textbooks just check that $f \pi \propto \Pi$ for some $\Pi \in \mathcal{P}$. This happens, for instance, for the Gamma and Poisson distributions.
Why is it enough to check it up to a constant?
Edit
Perhaps I should have given the full explicit example before.
If $X_1,\dots,X_n \sim \mathcal{P}(\theta)$ i.i.d. with prior $\theta \sim \mathrm{Gamma}(\alpha,\beta)$ (where $\beta$ is a scale parameter), one can show that
$$h(x,\theta) = f(x\mid\theta)\, \pi(\theta) = \frac{e^{-\theta (n+1/\beta)}\,\theta^{\,n\overline{x}+\alpha -1}\, I_{(0,\infty)}(\theta)}{\Gamma(\alpha)\,\beta^\alpha \prod_i x_i!},$$ which, as a function of $\theta$, is the kernel of a $\mathrm{Gamma}\big(n\overline{x}+\alpha,\,(n+1/\beta)^{-1}\big)$ density, but only up to a constant, because the factor $1/\big(\Gamma(\alpha)\beta^\alpha \prod_i x_i!\big)$ is not the normalizing constant of that Gamma.
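As a numerical sanity check of the "up to a constant" claim (my own sketch, with hypothetical values for $\alpha$, $\beta$, and the data), one can verify that the ratio of $h(x,\theta)$ to the proposed Gamma posterior density does not depend on $\theta$:

```python
import numpy as np
from scipy.stats import gamma, poisson

alpha, beta = 2.0, 1.5          # hypothetical prior hyperparameters
x = np.array([3, 1, 4, 2, 0])   # hypothetical Poisson observations
n, xbar = len(x), x.mean()

def h(theta):
    """Joint h(x, theta): Poisson likelihood times Gamma(alpha, scale=beta) prior."""
    lik = poisson.pmf(x, theta).prod()
    return lik * gamma.pdf(theta, a=alpha, scale=beta)

def post(theta):
    """Proposed posterior: Gamma(n*xbar + alpha, scale = (n + 1/beta)^{-1})."""
    return gamma.pdf(theta, a=n * xbar + alpha, scale=1.0 / (n + 1.0 / beta))

thetas = np.linspace(0.5, 5.0, 10)
ratios = np.array([h(t) / post(t) for t in thetas])
# The ratio is the same constant for every theta, so h is proportional
# to the proposed Gamma density in theta:
assert np.allclose(ratios, ratios[0])
```

The constant value of that ratio is precisely the marginal $m(x)$, which is what the normalization step divides out.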
Then I think one can say that $m(x) = h(x,\theta)/\pi(\theta\mid x)$, so that the marginal $m(x)$ absorbs whatever constant is needed for $\pi(\theta\mid x)$ to be that Gamma density. However, doesn't this put a restriction on the form of the Gamma?
Solution
This is simpler than I thought. The situation (for instance, in the example above) is that I get $f(x\mid\theta)\,\pi(\theta) = B\, g(\theta)$, where $g$ is a $\mathrm{Gamma}$ density and $B$ does not depend on $\theta$. Then, when I ask why the marginal normalizes this, the answer is straightforward: $m(x) = \int f(x\mid\theta)\,\pi(\theta)\,d\theta = \int B\, g(\theta)\,d\theta = B \int g(\theta)\,d\theta = B$, so dividing by $m(x)$ leaves exactly $g(\theta)$.
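This can also be checked numerically (again a sketch with hypothetical hyperparameters and data, not part of the original derivation): integrating the unnormalized product $f(x\mid\theta)\,\pi(\theta)$ over $\theta$ recovers exactly the constant $B$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma, poisson

alpha, beta = 2.0, 1.5          # hypothetical hyperparameters
x = np.array([3, 1, 4, 2, 0])   # hypothetical data
n, xbar = len(x), x.mean()

def joint(t):
    """Unnormalized posterior: likelihood times Gamma(alpha, scale=beta) prior."""
    return poisson.pmf(x, t).prod() * gamma.pdf(t, a=alpha, scale=beta)

def g(t):
    """Candidate posterior density: Gamma(n*xbar + alpha, scale = (n + 1/beta)^{-1})."""
    return gamma.pdf(t, a=n * xbar + alpha, scale=1.0 / (n + 1.0 / beta))

B = joint(1.0) / g(1.0)            # the constant factor B (theta-independent)
m_x, _ = quad(joint, 0, np.inf)    # marginal m(x) = ∫ joint dθ
# Since g integrates to 1, ∫ B·g(θ) dθ = B, so m(x) = B:
assert np.isclose(m_x, B)
```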
Remember how posterior distributions are found: multiply the prior by the likelihood and then normalize. If you can show that the posterior after normalization will be in the proposed family, then there is no need to carry out the actual normalization if that is all you are trying to show.