Let $X,Y$ and $Z$ be Borel spaces (that is, Borel subsets of Polish spaces) and let $\mathcal P(X)$ denote the Borel space of all Borel probability measures on $X$. For a measure $P\in \mathcal P(X\times Y\times Z)$ on the product space, let $\kappa:X\times Y\to\mathcal P(Z)$ be a regular conditional probability of $Z$ given $X\times Y$. We say that the measure $P$ is good if there exists a version of $\kappa$ that does not depend on $X$. Is it true that the set of all good measures is a Borel subset of $\mathcal P(X\times Y\times Z)$?
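To make the definition concrete, here is a minimal finite-space sanity check (the helper `is_good` is hypothetical, just for illustration): on finite spaces, $P$ is good exactly when, for each $y$, the conditionals $z\mapsto P(z\mid x,y)$ agree for all $x$ with $P(x,y)>0$, since the kernel is unconstrained off the support.

```python
from itertools import product

def is_good(P, Xs, Ys, Zs, tol=1e-12):
    """Check 'goodness' of a finite measure P: dict mapping (x, y, z) -> prob.

    P is good iff for every y the conditionals z -> P(z | x, y) coincide
    for all x with P(x, y) > 0, so that a version of the kernel can be
    chosen that does not depend on x.
    """
    pxy = {(x, y): sum(P.get((x, y, z), 0.0) for z in Zs)
           for x, y in product(Xs, Ys)}
    for y in Ys:
        ref = None  # reference conditional z -> P(z | x0, y) for this y
        for x in Xs:
            if pxy[(x, y)] <= tol:
                continue  # conditional not pinned down here; any version works
            cond = {z: P.get((x, y, z), 0.0) / pxy[(x, y)] for z in Zs}
            if ref is None:
                ref = cond
            elif any(abs(cond[z] - ref[z]) > tol for z in Zs):
                return False
    return True

# Z determined by Y alone: good.
good = {(x, y, y): 0.25 for x in (0, 1) for y in (0, 1)}
# Z determined by X: the kernel must depend on x wherever both x-values occur.
bad = {(x, y, x): 0.25 for x in (0, 1) for y in (0, 1)}
print(is_good(good, [0, 1], [0, 1], [0, 1]))  # True
print(is_good(bad, [0, 1], [0, 1], [0, 1]))   # False
```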
Edit: the previous solution attempt has been deleted, as it appeared to be incorrect. Still, I hope a similar idea would work: find a countable set of conditions on $P$, each of them measurable, such that $P$ is good iff it satisfies all of them. For example, maybe one can construct a countable system of functions $f_i$ (e.g. independent of $x\in X$) such that some condition on the integrals $\int f_i \,\mathrm dP$ characterizes goodness.
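In the finite case this strategy works: goodness is equivalent to the countable family of polynomial identities $P(x,y,z)\,P(x',y)=P(x',y,z)\,P(x,y)$ over all $x,x',y,z$, and each identity is a (continuous, hence Borel) condition on $P$ expressible through integrals of indicator functions, in the spirit of the $f_i$ above. A hedged sketch (the helper name `goodness_residuals` is made up):

```python
from itertools import product

def goodness_residuals(P, Xs, Ys, Zs):
    """Residuals |P(x,y,z)*P(x',y) - P(x',y,z)*P(x,y)| over all x, x', y, z.

    On finite spaces, P (a dict (x, y, z) -> prob) is good iff all residuals
    vanish; each residual is a polynomial, hence Borel, function of P, and
    there are only countably many of them.
    """
    pxy = {(x, y): sum(P.get((x, y, z), 0.0) for z in Zs)
           for x, y in product(Xs, Ys)}
    return [abs(P.get((x, y, z), 0.0) * pxy[(x2, y)]
                - P.get((x2, y, z), 0.0) * pxy[(x, y)])
            for x, x2, y, z in product(Xs, Xs, Ys, Zs)]

good = {(x, y, y): 0.25 for x in (0, 1) for y in (0, 1)}
print(max(goodness_residuals(good, [0, 1], [0, 1], [0, 1])))  # 0.0
```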
This is based on a previous version of the question.
Let $P$ be a measure on $X\times Y\times Z$ satisfying (1). We want to show that $P$ is good.
Let ${\bf 1}$ be a probability space with a single element. We can identify a measure with a kernel whose domain is ${\bf 1}$, and a measurable function with a Dirac-measure-valued kernel. Given kernels $\kappa_1,\dots,\kappa_n$ with a common domain, we can construct the kernel $(\kappa_1,\dots,\kappa_n)$ as their pointwise stochastically independent product. Also, we can compose kernels in a natural way. Most of these things are formulated in the theory of the category of probabilistic mappings.
In particular, since $P$ satisfies (1), we can treat the marginals $P_{XZ}$ and $P_Y$ as kernels and have $P=(P_{XZ},P_Y)$. Let $C:X\to\Delta(Z)$ be a regular conditional probability of $Z$ given $X$ under the marginal $P_{XZ}$. One can then rewrite the kernel $P_{XZ}$: there is a kernel $I$ from $X$ to $\Delta(X\times X)$ given by $I(x)=(x,x)$, and then a kernel $(I_X,C)$ from $X\times X$ to $\Delta(X\times Z)$, with $I_X$ the identity on $X$. So $$P_{XZ}=(I_X,C)\circ I\circ P_X.$$ We can then write $\kappa$ as $$\kappa=\pi_Z\circ (I_X,C)\circ I,$$ which is clearly independent of $Y$. I'm pretty sure this form of reasoning by abstract nonsense is correct, but it might be hard to translate into a conventional proof.
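For what it's worth, the kernel calculus above can be simulated on finite spaces; the following sketch (toy kernel `C` from $X$ to distributions on $Z$, all names invented) builds $(I_X,C)\circ I\circ P_X$ and checks that $\pi_Z\circ (I_X,C)\circ I$ is a kernel from $X$ alone:

```python
def compose(k2, k1):
    """Kernel composition k2 ∘ k1 for finite kernels a -> dict(b -> prob)."""
    def k(a):
        out = {}
        for b, p in k1(a).items():
            for c, q in k2(b).items():
                out[c] = out.get(c, 0.0) + p * q
        return out
    return k

def copy_kernel(x):          # I : x -> delta_{(x, x)}
    return {(x, x): 1.0}

def pair(kf, kg):            # (kf, kg), applied componentwise to (a, b)
    def k(ab):
        a, b = ab
        return {(u, v): p * q
                for u, p in kf(a).items()
                for v, q in kg(b).items()}
    return k

identity = lambda x: {x: 1.0}                               # I_X
C = lambda x: {0: 0.5, 1: 0.5} if x == 0 else {x: 1.0}      # toy kernel X -> Δ(Z)
P_X = lambda _: {0: 0.5, 1: 0.5}   # marginal, viewed as a kernel from a point

# P_{XZ} = (I_X, C) ∘ I ∘ P_X
P_XZ = compose(pair(identity, C), compose(copy_kernel, P_X))(None)
# κ = π_Z ∘ (I_X, C) ∘ I : a kernel defined on X alone, independent of Y
proj_Z = lambda xz: {xz[1]: 1.0}
kappa = compose(proj_Z, compose(pair(identity, C), copy_kernel))
print(P_XZ)
print(kappa(0), kappa(1))
```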
But a good measure need not have independent marginals $P_{XZ}$ and $P_Y$. A simple example is the uniform distribution on the diagonal $D=\{(x,x,x):x\in[0,1]\}$ of the unit cube. One can write the regular conditional probability as a function of the first coordinate only, yet the marginals are not independent.
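A discrete analogue of the diagonal example makes this explicit: the uniform measure on $\{(i,i,i)\}$ admits a kernel depending only on the first coordinate, yet differs from the product of its marginals $P_{XZ}$ and $P_Y$.

```python
n = 4
# Discrete analogue of the uniform measure on the diagonal of the cube.
P = {(i, i, i): 1.0 / n for i in range(n)}

# Wherever P(x, y) > 0 (only when x == y), the conditional of Z is the point
# mass at x: a version of the kernel depending on the first coordinate alone.
kappa = {(x, y): {x: 1.0} for x in range(n) for y in range(n)}

# Yet P_{XZ} ⊗ P_Y differs from P:
P_XZ = {(i, i): 1.0 / n for i in range(n)}
P_Y = {i: 1.0 / n for i in range(n)}
prod = {(x, y, z): P_XZ.get((x, z), 0.0) * P_Y[y]
        for x in range(n) for y in range(n) for z in range(n)}
# P assigns 1/n to (0, 0, 0); the product measure assigns only 1/n^2.
print(P[(0, 0, 0)], prod[(0, 0, 0)])  # 0.25 0.0625
```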