Background
I'm trying to grasp exercise 1.1 from chapter 2 in Allan Gut's "An Intermediate Course in Probability". In this exercise we:
Start with a 2-dimensional discrete random variable $(X,Y)$
Define the conditional probability function $p_{Y|X=x}(y)=p_{X,Y}(x,y)/p_X(x)$
Use the definition of the marginal $p_X(x)=\sum_y p_{X,Y}(x,y)$
We are asked to verify that $p_{Y|X=x}(y)$ indeed "is a probability function of a true probability distribution."
In Gut's presentation there are no properties that qualify such functions; he only gives the definition of a probability function in relation to the underlying probability space. Naïvely, I feel I must go back and build everything up from the probability space all the way to the probability function.
Question
Inspired by some previous answers on Math SE (although those concern continuous pdfs over $\mathbb R$), I'm inclined to only check that
1. the total probability is 1, i.e. $\sum_{y}p_{Y|X=x}(y) = 1$;
2. the probability function is non-negative, i.e. $p_{Y|X=x}(y)\geq 0$ for all $y$.
Would these two conditions suffice in Gut's exercise above?
If they do, why don't we have to re-specify the whole probability measure and so on?
If they don't, what is the proper criterion to check?
In this answer I will try to explain why it is enough to check 1) and 2).
Lemma:
If $S$ is a countable set and $p:S\to\mathbb R$ is a nonnegative function that satisfies $\sum_{s\in S}p(s)=1$, then the function $P:\wp(S)\to\mathbb R$ prescribed by $$A\mapsto\sum_{s\in A}p(s)$$ is a probability measure.
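For reference, the only step that needs real care is countable additivity; nonnegativity of $P$ and $P(S)=1$ are immediate from the assumptions on $p$. For pairwise disjoint sets $A_1,A_2,\dots\subseteq S$ one computes
$$P\Big(\bigcup_{n\ge 1} A_n\Big)=\sum_{s\in\bigcup_{n} A_n} p(s)=\sum_{n\ge 1}\sum_{s\in A_n} p(s)=\sum_{n\ge 1} P(A_n),$$
where regrouping the (at most countable) sum is justified because all terms are nonnegative, so the series may be rearranged freely.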
It is a good exercise to prove this yourself, and once you have done it you never need to repeat the argument for special cases.
That's why it is enough to check just 1) and 2): apply the lemma with $S$ the (countable) set of possible values of $Y$ and $p = p_{Y|X=x}$, and the remaining conditions follow tacitly.
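To make this concrete, here is a small numerical sanity check in Python. The joint pmf below is made up purely for illustration; the check confirms that the conditional probability function $p_{Y|X=x}$ derived from it satisfies conditions 1) and 2) for every $x$.

```python
# Hypothetical joint pmf p_{X,Y} on {0,1} x {0,1,2} (values are illustrative).
p_xy = {
    (0, 0): 0.10, (0, 1): 0.20, (0, 2): 0.10,
    (1, 0): 0.15, (1, 1): 0.25, (1, 2): 0.20,
}

xs = {x for x, _ in p_xy}
ys = {y for _, y in p_xy}

for x in xs:
    p_x = sum(p_xy[(x, y)] for y in ys)          # marginal p_X(x)
    cond = {y: p_xy[(x, y)] / p_x for y in ys}   # p_{Y|X=x}(y)
    assert all(v >= 0 for v in cond.values())    # condition 2): nonnegative
    assert abs(sum(cond.values()) - 1) < 1e-12   # condition 1): sums to 1

print("conditions 1) and 2) hold for every x")
```

Note that both conditions come for free from the definition: nonnegativity because we divide a nonnegative number by a positive one, and the sum being 1 because $\sum_y p_{X,Y}(x,y) = p_X(x)$ by the definition of the marginal.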