Attached is an extract from the book A First Course in Probability by Sheldon Ross. I am struggling to understand the flow of his logic at the place highlighted in red.
For example, let $n = 6$, $k = 4$. Then there are 4 possible events $A_i$ with $i = 1, 2, 3, 4$, provided we do not distinguish among the non-special balls, i.e. their order does not matter. Also, in the case of each event we end up with 4 balls. How does the author arrive at the conclusion that $P(A_1) = P(A_2) = P(A_3) = P(A_4) = 1/6$? I do not understand what he means by “since each one of the $n$ balls is equally likely to be the $i$th ball chosen”.
Is there a way to reach this conclusion systematically? If so, what does it look like? Or could you please help me understand what the author meant? Thanks.
The probability of drawing any particular ball on a given draw is the same as that of drawing any other, as stated at the beginning of the exercise.

Call this probability $p$ and say there are $6$ balls in the urn. Since the probabilities are all equal and all probabilities over the sample space must sum to $1$, we have
$$p+p+p+p+p+p=1$$
$$6p=1$$
$$p=\frac16$$
This works for any number $n$ of balls in the urn:
$$\sum_{i=1}^np=1$$
$$np=1$$
$$p=\frac1n$$
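The claim “each one of the $n$ balls is equally likely to be the $i$th ball chosen” can also be checked by brute force (this is my own sanity check, not from Ross): treat a sequence of draws without replacement as a random ordering of the $6$ balls, enumerate all $6!$ equally likely orderings, and count how often a given ball lands in a given position.

```python
from itertools import permutations
from fractions import Fraction

n = 6
# All 6! = 720 equally likely orders in which the balls can be drawn.
orderings = list(permutations(range(n)))

# For every ball b and every draw position i, the fraction of orderings
# in which b is the i-th ball chosen should be exactly 1/n.
for b in range(n):
    for i in range(n):
        count = sum(1 for order in orderings if order[i] == b)
        assert Fraction(count, len(orderings)) == Fraction(1, n)

print("P(ball b is the i-th draw) = 1/6 for every b and every i")
```

Each (ball, position) pair occurs in exactly $5! = 120$ of the $720$ orderings, which is $1/6$ by symmetry; this is precisely the systematic justification behind the sentence highlighted in red.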