I have two distributions, $p_{0}$ (under $H_{0}$) and $p_{1}$ (under $H_{1}$), giving the probabilities of each value $x$ of $X$.
$$\begin{array} {|c|c|c|c|c|c|c|c|} \hline x &0 &1 &2 &3 &4 &5 &6 \\ \hline p_{0} &0.3 &0.2 &0.1 &0.1 &0.1 &0.1 &0.1 \\ \hline p_{1} &0.1 &0.1 &0.1 &0.1 &0.2 &0.1 &0.3 \\ \hline \end{array} $$
$H_{0}$ is rejected when $X = 0$ or $1$.
So my Type I error probability would be $0.3 + 0.2 = 0.5$.
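That arithmetic can be sanity-checked with a short sketch (the variable names here are my own, not part of the problem):

```python
# Probabilities of X under H0, from the table above.
p0 = {0: 0.3, 1: 0.2, 2: 0.1, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.1}

# H0 is rejected when X = 0 or 1.
rejection_region = {0, 1}

# Type I error: P(reject H0 | H0 true) -- sum the p0 row over the region.
type1_error = sum(p0[x] for x in rejection_region)
print(type1_error)
```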
What is the Type II error probability? On a simplified success/failure table I'd get something like this:
$$\begin{array} {|c|c|c|} \hline &\text{Reject } H_0 &\text{Fail to reject } H_0 \\ \hline H_{0} &0.5 &0.5 \\ \hline H_{1} &0.2 &0.8 \\ \hline \end{array} $$
and I'd end up with $0.5 \times 0.2 = 0.1$ as the Type II error probability.
But if I stick with the original table and use $1-p_{1}$ as the values, I'd get something like $$0.1 \times 0.9 + 0.1 \times 0.9 + 0.1 \times 0.8 + 0.1 \times 0.9 + 0.1 \times 0.7 = 0.42$$
Which one is correct?
The Type II error is $$\Pr[\text{Fail to reject } H_0 \mid H_1 \text{ true}].$$ In other words, when you should have rejected the null, you didn't.
When $H_1$ is true, you read the probabilities from the $p_1$ row of your table. The rejection criterion is $X \in \{0,1\}$. So: what is the probability of NOT meeting the rejection criterion, reading from the $p_1$ row?
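In code, that reading of the $p_1$ row looks like this (a minimal sketch; the names are my own):

```python
# Probabilities of X under H1, from the table in the question.
p1 = {0: 0.1, 1: 0.1, 2: 0.1, 3: 0.1, 4: 0.2, 5: 0.1, 6: 0.3}

# The rejection criterion: reject H0 when X is 0 or 1.
rejection_region = {0, 1}

# Type II error: P(fail to reject H0 | H1 true) -- sum the p1 row
# over everything OUTSIDE the rejection region.
type2_error = sum(p for x, p in p1.items() if x not in rejection_region)
print(type2_error)
```

Note that neither $0.1$ nor the product-of-complements sum plays a role here: the Type II error is a single sum over one row of the table, not a product across outcomes.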